Vaidehi Agarwalla 🔸

Founder @ Pineapple Ops
7608 karma
vaidehiagarwalla.com

Bio

I care about making funding for high impact causes more robust & diversified. I'm based in the Bay Area, advise Asia-based community builders and run Pineapple Ops. I previously worked in consulting, recruiting and marketing, with a BA in Sociology and focused on social movements. (A little on my journey to EA)

Unless otherwise stated, I always write in a personal capacity.

/'vɛðehi/ or VEH-the-hee

Some posts I've written and particularly like: 

Advice I frequently give:

I'm always keen to hear feedback: admonymous.co/vaidehiagarwalla

How others can help me

If you feel I can do something (anything) better, please let me know. I want to be warm, welcoming & supportive - and I know I can fail to live up to those standards sometimes. Have a low bar for reaching out - (anonymous form here). 

If you think you have different views to me (on anything!), reach out - I want to hear more from folks with different views to mine. If you have deep domain expertise in a very specific area (especially non-EA), I'd love to learn about it!

Connect me to fundraisers, product designers, people with ops & recruiting backgrounds and potential PA/ops folks! 

How I can help others

I can give specific feedback on movement building & meta EA project plans and career advising. 

I can also give feedback on posts and grant applications. 

Sequences
6

Operations in EA FAQs
Events in EA: Learnings & Critiques
EA Career Advice on Management Consulting
Exploratory Careers Landscape Survey 2020
Local Career Advice Network
Towards A Sociological Model of EA Movement Building

Comments
698

Topic contributions
69

Some norms I would like to see when folks use LLMs substantively (not copy editing or brainstorming):

  1. explaining the extent of LLM use

  2. explaining the extent of human revision or final oversight

  • ideally I'd love to know section by section. Personally I'd prefer it if authors only included sections they have reviewed and fully endorse, even if this means much shorter posts

  3. not penalizing people who do 1) and 2) for keeping AI-speak or style

  • I think this unfairly discriminates against people who are busy, weak in writing and/or English. I personally dislike it, but don't think it's fair to impose this on others

If this tool could be used by orgs I think it would be super useful. E.g. analyzing the HIP talent directory or inbound applications against a job you're recruiting for. Any chance you could make the prompts/tool available to orgs as well?

Thanks for this write-up! It was really insightful. A few questions:

People who apply to found an NGO come with all sorts of motivations.

Could you say more about what motivations they come with? 

As this is a regional program, we couldn’t have a cohort composed entirely of only 4 countries, even though several were outstanding candidates.

Based on my experience working in India, I've seen a lot of benefits of having multiple orgs working in the same geographies at the same time/stage to share resources, advice, talent, etc. Curious what you were limited by here / what factors went into this decision (e.g. I imagine you could have branded this as a region-wide program with a focused initial cohort, with a plan to do focused outreach into other geographies later).

Finally

A common refrain in EA is that the broader social sector doesn’t care about impact, and that good intentions are their only north star.

I have not heard this sentiment stated quite so strongly in EA, but if it is held, I'd also like to strongly disagree! After years of working with dozens of nonprofit fundraisers all over the US, I am confident that people do care about impact - they care a lot about effectiveness and using their limited time and resources efficiently. In fact, many switch into fundraising from programmatic roles because their organisation needed it and they saw it as important. The main difference is that they aren't prioritising EA causes, but I don't think that can be chalked up to good intentions.

Recent example - an op-ed arguing the AI safety pipeline has too many researchers is labelled Community, while a post advocating for more AI field building by an OP grantmaker is not.

Thanks for flagging those! I think my original point probably still stands regarding more projects for different species etc., since the incubated projects are from the existing recommended ideas list.

I noticed that AIM has recommended 0 aquatic animal charities in 2026, 1 in 2025 and 0 in 2022-2024.

Curious if either of you (or AIM researchers) have thoughts on why that is, and to what degree it indicates a lack of high-EV, foundable (e.g. the talent exists) projects in the space.

Have not thought about compounding returns to orgs! I can think of some concrete examples with AIM ecosystem charities (e.g. one org helping bring another into creation, or creating a need for others to exist). Food for thought.

Curious how you see communitarianism playing out in practice?

There's definitely a cooperative side to things that makes it a lot easier to ask for help among EAs than within the relevant professional groups someone might be part of, but I'm not sure I'm seeing obvious implications.

So keen to hear from the disagree-voters (currently about 21% of votes, 5/23) on which parts folks disagree with!
