Introducing Rethink Priorities’ Cross-Cause Prioritization Series
On how to do as much good as possible with charitable dollars
This is the first in a series of posts on how to compare interventions across cause areas such as global health, animal welfare, and AI risk, and what that comparison suggests for how you should allocate your donations. Over the coming weeks, I’ll walk you through the approach Rethink Priorities has been building, the tools we’ve developed to help donors make more informed decisions, and some tentative conclusions. If you’d like to follow along, subscribe to get each post as it comes out.
If you care about doing as much good as possible with your charitable dollars, how do you decide which cause to fund? We’ve been working on a rigorous answer.
There are many worthy causes to donate to. You might give to global health, where a donation to the Against Malaria Foundation is estimated to save a life for $3,000–$5,000. You might focus on animal welfare, given that roughly ten times more land animals are raised and slaughtered every year than there are humans alive, often in horrifying conditions. Or you might aim to reduce catastrophic risks from AI or pandemics, where the expected impact could exceed both of the above, assuming the interventions we pick are effective. But how do you choose?
This is a question about cause prioritization, and it’s harder than it looks.
That’s what this series is about. Rethink Priorities has spent the last several years building out what we think is the most systematic approach to comparing interventions across cause areas that currently exists. Over the course of this series, I want to walk through what that work actually looks like, why we built our models for cause prioritization, and what the models suggest.
Why prioritization matters
Toby Ord famously argued that some global health interventions are up to 15,000 times more cost-effective than others, meaning that millions more lives could be saved by prioritizing the most promising ones.
But some have argued that work in other cause areas could be significantly more impactful. For example, as I mentioned before, farmed animals vastly outnumber humans, and wild animals vastly outnumber farmed animals. And since the long-term future could contain trillions of lives, actions that preserve or improve it could also be enormously effective.
And it is not merely the case that some interventions are many times more effective than others. Many well-intentioned actions may do nothing or actively make things worse. Prioritization is therefore vital not only to do good better, but to do good at all.
The hurdle: most decisions about how to allocate resources across causes are made on a surprisingly unsystematic basis
The typical approach is to identify a promising-sounding cause area, based on something like: it affects a lot of beings (scale), it seems solvable (tractability), and not many people are working on it (neglectedness). These are reasonable starting points. But they’re rough assessments of broad, heterogeneous areas. A cause area can score well on all three and still contain a huge range of interventions, some highly cost-effective and some not, and some of which might even be net-negative.
To move from “this cause area looks promising” to “this is how I should actually allocate my money,” you need to compare the cost-effectiveness of specific interventions across cause areas. That requires a lot of work that, honestly, very few organizations in the EA research ecosystem have tried to do.
GiveWell does rigorous cost-effectiveness analysis for global health and development charities. Animal Charity Evaluators does similar work in the field of animal welfare. But there’s no GiveWell-like group trying to compare those two domains head-to-head, let alone against AI risk or other cause areas. And so the decisions that happen across causes, which are not small decisions, often get made through intuition, theoretical arguments, or a kind of rough, vibes-based allocation between cause buckets.
At Rethink Priorities, we think that can be done better.
What makes cross-cause prioritization hard (and why that matters)
I want to be upfront: doing this rigorously is genuinely difficult, and I don’t want to oversell what we’ve figured out.
A few of the specific challenges are worth flagging now:
The cost-effectiveness of an intervention isn’t static. The impact of your next dollar depends on how much funding an intervention already has. Models that look at average cost-effectiveness miss this.
Comparing interventions across causes requires comparing moral weights across species. How does preventing a chicken from cage confinement compare to preventing a case of malaria? This isn’t a question that has an uncontroversial empirical answer. It depends on how you value different kinds of lives and experiences, which depends on ethical views that are themselves deeply uncertain.
There are also questions about how to handle risk, how to think about impact that happens far in the future, and how to aggregate across different moral frameworks when you’re uncertain which one is right.
These are not problems you can just ignore. If your decision model doesn’t address them, it’s still making implicit choices about them. Everyone is modeling, even if they think they aren’t. We think the better approach is to make those choices explicit, model the uncertainty behind them, and see how sensitive your conclusions are to different assumptions.
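To make the “everyone is modeling” point concrete, here is a toy sketch of what making those choices explicit can look like. All of the numbers and the functional forms are invented for illustration (this is not Rethink Priorities’ model): we treat the moral weight of a hypothetical animal-welfare intervention as an uncertain parameter, let marginal cost-effectiveness decline as existing funding grows, and then check how often each option comes out ahead across many draws.

```python
import math
import random

def marginal_ce(base, funding, saturation):
    # Toy diminishing-returns curve: the value of the next dollar
    # decays exponentially as existing funding grows. The real
    # relationship would be estimated, not assumed.
    return base * math.exp(-funding / saturation)

def share_favoring_animal(n=10_000, seed=0):
    # Monte Carlo over an uncertain moral weight (human-equivalent
    # units per animal helped). Every figure below is made up purely
    # to illustrate the structure of the comparison.
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        weight = 10 ** rng.uniform(-3, 0)  # log-uniform on [0.001, 1]
        health = marginal_ce(base=1.0, funding=50e6, saturation=100e6)
        animal = weight * marginal_ce(base=100.0, funding=5e6, saturation=100e6)
        wins += animal > health
    return wins / n

print(f"{share_favoring_animal():.0%} of draws favor the animal intervention")
```

The point of a sketch like this isn’t the answer it spits out; it’s that the moral weight, the funding levels, and the returns curve are all visible parameters you can interrogate, rather than intuitions buried in a gut feeling.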
What this series will cover
Over the coming weeks, I’m going to walk through how Rethink Priorities has tried to tackle this. The series will include:
A look at the common pitfalls in cross-cause prioritization, including the “bucket” problem of pre-allocating to causes before comparing specific interventions
A more detailed case for why explicit modeling is worth the effort, even when inputs are uncertain
A walkthrough of the Donor Compass tool we’ve built, which lets people plug in their own values and uncertainty ranges to see what allocations those suggest
Some tentative recommendations, with appropriate caveats, about what our model currently points to
I’ll also be sharing a conversation from a recent podcast where I discuss some of this more informally; that’s coming out in the next few days, so if you’re not subscribed to this Substack yet, now is a good time.
The goal of the series isn’t to tell you what to believe. It’s to show you what a more rigorous version of these decisions could look like, and let you update from there.
I think this work matters. I also think it’s incomplete, and probably wrong in some ways we haven’t identified yet. But I’d rather be transparent about a serious attempt than quietly confident about something that doesn’t hold up to scrutiny.
More soon.


Thanks for sharing, Marcus.
"I think this work matters. I also think it’s incomplete, and probably wrong in some ways we haven’t identified yet. But I’d rather be transparent about a serious attempt than quietly confident about something that doesn’t hold up to scrutiny."
I like this spirit. I wonder whether you accounted for effects on soil animals (https://forum.effectivealtruism.org/topics/soil-animals). One of the "key takeaways" from Rethink Priorities' (RP's) work on risk aversion was that "Spending on corporate cage-free campaigns for egg-laying hens is robustly[8] cost-effective under nearly all reasonable types and levels of risk aversion considered here". However, I think such campaigns can easily increase or decrease animal welfare accounting for effects on soil ants and termites (https://forum.effectivealtruism.org/posts/BnDQRikxE6hbJ3GRB/chicken-welfare-reforms-may-impact-soil-ants-and-termites). So I suspect they perform worse than inaction under moderate levels of any type of risk aversion you considered (“avoiding the worst” risk aversion, difference-making risk aversion, and ambiguity aversion).