We Should Be Uncertain about Cause Prioritization in Philanthropy
The interesting decisions about cross-cause prioritization rely on a lot of shaky philosophical judgments
This is a short version of an argument made at greater depth in We should be more uncertain about cause prioritization based on philosophical arguments on the EA Forum.
The philosopher Charles Mills once joked that while physicists and chemists do actual experiments, philosophers do not. Instead, in philosophy when you want to do an experiment you “assume the experimental position” of placing your hand on your forehead for 30-60 minutes and then simply declare the results of your thought experiment. One advantage of this approach for philosophy is how much money it saves compared to actual experiments. Unfortunately for philosophy, when it comes to how confident we should be in the results of such experiments, sometimes you get what you pay for.
The basic effective altruist case is strong, cross-cause prioritization not so much
It’s relatively easy to argue, all else equal, that it’s better to save 100 lives rather than 10, or that interventions with robust evidence of effectiveness are more appealing than those without such evidence. It’s harder (but still relatively easy) to argue that, when spending charitable dollars, it’s better to save 100 lives than to spend the same amount of money exposing 100 people to art for one hour each. Not everyone is on board with these claims1 but they are very difficult to argue against. These are largely taken as background considerations within effective altruism (EA). Importantly, these points aren’t particularly contingent on highly contentious philosophical claims and can be endorsed by people with different views about normative ethics (i.e. which ethical theories to use, like deontology and utilitarianism), how to value present vs. future people (population ethics), which decision theory to use, how to compare human and animal welfare, and how to deal with moral uncertainty.
However, some of the main topics of EA concern, such as weighing how causes (like global health and animal welfare) or interventions (say, malaria nets vs. corporate campaigns to improve hen welfare) compare to each other, do not turn on uncontroversial questions like whether saving 100 lives is more valuable than saving 10. Rather, I think EA cause prioritization decisions often rest on (implicit or explicit) philosophical considerations that are much tougher to reach with high confidence. This is because deciding between, say, malaria prevention and pandemic prevention often necessarily involves, among other things, normative views, decision theories, and population ethics. Not only can it be very difficult to compare really disparate outcomes using one unified framework, but the outcomes of these comparisons are also often heavily theory-laden. And even within a single broad theory, the outcomes can be fragile to changes in the assumptions or approach used, as Rethink Priorities’ Worldview Investigations team pointed out in their recent post on the different types of cause prioritization.
Because we’re uncertain about which philosophical theories are true (how much weight should we put on virtue ethics vs. common sense vs. consequentialism?), some have proposed taking that uncertainty seriously by using different aggregation methods to combine the views you are uncertain about. You can think of aggregation methods as different ways of solving a complex puzzle: how you choose to solve the puzzle can significantly change the final picture, even if the puzzle pieces (your fundamental beliefs) stay the same. There’s an entire discipline devoted to how to combine opinions when there’s no unanimous consensus on what to do. Yet different methods can come to radically different outcomes, leaving even clever approaches to dealing with object-level uncertainty unlikely to produce a single takeaway about what you should do in practice.
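To make that concrete, here is a minimal sketch in Python of two aggregation methods discussed in the moral uncertainty literature, “My Favorite Theory” and “Maximize Expected Choiceworthiness.” Every number below is made up purely for illustration, and the assumption that the theories’ scores sit on a shared scale is itself philosophically contested:

```python
# Two aggregation methods applied to the same (entirely hypothetical)
# credences and choiceworthiness scores.

credences = {  # how much you believe each theory; sums to 1
    "utilitarianism": 0.40,
    "deontology": 0.35,
    "virtue_ethics": 0.25,
}

# Each theory's "choiceworthiness" score for two candidate actions, on a
# shared scale. (That such a shared scale exists is itself contested.)
choiceworthiness = {
    "utilitarianism": {"malaria_nets": 10, "pandemic_prep": 12},
    "deontology":     {"malaria_nets": 9,  "pandemic_prep": 4},
    "virtue_ethics":  {"malaria_nets": 8,  "pandemic_prep": 5},
}

def my_favorite_theory():
    """Act on whichever single theory you find most credible."""
    favorite = max(credences, key=credences.get)
    scores = choiceworthiness[favorite]
    return max(scores, key=scores.get)

def maximize_expected_choiceworthiness():
    """Weight each theory's scores by your credence in it, then pick the
    action with the highest credence-weighted score."""
    actions = {a for scores in choiceworthiness.values() for a in scores}
    expected = {
        a: sum(credences[t] * choiceworthiness[t][a] for t in credences)
        for a in actions
    }
    return max(expected, key=expected.get)

print(my_favorite_theory())                  # pandemic_prep
print(maximize_expected_choiceworthiness())  # malaria_nets (9.15 vs. 7.45)
```

Same beliefs, different way of assembling the puzzle, different recommendation; and nothing in the setup tells you which method is the right one.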

Even real scientific experiments are often unreliable
Actual experiments are often verifiable through additional testing. Just as importantly, they are falsifiable. You can get quite definitive evidence one way or another about whether, say, a new cancer treatment really works through repeated testing and by randomizing who receives the new treatment versus the existing best treatment. But even such real experiments can be misleading.
You’ve surely seen headlines proclaiming that a new treatment works, only for this to later turn out to be wrong or overstated. Researchers of all kinds know this. As a result, most researchers and savvy news consumers have adopted a default skepticism toward new evidence of anything that hasn’t been thoroughly confirmed.
Indeed, for empirical studies, it is common for an initial study claiming a causal relationship to be subsequently complicated or nuanced by follow-up investigations. Holden Karnofsky’s Does X Cause Y? An in-depth evidence review blogpost effectively illustrated this dynamic. The strength of the evidence philosophy produces in the action-relevant cases of cross-cause prioritization is markedly weaker than the types of quantitative evidence that exist in the domain Karnofsky was critiquing, and certainly weaker than the randomized controlled trials Karnofsky says he likes to rely on.
The philosophy underpinning cause prioritization is much less reliable
Philosophical thought experiments, and philosophy more generally, presume to have the same kind of structure as science: we generate new ideas, test them, and either revise them or proclaim some new conclusion. But unlike science, which restricts its purview to what is in principle observable by third parties, there’s rarely any way to definitively prove or disprove conclusions in the much broader domain of philosophy. Decades or even centuries can go by without definitive evidence coming in, basically because in some key areas that type of evidence does not or could not exist.2 Generally, I submit, philosophy moves slowly, and even when it converges on an answer, or a series of answers, it takes robust, prolonged discussion for that to happen. To a rough approximation, you could say that, historically, most philosophy has been wrong or at least misguided. In the face of that kind of track record, I would argue that the best default attitude toward any arbitrary philosophical position should be skepticism, unless the position is extraordinarily well mapped out.
And while decades of investigation have been conducted on many of the major philosophical debates, there hasn’t been widespread, prolonged debate on several issues central to EA. Interspecies comparisons of moral weight, aggregation methods across normative theories, the correct philanthropic discount rate, and many other topics remain largely niche issues.
Even where there is robust debate on these topics, you often end up having to answer questions like: “Which is better, creating a world with trillions of people with lives barely worth living, or a world with 10 billion people absolutely thriving?” or “Which is worse, having the best action today depend on facts about the lives of people in ancient Egypt, or predictably getting victimized by a mugger who claims if you don’t give him your money he’ll kill a quintillion people?”
There may be right answers to these questions, but we probably shouldn’t be 75% confident in either possible outcome, let alone 90% confident. Having strong opinions about EA cause prioritization depends on thinking there are such definitive answers, despite the underlying evidence heavily consisting of tests of our intuitions through thought experiments like these.
In general, weighing up which theories or positions survive an intellectual obstacle course of thought experiments doesn’t look like the kind of evidence that can justify high confidence that we’ve ultimately reached the correct conclusion in these domains. This kind of evidence is almost certainly weaker than the kind of experimental evidence that many people are already skeptical of in science. If you wouldn’t bet a public policy decision on a small randomized controlled trial that hasn’t been confirmed, you should likely be skeptical of betting much of your philanthropic decision-making on philosophically contentious claims about decision theory or population ethics.
This doesn’t mean we can’t reach conclusions about what to do at all
To do the hard work of setting priorities in philanthropy, we need action-relevant evidence—which often means acting on significant uncertainties. Still, if the differences between the best charities and the rest are large enough—and they are!—we can make progress.
For example, many charities that are less efficient at saving lives, improving happiness, or creating a just society than alternatives working on the same problems will still be strongly disfavored even by an approach that leans only modestly on philosophical conclusions in controversial areas. Basically, a charity that accomplishes the same goals as another group but less efficiently will be dominated even under this approach, as the toy sketch below illustrates. The Against Malaria Foundation is still better than United to Beat Malaria at saving lives with anti-malarial nets.
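Here is a minimal sketch of why this kind of dominance doesn’t depend on resolving the philosophical disputes. The theories and scores are again hypothetical: the point is only that if one charity does at least as well as another under every theory you give any weight to, no assignment of credences can reverse the ranking.

```python
# A toy dominance check: if charity A scores at least as well as charity B
# under every theory (and strictly better under at least one), then any
# credence-weighted average with positive weight on each theory must rank
# A above B. All scores are hypothetical.

scores = {
    "utilitarianism": {"charity_a": 10, "charity_b": 6},
    "deontology":     {"charity_a": 7,  "charity_b": 7},
    "virtue_ethics":  {"charity_a": 9,  "charity_b": 5},
}

def dominates(a: str, b: str) -> bool:
    """True if a is at least as good as b under every theory and strictly
    better under at least one."""
    at_least_as_good = all(s[a] >= s[b] for s in scores.values())
    strictly_better = any(s[a] > s[b] for s in scores.values())
    return at_least_as_good and strictly_better

print(dominates("charity_a", "charity_b"))  # True
```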
While I understand the worry that this line of argument means you should throw up your hands, there are no plausible ethical theories under which you should donate to art exposure for rich students in the US rather than save lives for the same amount of money. Not being certain when decisions are hard is not the same as not being certain about anything at all.
I think a more accurate reading of the concern about philosophical evidence being weak is something like “among plausible charitable targets, like those that rise to the top tier on careful consideration under plausible ethical theories, it’s difficult to draw strong conclusions favoring one intervention or cause area over another” instead of “you typically can’t draw strong conclusions about charity X over charity Y working on the same problem”.
How big a problem is this for donating to charity based on evidence?
In some sense this isn’t a problem. It’s fine and healthy to have ongoing debate in philosophy. But this is a big problem if you want to stake out strong claims about what specific charitable interventions, policies, or broad areas of action are the best of all options to focus on.
We can still do far more good than we would without an EA approach by thinking carefully about the scale of our potential actions, taking evidence seriously, and generally treating charity with the same respect we would other potentially life-saving actions.
There are times we can’t be very certain, and that’s OK.
What charities are really great given plausible assumptions? If you are looking to help the global poor and save human lives, you’d be hard pressed to beat giving to the Against Malaria Foundation, one of GiveWell’s Top Charities. If you are looking to donate to help the most animals you can, you’d be similarly hard pressed to beat Animal Charity Evaluators’ recommended charities.
What is the best overall action to take to improve the world, considering all possible recipients and different plausible philosophical presumptions? No one knows. Accept mystery. Reality is complicated.
Indeed, if everyone were on board with this approach, “the EA approach” wouldn’t really be considered a distinct thing. If you’re interested in a polemical explanation of why you should take this approach at all, see this piece by Dylan Matthews on why saving the lives of kids should take precedence over art.
Imagine seeing the headline: “New meta-analysis of thought experiments conclusively shows utilitarianism is true.” More seriously, it’s unclear what it would even mean to definitively prove that there is a correct normative theory. Something akin to a formal mathematical proof seems like a category mistake and nothing remotely like a scientific theory backed by careful, repeated randomized experiments makes sense either.

