2020-07-25

EA ideas 1: rigour and opportunity in charity

2.2k words (8 minutes)

Effective altruism (EA) is about trying to carefully reason how to do the most good. On the practical side, EA has inspired the donation of hundreds of millions of dollars to impactful charities, and led to many new organisations focused on important causes. On the theoretical side, it has led to rigorous and precise thought on ethics and how to apply it in the real world.

The intellectual work that has come out of EA is valuable, especially in two ways.

First, much EA work is exceptional in the breadth and weight of the matters it considers. It is interdisciplinary, including everything from meta-ethics to interpreting studies on the effectiveness of vaccination programs in developing countries. Because of its motivation – finding and exploring the most important problems – it zeros in on the weightiest issues in any particular area. EA work is a goldmine of interesting writing, particularly if you find yourself drawn in a discipline-agnostic way to all the biggest questions.

Second, EA writing often has a scientific precision of argument that is missing from discussions of abstract things (e.g. meta-ethics) or emotionally charged issues (e.g. saving lives).

This post explains the motivations behind EA, and ends with a table of contents for this post series.


Altruism, impartial welfarist good, and cause neutrality

I will have more to say in a later post about specific philosophical issues in defining what is moral. For now I will hope that the idea of an impartial welfare-oriented definition of good is sufficiently defensible that I will not be mauled to death by moral philosophers before that post (though if it doesn’t happen by then, it will certainly happen afterwards).

Impartial (in the sense of considering everyone fairly, and giving the same answer regardless of who’s doing the judging) and welfare-oriented (in the sense of valuing happiness, meaning, fulfilment of preferences, and the absence of suffering) good is an intuitive and fairly unobjectionable idea. Yet if we take it as a goal, it points towards a different idea of charity than the current norm.

Most charities are single-issue charities. This generally makes sense: better to have one organisation be really good at distributing malaria nets and one really good at advocating for taking nuclear weapons off high alert, than to have one organisation doing a mediocre job at both (malaria net delivery via ICBM?).

But the siloing of causes often goes further. If the effectiveness of an intervention is considered, it is often after choosing a cause area. To weigh cause areas against each other, to judge the needs of African children against, say, factory farmed pigs, seems like a faux pas at best, and a sin at worst (for a particularly incendiary tirade on the topic, see this article).

However, if we hold ourselves to an impartial welfarist idea of good, this judgement must be made. An artist might choose what to paint based on how they want to express themselves or on a sudden flash of inspiration. A would-be altruist refusing to weigh causes against each other and instead selecting them on the basis of passion or inspiration is acting like our artist. In the artist’s case it doesn’t matter, but the altruist, in doing so, implicitly values their own choice and/or self-expression over the good that their actions might do. This is not altruism by our definition of good.

Of course, people differ in their knowledge and talents, and these tend to align with inspiration. In the real world, it may well be that your greater ability, drive, and/or knowledge in one area outweighs the greater efficiency at which results convert to goodness in some other area. We will also see arguments for not placing all our bets on the same cause, and explore the enormous uncertainties that come in trying to compare causes. But the idea of cause-neutrality – that causes are comparable, and that making these comparisons is an important part of the job of any would-be altruist – remains.


Effectiveness

Focusing on the idea of impartial welfarist good also makes it clear that, in trying to do good, we should focus on the good our actions result in. This may seem like an obvious statement, but it is not true of much charitable work.

For example, we tend to emphasise the sacrifices of the donor over the benefits of the recipients. Consider old tales of people like Francis of Assisi. Their claim to virtue (and sainthood) comes from giving away all their possessions, but the question of how much good this did for the beggars doesn’t come up. This attitude continues in the many modern charity evaluators that focus on metrics like the percentage of money spent on overhead costs. Paying big salaries to recruit the best management and administration may genuinely be a cost-effective way of increasing the total good done, but it conflicts with our stereotype of self-sacrificing do-gooders. Of course, there is virtue in selfless sacrifice, but we should remember that the goal of charity is to make recipients better off, not to rank donors.

As with many things humans do, acts of charity often aren't based on rational calculation. Some consider this a good thing: altruistic acts should come from hearts, not spreadsheets. This is wrong – if you care about impartial welfarist good.

It is a fact about our world that good charity is hard, and that charities have vast differences in cost-effectiveness. When one charity results in ten or a hundred times more healthy years of life per dollar spent than another, boring details of statistical effectiveness become important moral facts. (This is true not just of charities, but of most kinds of projects that might impact many people – government policy, activism, and so on.)

When the difference in effectiveness between interventions is often greater than the difference between an average intervention and doing nothing at all, and when these differences are often measured in lives, effectiveness considerations are critical in any attempt to do good.
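The arithmetic behind this point is simple, and worth making concrete. A minimal sketch (the charity names and cost figures below are purely illustrative, not real evaluations – actual figures come from evaluators like GiveWell):

```python
# Hypothetical cost to produce one healthy year of life, in dollars.
# These numbers are made up to illustrate a 100x effectiveness gap.
charities = {
    "charity_a": 50,      # illustrative: $50 per healthy year of life
    "charity_b": 5_000,   # illustrative: $5,000 per healthy year of life
}

budget = 10_000  # dollars available to donate

for name, cost_per_healthy_year in charities.items():
    healthy_years = budget / cost_per_healthy_year
    print(f"{name}: {healthy_years:.0f} healthy years of life")
```

With these (assumed) figures, the same $10,000 donation buys 200 healthy years of life through one charity and 2 through the other – the kind of gap that makes effectiveness a moral issue rather than an accounting detail.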

There is a role for simple, comforting altruism, but this role isn’t making big decisions over how to benefit others. These decisions deserve more than goodwill. They deserve to be made right.


Opportunity

Debates over charitable giving often centre on questions of moral duty and obligation (a good example is Famine, Affluence, and Morality, Peter Singer’s classic paper that laid some of the foundations of what later became EA).

Another framing is to think of it as an opportunity. To someone who cares about impartial welfarist good, altruistic acts are not a burden but an opportunity to achieve valuable things. In particular, there are many reasons to think that we (as in developed-world humans of the early 21st century) have an exceptionally large opportunity to do good.

First, our values are better than those of people in preceding eras. This statement implies many philosophically contentious points, which for the time being I will not defend, instead appealing to what I hope is a common-sense conviction: human morality is not so relative that it is impossible to differentiate modern secular humanist values from values that supported war, slavery, and boundaries on personhood that excluded most people.

(Of course, this statement also suggests that our current moral views are far from perfect too. This is important, very likely true, and will be discussed at length in future posts. The fact that this is increasingly recognised is hopefully a hint that we are at least on the right track.)

Second, we have more resources than people in previous eras. There is also large variation in global income, meaning that if you happen to live in a rich country, you can help many others for cheap. A 2-adult, 1-child UK household with a total income of £30,000 is in the top 10% of the world income distribution and 7 times richer than the median global household.

Third, knowledge of what is effective has increased, and technology makes it easier to apply this knowledge. Today GiveWell’s thorough charity research can multiply the impact of giving. Twenty years ago, there was no GiveWell. Two hundred years ago, donation guidance, if it existed, might have consisted of the church telling you to donate to them so they could convert people and push their social values.

Fourth, we may have an unprecedented ability to affect where civilisation is headed (for thoughts on this topic, see for example this link). The pace of technological advancement increases the variance of possible future outcomes: in the next few decades we might nuke each other or engineer a pandemic – or we might set ourselves on a trajectory towards becoming a sustainable civilisation with billions of happy inhabitants that lasts until the stars burn down. Past eras didn’t have similar power, and if the future goes well humanity will no longer be as vulnerable to catastrophe as we are today, so people living roughly today might have exceptional leverage.


Common EA cause areas

The cause areas most frequently seen as important, and most specific to EA relative to what other charities focus on, are:

  • Global poverty, because the developing world is big, poor, and has many tractable problems with well-researched solutions.
  • Animal welfare, because it is largely ignored, and potentially huge in scope (depending on how much animal lives are valued).
  • Existential risk: focusing on avoiding human extinction or other irrevocable civilisational collapses, because new technologies (AI and biotech in particular) make them scarily plausible. (Sometimes this is motivated even more strongly by long-termism: specifically caring about the overwhelming number of happy future lives that may come to exist over the long-term future if we don't mess things up).

These are far from the only cause areas discussed in EA. Many EA-affiliated people argue either against some of the above, for the overwhelming importance of one relative to the others, or for entirely different causes.


Effective altruism in practice

In practice, EA can seem weird and theoretical.

The main reason for EA weirdness is that it casts a wide net. Everyone agrees that international peacekeeping is an important project, and also a serious one: it doesn’t get much more serious than world leaders intervening to get men with big guns to have big talks about their big disputes. On the other hand, the colonisation of space is important, but seems to have very little gravitas indeed; it’s something out of a science fiction novel. However, just as it’s a brute fact about the world that there are lots of violent people with big guns, it’s also a brute fact that space is big; both of these facts should be taken seriously when considering the long-run future. There might be a clear line between sci-fi and current affairs in a bookshop, but reality doesn't care about genre.

More generally, it’s important to keep in mind that every moral advance started out as a weird idea (for example, it was once considered crazy to suggest that women should get to vote).

Parts of EA are very theoretical. This, too, is by design. Future posts will show many cases where the way we resolve a very abstract issue has a big impact on what the right practical action is – and in many of these cases it is unclear what the right resolution is. Finding out clearly matters.

If EA seems too theoretical or mathematical to you, consider two points. First, whatever the field, doing complex things in the real world tends to involve (or be built on) theoretical heavy lifting. Second, most charity efforts don’t pay much attention to theoretical issues; EA is at the very least a helpful counterweight, and likely to uncover missed opportunities.

Whenever the goal is to do good, it is easy to be overwhelmed by feelings of righteousness and forget theoretical scruples. Unfortunately we don’t live in the simple world where what feels right is the same as what is right.

The core of effective altruism is not any particular moral theory or cause area, but a conviction that doing good is both important and difficult, and hence worthy of thought.


This post series:

  1. Rigour and opportunity in charity: this post.
  2. Expected value and risk neutrality: a rational agent maximises the expected value of what it cares about. Expected value reasoning is not free of problems, but, applied carefully outside extreme thought experiments, it clears most of them, including "Pascal's mugging" (high-stakes, low-probability situations). Expected value reasoning implies risk neutrality. The most effective charity may often be a risky one, and gains from giving may be dominated by a few risky bets.
  3. Uncertainty: we are uncertain about both what is right and what is true (being mindful of the difference is often important). Moral uncertainty raises the question of how we should act when we have credence in more than one moral theory. Uncertainty about truth has many sources, including ones broader than uncertainty about specific facts, such as our biases or the difficulty of confirming some facts. These uncertainties suggest we are unaware of huge problems and opportunities.
  4. Utilitarianism: while not a necessary part of EA thinking, utilitarianism is the most successful description of the core of human ethics so far. In principle (if not practice, due to the complexity of defining utility), it is capable of deciding every moral question, an important property for a moral system. Our moral progress over the past few centuries can be summarised as a transition to more utilitarian morality.


(More coming)
