Effective Altruism Thinks You're Hitler
The real reason everyone hates people who give away their kidneys
Scott Alexander recently defended Effective Altruism from its attackers, who these days are coming from all directions. I agree with his article, and would emphasize the true moral atrocity of factory farming, which EAs are correct to focus on as everyone else ignores it.
What his piece lacks, though, is a good psychological theory of why everyone hates EAs, and I think Scott’s defense of the movement actually demonstrates much of the problem. As he points out, hostility to it can’t simply be a rational reaction to the actions of SBF, since people of all ideologies often commit fraud, and it is unclear why his crimes in particular should discredit an entire movement. Moreover, EAs have done a lot of things that even their critics would have to acknowledge are good. So what’s going on here?
I think EAs miss just how much their ideology offends people by its very nature. The surface-level criticisms are of course mostly ridiculous and not even consistent with one another; EAs are, for example, attacked for being both “woke” and “white supremacist,” which is an indication that each side is basically just throwing out the worst epithet they have at the group.
The main problem I think people have with EA is that it is a mirror. It tells you exactly what you are doing wrong, and why. This wouldn’t inspire so much anger unless people suspected that the critiques EAs make of standard ways of looking at the world are correct. Everyone accepts utilitarianism to some degree. In that sense, EA has the misfortune of being adjacent to all ideologies, while by its very nature calling them morally and intellectually deficient. There may be no way out of this dilemma, short of abolishing EA as a movement, which I don’t think would be a very good idea.
Effective Altruism as a Mirror
A few months ago, Scott Alexander wrote about donating his kidney. My reaction was that this isn’t for me, but still a very nice thing to do. No, I will never do anything similar, because I am selfish and don’t care about others and think that the people who need my kidney probably made bad lifestyle decisions anyway so whatever. Later, when he posted highlights of the comments to the piece, I was taken aback by how many people were actually angry about what he did, and how they came up with all kinds of nonsensical reasons for this that Scott easily picks apart.
I think this exchange is a microcosm of what’s going on when people say they hate Effective Altruists. Scott’s position is that look, this is a nice thing to do, you don’t have to do it yourself, but here’s my assessment of the risks and the story of my own experience. For most people, it’s very psychologically difficult to go “well, you’re pretty much right, but I’m not going to take off work for a week to save a stranger, even if they slowly die as a result.” That’s certainly my reaction, but there are rumors out there that I am psychologically abnormal. So when people hear about others donating their kidneys to strangers, they need to come up with ways to make themselves feel better. “You see, there’s something called the bodily integrity norm…”
I’ve written about why I eat meat, despite being convinced that the arguments that vegetarians make are basically irrefutable. Based on what EAs have written, I have replaced much of the shrimp and chicken in my diet with beef and pork, which they say gets rid of most of the harm, but I’m not going any further than that. Most people though either eat meat and dismiss the concerns of vegetarians, saying man has dominion over beasts or some such nonsense, or they accept them and try to limit their meat consumption.
If I’m going to be completely honest, I really don’t care that much about being a “good” person in the sense of not doing harm to other sentient beings. Sure, it’s one thing I value, but I balance it against things like my pride, ego, and aesthetic preferences. Murderers have always angered me less than people who don’t want to execute them, with the worst being family members who call for those who killed their loved ones to get a second chance. To be evil is bad, but weakness is what is truly unforgivable. Even my embrace of EA principles is at its heart an expression of my egoism. I am too masculine, smart, and brave to not see the world clearly and take arguments to their logical conclusions. I still think I’m good in a relative sense, since most people can’t even be bothered to eat less chicken, and are more likely to be entertained by the ad campaign of the illiterate cow begging for its life than horrified by it.
Another consideration is that I care about how good I am relative to other people. By eating more beef and less chicken, and writing articles about how the animal rights movement is completely correct, I think I am at least at the 90th percentile of “goodness.” That’s still pretty bad relative to vegans or how good I could potentially be, but I’m fine with that. I aspire to be closer to the 99th percentile of honesty, bravery, and social and political insight, and if I fall short on those qualities I’m more likely to work hard at being better. Why I have this particular combination of values, if that question is even answerable, is a topic for another time.
EAs do stuff like calculate the number of shrimp the average American tortures a year or what charity they should give to in order to save the most quality-adjusted years of life. You can’t announce that this is what you are doing without providing a challenge to others regarding how they approach moral questions themselves. In his defense of EAs, Scott writes,
Still not impressed? Recently, in the US alone, effective altruists have:
*ended all gun violence, including mass shootings and police shootings
*cured AIDS and melanoma
*prevented a 9-11 scale terrorist attack
Okay. Fine. EA hasn’t, technically, done any of these things.
But it has saved the same number of lives that doing all those things would have.
About 20,000 Americans die yearly of gun violence, 8,000 of melanoma, 13,000 from AIDS, and 3,000 people in 9/11. So doing all of these things would save 44,000 lives per year. That matches the ~50,000 lives that effective altruist charities save yearly.
People aren’t acting like EA has done all those things. Probably this is because those are exciting popular causes in the news, and saving people in developing countries isn’t. Most people care so little about saving lives in developing countries that effective altruists can save 200,000 of them and people will just not notice. “Oh, all your movement ever does is cause corporate boardroom drama, and maybe other things I’m forgetting right now.”
I don’t think anyone can read this without coming to the conclusion that Scott thinks people are really dumb for worrying so much about school shootings and terrorist attacks relative to other things, in addition to being immoral for valuing the lives of their fellow citizens so much more than those of foreigners. I think he’s right on these points. But just because you’re not explicitly telling people they are stupid doesn’t mean they can’t tell that is what you’re doing.
In the 2004 presidential election, terrorism was by some measures the top political issue in the country. Fox News still runs regular segments on terrorists flooding across the Southern border, despite the amount of terrorism in this country being pretty much zero. Liberals get more emotional about police shootings of criminals than they do about all the poverty in Africa. EA comes along, points to each faction in our politics, and says “stupid, stupid, stupid.”
It would be one thing to just argue that Americans overrate the threat of terrorism. Writers do that all the time, and it doesn’t anger most people that much. It’s another to build an entire movement based on the idea that everyone else is immoral and dumb.
Adjacent to All, Friend to None
Freddie deBoer argues that effective altruism boils down to the concept of “try to do good,” which no one would disagree with. This is true, but the implicit heart of Effective Altruism is the claim that people outside the movement are actually very bad at it. In this way, it’s got something in common with conservatism, which assumes that your average upper middle class do-gooder isn’t helping the world all that much, and in fact is if anything creating a lot of harm much of the time. Conservatism openly announces itself as the enemy of the journalistic and activist classes on that basis. Effective altruism likewise assumes that members of these classes are dumb and hypocritical, but pretends to come as a friend.
Imagine you are at a conference on ending global poverty. Two individuals shake hands, and realize that they both identify as “effective altruists.” What do they now know about one another? What distinguishes them from all the non-EAs at the conference? They would of course say they want anti-poverty programs to be effective. But everyone else says that too. At the same time, the EA label is not meaningless, as it helps you predict certain things about these two individuals. For example, they’re more likely to think of tradeoffs between different kinds of charity efforts and actually crunch the numbers behind what they’re doing. To identify as an EA is to basically say that I’m a do-gooder, but not of the stupid kind.
Often, ideologues will spend more time arguing with those closer to them than people who have a completely alien worldview. As different as conservatives and liberals or capitalists and socialists might be, they all believe that their preferred policies lead to more happiness and less suffering. Although I don’t really think Moral Foundations Theory is very useful, if you do you might note that everyone across the political spectrum is high on the care/harm foundation. That doesn’t mean that people don’t have other values, like liberty, equality, or living in harmony with the natural order, that they’re willing to trade off against utilitarian considerations. Only that believers in practically every ideology make some utilitarian arguments, and get defensive if you point out that there might be unintended harms of what they advocate for.
Everyone therefore feels close enough to EA to be shamed by it. Conservatives may appreciate its skepticism towards socialism and woke, but recoil at its secularism and opposition to nationalism and speciesism. Liberals love the idea of critiquing irrational prejudice and religious doctrine, but don’t like to be reminded that by their own standards of trying to make the world a better place they often fall quite short.
People often say that EA has a cold or autistic feel to it, but the irony of this is that modern Westerners with their WEIRD morality seem that way to much of the rest of the world. There’s an old joke about driving: everyone going slower than you is an idiot jamming up traffic, while everyone going faster is a maniac. Similarly, people generally find those who use more consistent logic than they do to be inhumane weirdos, while seeing those who rely more on intuition to be uncivilized savages. Liberals look down on Western conservatives, who in turn are shocked by the barbarism of the Muslim world. In the opposite direction, the Muslim scholar Sayyid Qutb was scandalized by the licentiousness of 1950s America, and our own right-wingers obsess over how leftists are perverted and indifferent to “normal” human sentiments like patriotism. EAs sit at the extreme end of the logic-emotion spectrum, and are naturally targeted for that reason. They’re the WEIRDest of the WEIRD, and a reminder to the liberal establishment that in many ways they’re not all that different from the more conservative masses that they detest.
Why Hitler Was Bad
If you ask people why Hitler was bad, most of them will say something like he killed a lot of people and created a great deal of suffering in the world. But if those are our criteria for what makes a good or bad person, then it is hard to square with our intuitions about who is or isn’t blameworthy.
In the comments of my article on eating meat, Željka Buturović writes,
If factory farming is really the worst crime in world history, then every single person, except for a handful of vegans, is (many times?), literally and metaphorically, worse than hitler, ted bundy, idi amin, nero etc. and when philosophical exercise leads you to such a conclusion, it's much more likely that are you operating beyond the limit of usefulness of your theoretical framework (e.g. utilitarianism), than that you are grasping big truths.
One response to this is that few if any individuals have as much responsibility for the factory farm system as Hitler did for the Holocaust or the Second World War. So if factory farming is as bad as the Holocaust, and there are, say, 6 billion meat eaters in the world, you might be responsible for 1/6 billionth of the Holocaust, which isn’t that big of a deal.
But there’s a larger point in the sense that modern societies are collectively as responsible for terrible atrocities as Nazi Germany was. And if you want to single out the CEO of Tyson Foods, one might say that we have something close to a Hitler walking among us, and what kind of monsters are we to build a society where such an individual becomes a millionaire rather than getting sent to prison for life? One might also throw in the low-wage worker who decides to take a job at a factory farm instead of, say, picking fruit for $2 an hour less. Maybe he’s not Hitler like the CEO of Tyson Foods, but we might call him a mini-Saddam or a Castro. To be honest, it would have been more accurate if I called this article “Effective Altruism thinks you’re all collectively Hitler,” but that would not have been as catchy (again, honesty is one of my greatest values! But I also care about popularity and can justify exaggerating in a headline if it will get me more clicks, as long as I explain to you what I’m doing).
I don’t think our problem with Hitler is simply that he was responsible for a lot of suffering, but rather that he directly caused suffering in ways that are unacceptable given the moral values and norms of our society.
I think an example can demonstrate this point. Let’s say that you found out that your neighbor keeps pigs confined in tiny crates in his basement, never lets them see sunlight, cuts off their tails without anesthetics, and basically leaves them living under torturous conditions until he finally eats them. We would all agree that this is a bad person we’d want to avoid.
It’s obvious what the next step in this argument is, since almost everyone who eats pork does something similar through their purchasing decisions.
If you insist on being speciesist and saying humans matter and animals don’t, I can make the same point by asking about someone who personally tortures household pets and contrasting them to an individual who simply eats meat. If you’re like most people, your view of your neighbor is more shaped by how he treats his dog than how many animals he harms through what he buys at the supermarket.
Now maybe this is a “limit” of utilitarianism, whatever that means. But if we want to say Hitler is bad, and so is the guy who kills cats for fun, but the guy who slits the throats of chickens at the factory farm for a slightly higher wage isn’t, we need some standard other than the amount of pain and suffering an individual causes.
Now, while our intuitions make little ethical sense, they do have a perfectly reasonable evolutionary explanation. A person who commits atrocities in the ways that are acceptable for his society is psychologically normal. You would probably not be worried if the CEO of Tyson Foods moved next door to you, while you’d be creeped out if someone who killed animals for the thrill of it did, even if the former is responsible for a lot more harm. And this would be rational in terms of self-interest. You wouldn’t want the animal killer to marry your daughter, but probably wouldn’t mind if the son of the CEO of Tyson Foods did. This is the same reason that contemporary Nazis freak us out a lot more than Marxists.
One might say that the key difference here is whether you kill someone directly or from a distance, and therefore the animal torturer is worse because he sends a signal to us about what he is capable of. But I don’t think that’s it. While in power, Hitler didn’t kill people with his bare hands. And although the factory farm worker might personally slaughter chickens, I think most of us don’t feel like he’s that much of a worse person than someone who works at a call center.
The point of all this is that we claim to judge people to a large extent based on how much good they do in the world, because that’s what sounds good. In reality, we care more about how much they conform to social norms (with the caveat that breaking norms can itself be attractive, see peacocking and all that).
Again, effective altruism doesn’t just point out our shortcomings. It builds a whole ideology out of it. After telling you that you are a hypocrite, and completely deluded about your own moral intuitions, it then throws in a caveat: “Now, I’m not saying that you’re a bad person, you’re free to do what you want, now let me just tell you about how I saved a life for basically no cost, which is something that makes complete sense to do if you care about your fellow humans, which you claim to do. But again, you do you.”
What Should Effective Altruism Do?
It’s true that factory farms are really bad, and Effective Altruists are the only ones who care even a little bit about this, while you are debating pronouns and how to increase the living standards of people who are already so wealthy they are eating themselves into an early grave. The movement is necessary and mostly correct, and I’m psychologically suited to take what I think is valuable from it while casting aside the rest.
But most people seem unable to do that. There may simply be no way out of the underlying dilemma. Doing good and publicizing that fact will by necessity remind people that they’re not doing much good themselves, and that they completely lack self-awareness about their own motivations. This will inspire anger, and attempts to put together philosophical justifications as to why torturing animals because you like how they taste is actually fine, but mutilating cats in a back alley is still very, very bad. EAs may be able to do nothing but press forward.
Another potential strategy for EA to resolve this tension is by simply owning the anti-egalitarian implications of its philosophy. Communism changed the world by appealing to intellectual elites who self-consciously saw themselves as a vanguard moving society forward. One might have an esoteric and exoteric version of EA, which to a large extent exists already. People in the movement are much more eager to talk to the media about their views on bringing clean water to African villages than embryo selection.
Both approaches – soft-pedaling the dismal view EA takes towards the moral and intellectual capabilities of most humans or owning it – come with potential risks and rewards. And I’m not even sure if being aware of this dilemma is a good thing. I suspect Scott made the best possible defense of EA when he pointed out that it saves more humans than preventing 9/11, etc., and the one that is likely to appeal to the most people. But I still think it rubs many the wrong way, and will make others hate the movement even more. The guy can’t even give his own kidney away without people in the comments yelling at him for it.
Perhaps the real lesson here is that people should adopt effective altruist causes without being part of a self-conscious movement. Animal rights and stopping malaria don’t anger people and, as deBoer points out, one can argue for these things by appealing to widely shared principles and beliefs. You can always criticize charities for doing a poor job of serving their stated missions. People do that all the time without needing to call themselves Effective Altruists.
When you put all these ideas – animal welfare, giving away kidneys to strangers, applying cost-benefit analysis to charity, etc. – together under the umbrella of “things you should be worrying about but you’re too stupid to,” it naturally creates a backlash. Perhaps there should be effective altruist philosophers, but not an effective altruist movement.
People can read ACX, Peter Singer, and Derek Parfit by night, and wake up the next morning and go back to being conservatives and liberals. One might in that way shave off the most irrational aspects of the major ideologies that actually motivate people. You won’t get conservatives to stop caring about Muslim terrorism, but may perhaps temper their natural tendency to overreact to the threat by pointing out how statistically small a problem it is compared to normal street crime. One might convince a moderate too selfish to be a vegan to replace chicken with beef. Liberals will likely always care about inequality, but it shouldn’t be impossible to turn their focus away from protectionist measures trying to help poor people at home to supporting free trade agreements that improve the lives of those that are even worse off in other countries.
At the same time, there is some signaling value in the Effective Altruism label, and this may make it worth holding on to. The movement is what Scott calls a “social technology.” How else are the non-stupid people supposed to find one another? If I care about animal welfare, and I’m actually numerically literate enough to know that all my focus should be on stopping factory farming, the EA label tells me which meeting to attend. It beats just showing up to a PETA protest, waving your arms in the air, and asking if there are any other rational people around in this sea of idiots.
The fact that EA takes utilitarianism so seriously is both its strength and its weakness. It can potentially influence those of every other ideology at the margins. At the same time, the movement also poses a challenge to nearly everyone who comes into contact with it, and if they’re not converted they easily become enemies. In the interest of effectiveness, EAs might want to consider how to confront this reality.