I disagree that it's immoral to value the lives of one's fellow citizens so much more than those of foreigners. If I help a homeless person in my home town get a job, I have benefitted myself by making my own environment safer and more productive in a way that is not true if I do the same for someone halfway around the world. So it seems to me clearly rational to value my neighbors and fellow citizens more than foreigners. You might then say that even if it is rational, it is immoral, because one should value all lives equally. But that strikes me as circular, since the very question is whether it is immoral not to value all lives equally.

I am not making an abstract point. To me, it seems clearly true that I should love my wife more than my neighbor, my friends more than strangers, my community more than my country, and my fellow citizens more than foreigners. Is there anything inconsistent or illogical about that belief?

An interesting pattern among EA types is that they never seriously consider all the various local maxima of the utilitarian social welfare function, or how on the margin you make trade-offs about which of them you can reach, but only really talk about maxima and marginal effects that appeal to certain broadly "Liberal" inclinations.

From a utilitarian standpoint this is of course sort of bizarre. But once you realise that a large part of the motivation to become a "utilitarian" is the core moral intuition that donating X amount to the AMF to save Y number of foreign children is more moral than spending it on your own children or family, it's no wonder that "utilitarians" have ended up so liberal, and that various utilitarian thought leaders are so coy about stuff like embryo selection.

Another practical point they seem to miss is that I have a lot more information about the efficacy of local giving. It is notorious that many charities are inefficient at best, or grifts at worst. My wife and I are both active locally, and between us I have a pretty good idea of how effectively my donations will be spent, and I can target charities that I know are well run. Re foreign charities, the EA types want me to spend time getting information directly - why should I do that just so I can give effectively to foreign charities, when I already have all the information I need to give locally? Or they want me to trust them that e.g. mosquito nets in Africa are good - but why should I believe them? That they call themselves altruists is not enough.

Because “they” have done extensive research and published it.

EAs don't believe that we should not, in our own lives, value those closest to us more than others. I admit it might be seen as a philosophical truth that all lives (to the extent they are capable of valuing their experiences) are equal in importance from behind the veil of ignorance. However, EAs also realize that practically this does not work for people who want to do good.

EA as a culture is very overt that people should take care of their loved ones; it's well understood that people who have to care for sick relatives etc. can't donate much. In fact, EAs (or aspiring EAs, or departing EAs) in that predicament write their thoughts in private support Facebook groups and Slacks all the time, and everyone is very supportive. And 80k advisors explicitly recommend that you not donate more than you can afford, that you save for retirement, and that you keep hobbies and connections to your community. Seriously, all the high-up EAs I know live a well-rounded life. Most even want to have kids (despite this taking a solid cut from potential future donations).

EAs don't say that donating elsewhere absolves you of responsibility to those close to you; rather, the idea is that *if you are the type of person who believes in moral responsibility* (which it sounds like you might be), then you should consider that trouble close to home does not necessarily absolve you of the responsibility to help at least some others who are not close to you, depending on your means. Alternatively, if Helping Others (TM) is something you value highly, you can probably accomplish it better by looking at opportunities that aren't close to you, or aren't intuitive - just an FYI.

Additionally, most EAs are pro-taxation and pro paying a government to create social safety nets for their neighbors. Yes, they would be happy to do this out of their own nonprofit paychecks, with the understanding that this means nonprofit budgets would need to be higher than in a world without a strong government. Despite the badmouthing you read online, very few EAs are libertarians. In EA surveys, about 85% are on the left while only 15% or so are on the right/conservative. I'm not even confident most of the rationalists in the Bay, let alone the EAs in the Bay, are libertarian. And most EAs live outside the Bay (NY, DC, Boston, Seattle, London, Oxford, Berlin, the Netherlands, Australia, India) in places that take social welfare really seriously, culturally.

Most EAs, if asked, would prefer a tax increase on the wealthy over marginal extra donations to AMF, for example (Bill Gates wants this too). But they think that isn't the most tractable and neglected thing for them to work on. I think you'd agree.

FWIW, EAs also don't believe that everyone needs to dedicate their lives to effective altruism, say by eradicating malaria or some other neglected problem. I think most EAs think that if 10% of the population were working and thinking in this way, and almost everybody else had it in mind at least whenever they *want* to be intentional about "doing good", that would be sufficient to accomplish major change and solve most of today's most egregious problems. There should still be people enjoying and living life and supporting the economy in standard ways. It is morally inconsistent and hypocritical to care about human potential and happiness, and then pass the buck by saying "no regular happy life for me or thee". EAs aren't as dumb as people think ;)

> Most EAs, if asked, would prefer a tax increase on the wealthy over marginal extra donations to AMF, for example

How is this compatible with the "effective" part of EA, though? Government taxes are notoriously both diffuse and highly inefficient ways to get what you want. Like, that tax increase might just go straight into weapons manufacture, or subsidizing loss-making operas or increasing the wages of civil servants or one of a million things that isn't malaria nets for Africa.

I'd have thought actually that being libertarian is basically a requirement to call yourself an EA, as government spending is usually "local", so if you think the most effectively altruistic thing you can do with your money is spend it on poor third world countries, you should be arguing for cutting taxes as much as possible and keeping government out of things.

Hearing you say that it's the opposite moves me closer to the position espoused above, where it's claimed that EA is just liberals pretending to be something else. That makes sense, because insisting that everyone should care about the problems of far away third world countries more than what's going on in their neighbourhood is a very lefty thing to do.

Yeah, I probably spoke too strongly and confusingly there; I realized that as I was writing it but just went with it. Sorry. I should have added "with at least some data-backed or well-reasoned spending by the govt, not a govt that is just spinning its wheels". Just one more thing that makes that a pipe dream.

Firstly, I meant to refer specifically to replacing the majority of EA funding efforts. I don't think EA would shut its doors, but it would evolve into more of a government-focused think tank branch. And yeah, maybe this wouldn't be good for more experimental charity startup funding.

The thing is that if we could get all billionaires to give their counterfactual tax dollars to effective causes, or even 25% of those dollars, then sure, taxation would not be effective in that world. But after over a decade of trying, EA has only convinced one more billionaire who wasn't already doing this, and he is still funding EA stuff (so not counting Gates or SBF). So I'm not confident we will ever convince the majority of the ultrawealthy to give effectively. There are efforts like the Generation Pledge and Founders Pledge to try to change the culture of giving among the ultrawealthy. And Gates and Buffett have their initiatives among their peers. But extra tax dollars would be thousands of times more dollars than this, and it's hard to imagine that quantity of dollars would not be cost-competitive in many frameworks.

Tax dollars are often spent poorly and are rarely spent in ways that EAs would see as *highly* effective. But sometimes they are spent okay!

The US international aid budget is much bigger than EA's funding. And in addition to global health, local and federal governments in the US and EU can and have spent toward non-global-health EA cause areas, like green energy, biosecurity and vaccines, stimulus for clean protein, stimulus for farmers to transition to cage-free egg production methods, and recently AI safety. One can imagine how much more govts might spend on such things with greater budgets. Some countries may have reached maximally-useful progressive tax rates, but probably many have not. And in that world with higher rates for the ultra-wealthy, those who wanted to give to effective charities could still do so. Compared to EAs tirelessly fundraising for relative peanuts, this is appealing.

I think that is how Gates feels too: if he and others like him were heavily taxed, he would probably find it relatively easy (per dollar that gets spent) to lobby for part of the suddenly-much-larger international aid budget to go to malaria nets and malaria vaccines. And you get the benefit of raising the waterline for your own people too, of course.

Obviously libertarians will feel differently, and probably 10 percent of EAs are libertarian (many progressive EAs agree about govt regulation but not taxation, govt services, or social safety nets). But even many non-libertarian conservative EAs *might* agree that, say, where tax dollars are used for things proven to provide economic stimulus, or to increase national security against AI and bioterrorism threats, increased govt spending might yield more good for both Western and non-Western countries (the complete moral circle) than EAs just fundraising and "proselytizing" for more effective charity would, as we have tried to do, **knowing that hardly anyone is listening to EAs and EA funding remains so very tiny despite our best efforts**. And they might even think it'd be tractable to improve govt spending if that time came.

My only point was that the morality of care doesn't exclude local communities - that in most EAs' eyes it is fine or good that taxation happens, along with the other things that keep their own communities spinning. Most EAs would definitely not choose to abolish social services out of some moral purity about sending as much as possible to neglected causes. That would be heartless and strange, and it would mess with the success of the West, which EAs do think is important. I hope it is still evident how EAs are different from general progressives because of counterfactual reasoning, data focus, and actually doing the things.

Is Gates really effective, I wonder? I know he's super into malaria stuff, but I talked a couple of years ago to a malaria researcher who painted a very dark picture of the effect Gates Foundation funding has had on the malaria world.

The gist of it was that Gates only really cares about eradication efforts, not mitigation/control. A lot of malaria researchers don't think eradication is actually achievable, but they hide their own views and even lie about what they think to get access to Gates funding. This leads to a lot of risky and ethically dubious grants - for example, initiatives to blanket whole areas with anti-malarial drugs. If it works, you clear the parasite from that area and malaria disappears for a while (until it comes back in from surrounding areas). If it fails, well, you've just created drug-resistant malaria, and now your best weapon in hospitals for genuinely sick people is gone. But because Gates only wants eradication, that's the sort of thing that happens anyway.

Also, apparently a lot of the research there is fraudulent, for the usual reasons.

So this is the sort of problem with the "effective" part of EA. Especially for things that happen far away, at the end of a long chain of people, it can become hard to know if you're really improving things or actually making them worse.

It's not inconsistent in that it's a way you can act, but the world becomes worse when everyone decides to aggressively favor their tribe above everyone else.

That's why people don't like EA types - I say "I love my wife more than a random stranger" and they say "You are aggressively favoring your tribe." That sanctimony is what turns people off.

Who does that? Certainly not Peter Singer, who has been asked many times whether he favors his children and grandchildren, always acknowledges that he does, and admits that he doesn't have a great ethical justification for this.

And that's exactly the problem. Any gathering where people are seriously discussing whether they have a "great ethical justification" to care more about their own children than strangers isn't a movement, it's an insane asylum. You might as well be discussing whether there's an ethical justification for walking on your feet instead of your hands.

And I'd say that anyone who puts a naturalness constraint on moral conclusions doesn't understand what sort of enterprise moral philosophy is, like, at all. I'm not sure how we could have arrived at the wrongness of, e.g. spousal rape, with such a constraint.

My point is not that you're wrong for having an affinity for friends and family. I prioritize my friends and family over other people too (partly because they're not going to be prioritized by anyone else, partly because it's just a normal human value). But obviously the world would be a lot better if people cared about strangers as much as they cared about their own kin, so it makes sense as a sort of moral aspiration.

Yes, but I think the EA would argue that the benefit of the job to the homeless person (and the added gains to you) is still outweighed by the much larger benefit that the same amount of effort would generate for someone(s) in poverty on the other side of the world. Consider for example two levers. Both take the same amount of effort/money to pull. Pulling one lever gets your homeless person a job. Pulling the other lever saves three lives from death, but you don't know these people and never will.
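
To put rough numbers on those two levers, a minimal sketch in Python - the per-life figure is a commonly cited ballpark for top malaria charities, and the local job-program cost is a figure assumed purely for illustration:

```python
# Hedged sketch of the "two levers" comparison. The $5,000-per-life
# number is a commonly cited ballpark for top malaria charities; the
# local job-program cost is an assumption made up for this example.

budget = 15_000  # the same "effort/money" sits behind both levers

cost_per_life_saved_abroad = 5_000    # rough, commonly cited ballpark
cost_per_local_job_secured = 15_000   # assumed for illustration

print("lives saved abroad:", budget // cost_per_life_saved_abroad)  # -> 3
print("local jobs secured:", budget // cost_per_local_job_secured)  # -> 1
```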

They then might ask you if you *had* gone and helped any homeless people get jobs lately - the point being that they've made it very easy to do a lot of good. Your complaint in the later comment about not believing them sounds a bit crankish too; they document their reasoning and rankings very well.

There's certainly nothing inconsistent about our behaving largely as you describe, nor illogical in most senses. I guess I'd just say it sounds like a part of decision making that is not moral philosophy. Moral theories (whether consequentialist, deontological, or virtue-theoretic) aim to map the gap between what comes completely naturally to us and a less natural but more justified basis for our choices.

I think Richard's error here is to conflate the idea that everyone is to some degree a utilitarian with the idea that everyone engages in and approves of atheistic moral philosophizing.

Most people rely on some combination of moral intuitions (i.e. going with their gut, some combination of inborn disposition and societal conditioning) and religious/spiritual thinking (not necessarily anything resembling an orthodox religion -- can easily be something as vague as "karma" in the pop New Age sense). "Everyone is a utilitarian" is really describing a moral intuition: if we can help two people instead of one with an equivalent effort, then sure, let's help two. This is distinct from the atheistic moral philosophy of utilitarianism. Which, ironically, is without utility for most people.

Only a small slice of the population places any weight on atheistic moral philosophy: the idea that, although there is no ultimate meaning to anything, it's still a worthwhile exercise to start with your moral intuitions but then construct elaborate mental frameworks around what is good and bad -- even when this takes you to morally unintuitive places. For what reason? None, really -- just a semi-autistic, systematizing personality that has rejected religion and spirituality while at the same time not being content to rely entirely on intuition.

Naturally, most people really dislike this exercise because most people dislike weird, autistic things. They also dislike when people promote things that strongly contradict their moral intuitions, like having sex with dogs.

If you have the atheistic moral philosophizing personality type, then what you're doing seems clearly better than what the normies are doing: you're developing internally-consistent systems! More internally consistent = more true! But this isn't necessarily true when we're talking about systems that are ultimately meaningless. Since the only reason non-religious people care about morality is because they don't want to contradict their intuitions (i.e. their conscience) and feel bad, and since systematizing requires approving of morally unintuitive things, then clearly systematizing is worse for them. Plus it requires a bunch of extra work.

Agreed. If I remember correctly, the big three systems in academic philosophy are utilitarianism, deontology, and virtue ethics, and all run into problems fast. We all pretty much subscribe to an incoherent mix of the three, because it is impossible to do otherwise.

"We all pretty much subscribe to an incoherent mix of the three"

I suppose I'd go further and say that people hold plenty of heuristics that don't really belong in any of these categories. Though sometimes people might use arguments from those systems to rationalize, and to make what they really believe sound better.

For example, utilitarians have tried to compute the value of animal life based on neuron counts, with no special preference given to humans. This of course can lead to some counter-intuitive conclusions no one wants to accept (e.g. that there is some number of maggots whose value equals the life of a child).
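
A minimal sketch of what that neuron-count calculation looks like in practice - the human figure is the commonly cited ~86 billion estimate, the maggot figure is a made-up round number for illustration, and the linear weighting is exactly the contested assumption:

```python
# Toy version of the neuron-count heuristic. Numbers are illustrative:
# ~86 billion is the commonly cited human estimate; the maggot count is
# an assumed round figure, not a published one.

NEURONS = {
    "human": 86_000_000_000,
    "maggot": 10_000,  # assumed for illustration
}

def moral_weight(species: str) -> float:
    """Linear in neuron count, no species premium -- the contested part."""
    return float(NEURONS[species])

# The "exchange rate" falls straight out as a ratio:
maggots_per_child = moral_weight("human") / moral_weight("maggot")
print(f"{maggots_per_child:,.0f} maggots 'equal' one child")  # 8,600,000
```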

Normal people have a number of heuristics to consider the value of animal life, but a common one is that it's better to kill ugly things than cute things. Which is an idea that you could force into one of the philosophical systems if you had to -- is animal cuteness a virtue? does it convey utility? -- but forcing is indeed all you're doing there.

This is really true of any standard. A funny example might be the Kaldor-Hicks criterion and the whole Caplan vs Hanson efficiency debate, https://www.econlib.org/archives/2009/04/are_grotesque_h.html . Similarly, a more self-aware utilitarian might present arguments around how it can be useful to have smart people coordinating on some standard, even if taken seriously it's ridiculous.

Well, you can view morality as a social tool - a way to judge how much effort others put in to make the world a better place. We have strong intuitions about this at the local level, religious or not. So trying to tie that to what will actually make the world a better place - based on rationality rather than religion - doesn’t seem to need much further justification.

"Atheistic moral philosophizing" is necessary to actually follow simple moral intuitions many have, like "don't hurt people", "think critically about your behavior" and "help others the best way you can".

> Based on what EAs have written, I have replaced much of the shrimp and chicken in my diet with beef and pork, which they say gets rid of most of the harm, but I’m not going any further than that.

I think this argumentation is totally mistaken and the net effect of your actions has sadly been to increase, not decrease, suffering.

As I argued here https://akarlin.com/animals/, it is likely that capacity to suffer is correlated with cognitive capacity. Consequently, even taking bulk into account, it seems that it is better to eat beef or chicken than pork, pigs being about as smart as the smartest dog breeds; far better still to eat fish (salmon in particular mostly die bad deaths anyway, so I don't think catching and eating them even increases suffering over the fact of their existence); and one can destroy and consume arbitrary amounts of crustaceans (lobsters have 100,000 neurons, fewer than an ant's 250,000) and be ethically almost indistinguishable from a vegetarian. After all, the vegetarian still ends up swallowing some bugs.
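
For what it's worth, here is a minimal sketch of the trade-off being made above. Every figure except the lobster's 100,000 neurons (taken from the comment) is an assumption invented for illustration, and the key free parameter is how suffering scales with cognition:

```python
# Toy model: harm per kg of meat = neurons^exponent / edible kg per
# animal. All numbers are assumptions for illustration only, except the
# lobster neuron count, which comes from the comment above.

ANIMALS = {
    #           neurons (assumed)  edible kg/animal (assumed)
    "cow":     (3_000_000_000,     250.0),
    "pig":     (2_000_000_000,      60.0),
    "chicken": (  200_000_000,       1.5),
    "salmon":  (   10_000_000,       3.0),
    "lobster": (      100_000,       0.4),
}

def harm_per_kg(name: str, exponent: float) -> float:
    neurons, kg = ANIMALS[name]
    return neurons ** exponent / kg

for exponent in (1.0, 2.0):  # 1.0 = linear; 2.0 = suffering rises faster
    ranking = sorted(ANIMALS, key=lambda a: harm_per_kg(a, exponent))
    print(f"exponent {exponent}: least to most harmful -> {ranking}")
```

Under these made-up numbers the linear version actually ranks chicken worse than pork (many small brains per kg of meat), and only a super-linear exponent vindicates "beef or chicken over pork" - which is the real crux of the argument.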

As you probably gathered from the above, I am mostly a pescetarian, although I do eat chicken and beef because of protein and taste, respectively, as well as cooking convenience. If fish could be made relatively more convenient, then I would be happy to switch over entirely until the development of lab meats.

author

Thanks, this is very interesting.

Most of the salmon available as food is farmed and has a pretty bad life. Wild-caught salmon is a different story.

Agree with almost everything except the speculation around whether EA ideas would better propagate without a movement.

It's true that many EAs are former bleeding hearts, but a non-trivial number are people like me, who don't score particularly high on empathy or compassion but were convinced by the arguments. And most people who fall in this camp, even if often neuroatypical, aren't autistic enough to make decisions or sacrifices based on these arguments if there isn't some mechanism for social rewards on this basis. The inertia is massive - in my case, working in biosecurity is something that no one else in my MBA class even considered. The reason I did was that, in addition to being convinced some lunatic or jihadi group could very plausibly kill millions of people with an engineered pandemic because of our collective stupidity, I knew it would give me a professional and social community that I could go back to if this weird bet didn't pay off. In other words, it's good that we have a community where status somewhat tracks how much good you do. Now, EA doesn't do this even close to perfectly (something I want to write about another time), but it does it better than any other place does.

THE VALUE OF A LIFE VARIES DRAMATICALLY

"people are really dumb for worrying so much about school shootings and terrorist attacks relative to other things"

- Ok, but these things obviously aren't like other crimes, and definitely not like poverty in Africa. What's happening here is that people, even when they won't admit it, value the quality of the victims, not just their number or the raw amount of suffering.

The average homicide victim has racked up a crime spree of his own and was a net negative to society. The average victim of a murderous husband, wasn't exactly an innocent bystander but had any number of opportunities to leave, and maybe even cooperated with his abuse of their children. And if you look at the demographics most affected by murder in general, you quickly realize that these people are not exactly generally valuable.

Most school shooting victims are, in fact, innocent, and of relatively high human capital. Few places could have held more human capital than the World Trade Center on 9/11. How many average lives, not to mention average murder-victim lives, were these people worth?

This is fair. In fact, let me go further than you do and claim that a life in the developed world is generally worth more than a life in sub-Saharan Africa, both by quality-of-life metrics and by how much we can expect the person to contribute to the world. This is an argument for incorporating that into your calculations. But good luck, because it's not going to change a damn thing, since it's soooo much more cost-effective to save a life in a poor country.

The median person in the US - even a kid killed in a school shooting - is not worth more than a few lives in a poor country. If you want an intuition pump for this: would you sacrifice one person in your immediate family so the rest of you could live in America, or would you choose to all move to Africa? If you thought life in America produced two times as much value, you'd easily sacrifice your family member just so the rest of you could move there. The fact that most wouldn't provides a hint. The number is somewhere between 1 and 2x, but certainly not >5x. You'd still comfortably endorse neglecting school shootings as a cause.
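
The arithmetic behind that intuition pump, as a minimal sketch (the multipliers are hypothetical):

```python
# The trade: sacrifice 1 of n family members so the remaining n-1 live
# lives worth m times as much. It pays iff m*(n-1) > n, i.e. the
# break-even multiplier is n/(n-1). Multipliers here are hypothetical.

def trade_pays(n_family: int, m: float) -> bool:
    return m * (n_family - 1) > n_family

for m in (1.2, 1.5, 2.0, 5.0):
    print(f"m = {m}: worth sacrificing one of four -> {trade_pays(4, m)}")
# Break-even for a family of four is 4/3. If rich-country lives really
# were worth 2x (let alone 5x), the trade would clearly pay; the fact
# that almost nobody would take it suggests the true multiplier is small.
```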

What if I simply don't value the lives of people on the other side of the planet who I have no connection to, at all? I don't wish harm upon them, but I simply don't care about them or what happens to them. I feel no obligation to go out of my way to do anything about them, in the same way that I feel no obligation to care about the squirrels who live in the trees around my town, and don't notice or care what happens to them.

If you don’t care about morality, or think it should just be based on how you feel, that’s fine. I don’t wish you any harm. But I’m not going to pacify you by calling that a well-reasoned moral argument. If you don’t care, just say you don’t care, like Richard alludes to in the article. Don’t go around looking for a rationalization for your position.

What is the "well reasoned argument" for why I am obligated to think and care about people on the other side of the world who I have no connections to? You are taking this for granted, "based on how you feel," as far as I can tell.

There are a lot of well-reasoned arguments, but you could start by looking into morality based on phenomenological consciousness.

I mean, have you read the accounts of European travelers about Africans offering their wives or children as slaves for trifles? I think you are projecting here.

---

Look at the list of achievements of Africans, or rather its non-existence*, and it's really hard to come up with any justifiable exchange rate on that basis; but it's certainly higher than 5X on a purely genetic-potential basis, and might be 100X if you consider the possibility of genetic potential being realized.

*addendum: Ok, that's unfair - they punch above their weight in music.

"100X if you consider the possibility of genetic potential being realized"

This whole "every white man is just a Beethoven or Leibniz manqué" belief is an embarrassing, unprovable, etiolated egalitarianism. If you're going to feign being the hard teleological meritocrat at least do it correctly, don't soften it with bad evopsych.

Looks like you’re actually just a piece of shit so I won’t bother.

There seem to be hard limits to the rationalism of Effective Altruists, and they appear to be pretty much identical to the prevalent moral taboos in elite U.S. society. Weird.

Lol, did you read my comment explicitly comparing the value of lives? But yes, trying to claim a genetic potential of 100x is really baseless.

Yes. From your 'intuition pump' it is clear that you believe the only reason Africans have less value is that they are physically resident in Africa (otherwise it makes no sense). But that is not the main point. We should be able to hash out the figures without you getting emotive, and we can't, and we all know why we can't.

Now, for my part, I don't just believe in utility, I believe in *marginal utility*. I wouldn't want to live in a world with zero Bantus, but I also think the incremental value of each new Bantu is very close to zero. Sue me.

I mean, you're on the Richard Hanania forum, what did you expect?

Where did you get 100x genetic potential from?

Not 100X genetic potential, but a 100X chance of it being realized given current systems. There is little chance of Europe colonizing Africa again, though I suppose China might do so, and there seems to be a minimum required concentration of competent and intelligent people for any of them to actually produce anything valuable.

But I suppose you might be right that I haven't valued subjective experience enough. My thinking is that our species is still in its earlier stages, and that the primary objective value of any of us lies in creating a world, or galaxy, where hundreds of billions can thrive; as such, the subjective experience of people whom one cares about for purely hedon-maximization reasons is not a primary or even secondary concern. But this seems like an easy way to get oneself derailed into total insanity.

I think your thought experiment suffers from the fact that most people are to some degree influenced by the Copenhagen interpretation of ethics - actively sacrificing someone is worse than passively allowing them to die. With the right reframing that obviates this, the intuitions are much less obvious.

Ya, fair criticism. However, alleviating disease and suffering is much easier to argue for than the value of a life. And even in the West, you’d give up a lot of happiness and comfort to avoid the worst types of suffering. Which again points in the direction of eliminating lead exposure and giving money to the poorest in the world, even if it doesn’t straightforwardly point to preventing malaria.

Wow. Who appoints the "life value police"? The same trouble here as with who appoints the "thought police."

If you can't assess the relative value of different forms of life, how could you do effective altruism? I can just say that my enjoyment of a steak is more valuable than infinity cows and you can't contradict me.

Simple. I don't do effective altruism.

The factory farming thing is not exactly new, or some kind of innovation by EA. PETA has been around forever. The reality is that people care more about people than animals. There isn't any kind of logical argument you can make against that. The fact that EAs both treat this as some kind of novel problem they spotted for the first time, and then just take it for granted that you are obligated to value animal life as much as they claim to - in the same way everyone hates vegans for - really does not endear them to people.

Going back to PETA, people don't hate PETA because PETA is some kind of moral mirror that actually makes them feel bad. They hate PETA because PETA are annoying, obnoxious moral busybodies wasting everyone's time with a problem that, to be frank, most people don't care about and don't consider a problem at all. EAs are less overtly meddlesome, but the same basic thing applies.

Although PETA is not EA and (unlike EA) seems to be interested in animal rights as much as animal welfare, PETA's founders were inspired by Peter Singer's book. https://en.wikipedia.org/wiki/Ingrid_Newkirk#Founding_of_PETA

Effective Altruists are basically utilitarians who have finally realised that utilitarianism is actually really complicated and thus requires a lot of math. This is a step forward, but they are still massively underestimating how complicated utilitarianism really is. They are analogous to socialists who believe that AI can solve the economic calculation problem.

Some of their recommendations are good, like that you should try to eat only free range meat. But all of their good recommendations have been made by others already, and also some of their bad ones (note that SA cited EA as influencing the British government to abandon its correct pandemic policy and embrace its failed lockdown policy).

It doesn't matter if your calculations are imperfect; they just need to be better than the alternative. In the case of communism, the alternative is the calculation done by the price mechanism, which is mostly quite good; we don't have a price mechanism for morality. What normally happens in the world is mostly determined by power dynamics and outdated moral intuitions meant to function in small, tight-knit hunter-gatherer societies. Even a naive utilitarian calculation can outcompete this reality in the average case.

Well, I've already cited an example, according to Scott Alexander, in which his gang got their calculations wrong and encouraged Britain to lock down three times.

I think EA works well, relative to moral intuition and traditional ideas, in a case like factory farming, which didn't exist until recently and where people do not have direct exposure to what is happening. Buying meat from a butcher seems to be much the same as it ever was even though actually it's totally different.

When did Scott claim credit for lockdowns?

In the first line of this article, RH links to SA, who writes that EA 'positively influenced some countries' pandemic policies' with a link to a tweet by Dominic Cummings in which Cummings credits reading EA blogs with changing his mind from a herd-immunity policy to a lockdown policy (elsewhere Cummings has written about being the de facto swing vote in the cabinet).

In addition to the general wrongness of lockdowns, it needs to be said that it was especially disastrous to move to a lockdown policy on the fly, after a pandemic had already started, and just ignore decades of pandemic planning. And it was especially, especially disastrous to try and do this in a country like Britain with highly eroded state capacity and famous inability to control costs. This is how you spend 35 million pounds on a tracing app that was *literally never even used*. How's that for effective altruism?

This is the tweet:

https://twitter.com/Dominic2306/status/1373333437319372804

It doesn't mention herd immunity or lockdowns at all. If you go to the post he was responding to:

https://putanumonit.com/2020/10/08/path-to-reason/

It also doesn't mention those things. Instead it's about being ahead of the curve on how big a deal COVID would be. And I remember how people drastically underestimated how many would die (leading to Greg Cochran winning all his bets against such people).

'Being ahead of the curve on how big a deal COVID would be' is not a policy. Lockdown is a policy, not doing anything is a policy, mitigation strategies are policies, focussed protection is a policy.

The fact is that Britain already had pandemic plans before EA dorks got their way. If they had followed those plans, everyone would have been better off. The key figure in choosing lockdown was Dominic Cummings. This is all public record, and he says it was EA that motivated him to do it.

To reiterate, lockdown was the wrong policy everywhere, but adopting it on the hoof was even worse, and doing so in a dysfunctional country like Britain was even worse. If the government's Covid policy had been burning a billion pounds in public while randomly machine gunning passers by it would have been less harmful than what they actually did.

This is a fully general argument against having any ideological bent at all.

Strongly feel like EA people can't figure out why others don't like their philosophy because they seem unable to conceive of a moral philosophy outside their own. Even when they try to understand others, like you've done here, they can't get outside the window of their own philosophy, so they never see why anyone is disagreeing with them and conclude it's just a deficiency on the part of their counterparties - feeling dumb, feeling defensive, feeling imposed upon. When in fact the most common moral stances are totally unrecognizable to utilitarianism, and no amount of understanding them *in terms of utilitarianism* will ever comprehend them.

No point in disagreeing with the article line-by-line; you'll get the idea from the intro:

> I think EAs miss just how much their ideology offends people by its very nature.

I agree with this wholeheartedly.

> EAs are, for example, attacked for being both “woke” and “white supremacist,” which is an indication that each side is basically just throwing out the worst epithet they have at the group.

This is just dumb; different people say different things for all sorts of reasons, and the only thing less useful than averaging over the opinions of a large disparate group is averaging over the opinions of the complement of that group.

> The main problem I think people have with EA is that it is a mirror. It tells you exactly what you are doing wrong, and why.

Completely disagree. The reason they dislike it is that it strikes them as profoundly immoral. Especially immoral is the fact that EA can't comprehend how EA could be immoral.

> Everyone accepts utilitarianism to some degree.

The degree to which they *don't* is what you're entirely missing.

I stopped at "I am too masculine, smart, and brave..."

The correct answer to animal suffering isn't to reduce the efficiency of our farming methods, but to breed dumber animals which don't suffer as much under the same conditions.

You're just describing lab-grown meat.

I pretty much agree with Richard throughout, but I still think that EA is basically just [vaguely Christian / Buddhist religious sentiment] x [highly educated elite]. The farther it goes down the rabbit-hole of its particular theology (in this case, utilitarianism + sci-fi novels), the less admirable it tends to become, just like any other religion. If you devote significant resources to stifling AI development because your religion tells you that algorithms running on computer processors will develop magical powers and cast an evil spell on all of humanity to make us kill ourselves, you're not much different from the church lady screaming about how you should remove the roof of your house because the Rapture is happening tomorrow, which is to say, a fucking loon. But that doesn't mean there isn't an intellectually defensible, highly admirable form of EA/Christianity/Buddhism/Islam/Judaism/whatever that focuses mostly on being a decent, pro-social person in your day to day life.

If the EA theologians really believed their own utilitarian bullshit, they'd be screaming at anyone who supported Scott Alexander's decision to donate his kidney, because it resulted in a 1% higher chance of Scott being incapacitated later in life, which would prevent him from proselytizing for EA quite so much, which would result in a 0.001% lower chance that some brilliant child would someday convert to EA and come up with the one true foolproof way to stop robots from doing a Skynet, which is the only thing that matters. The more an EA person tries to argue from suspect premises that have no support outside of their sacred literature (e.g., there's a thing called "superintelligence" that nobody has ever seen but is at least 78.6% certain to end the world some time later this year), the less I am inclined to view their adherence to EA as healthy or admirable. That being said: most people who would classify themselves as EAs seem very healthy and admirable! Just like most people who would classify themselves as Christians. All IME, of course.

Personally, I use utilitarianism as a heuristic when making certain types of decisions, while recognizing that it has plenty of limitations (which have been pretty well documented by moral philosophers over the last 300 years). And the older I get, the less useful I find it as a way of guarding against moral error. In fact, I find that it seems to lead otherwise smart people morally astray more than just about any other article of faith. The entire premise of utilitarianism depends on being able to predict all future consequences of one's choices (including how you talk to other people about those choices and how they respond), into eternity. So it only appears to work as a reliable moral guide for people suffering from overbearing hubris to the point of megalomaniacal delusion.

I don't hate EA. I mostly scoff at it while appreciating some of its value. Obviously charity ratings are very useful; who among us can do that kind of diligent drill-down legwork? I review various ratings now and then over the years, as I evaluate charities and decide how to donate.

But EA types, IME, tend to wildly overstate the "objective" and "obvious" value of their simplistic (and inevitably "biased") numeric formulas, presuppositions, and resulting evaluative conclusions. Scoff. So I look well beyond that, when seeking input on the charities I choose to support.

That so many prominent EAers think their judgments are superior to others' because they've won the economic survivorship/success lottery is quite off-putting, but I just shake my head, roll my eyes, and sigh resignedly at the foolishness of humankind. Then I ignore a decent chunk of what they have to say.

I really like EA and generally loathe the people who loathe it, but I wildly disagree with the overall assessment here. First, there's no "one" reason people dislike it, because the reasons are highly bifurcated by ideology. Rightists hate it because they oppose egalitarianism in general, and more specifically they hate the idea of forking over vast resources to save Africans, whom they regard essentially as some sort of mammalian locust swarm. Leftists, on the other hand, hate it because it prominently features people they culturally pattern match to "libertarian tech bros" who don't have the same political bêtes noires as they do (preoccupation with fighting racism/capitalism/transphobia/"justice"/whatever) and who don't consider revolutionary change to be desirable, much less absolutely mandatory.

Second, at least with regard to the latter camp (leftists), I think they dislike it in a sense because it doesn't shame or call people Hitler *enough*. That is, EA's are generally extremely chill, affectively positive, conflict-averse people who favor emotional carrots over sticks. But the most strident leftist critics of EA like Timnit Gebru are the polar opposite of that, and love nothing more than blaming problems of subalterns on the actions or indifference of heartless Westerners (specifically white Westerners).

If I were tasked with popularizing EA in some Leftist (with a capital "L") space, I'd advocate for all the same causes with all the same fervor, but I'd go out of my way to emphasize notions of guilt and complicity of Western societies over literally everything else. Africa is only poor because of centuries of ruthless colonialist resource extraction, which *you* personally benefit from every day, you monster, so the very least you could do is donate 10% of your income. Call it "direct action" or "mutual aid" or "reparations" or whatever. Make sure to include a lot of historical anecdotes about affluent white people visiting Third World countries and behaving in some entitled manner in the midst of crushing subsistence poverty, and go out of your way to compare those people to complacent Americans who feast in blissful ignorance while the rest of the world burns. Drop the word "justice" at every possible opportunity, and loudly insinuate that anyone who doesn't participate hates "justice" and favors oppression. You get the picture.

1. Richard, did you ever read the Caplan vs Huemer debate on animal rights?

2. Is it worse for a chicken to be eaten by a human than get torn apart in the wild by a fox?

3. “People often say that EA has a cold or autistic feel to it, but the irony of this is that modern Westerners with their WEIRD morality seem that way to much of the rest of the world.” Isn’t WEIRD just white? Won’t that disappear if the West is turned into the Third World demographically?

No, WEIRD is not just white. WEIRD is about culture, not ethnicity. Plenty of non-white WEIRDs these days.

2. Wild animal suffering is an EA cause area -- though not a very tractable one. Also, most EAs don't mind killing animals (or raising them to be killed) if it's done painlessly.

These issues are crazy complicated and radical, so yes, most EAs haven't really thought through all of their implications and are often naive, but at least they're trying.

Put more simply: EAs vibe like "experts." We're currently in an expert-aversion economy of ideas.

EA, being basically extreme utilitarianism, has the same issue that all utilitarianism has: it requires you to be able to accurately evaluate the proper utility function, which is a hard problem in general. With deontology (e.g. "thou shalt not kill"), you are unlikely to get the most moral outcome imaginable, but you are also unlikely to cause disaster. With utilitarianism, maybe your master plan to save the world works out and the ends end up justifying the means - or maybe you are Stalin, you deluded yourself into believing that communism could work, and now millions are dead for nothing. IOW, deontology is the allowance that utilitarianism makes for human fallibility.
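
One way to make that last sentence concrete - a minimal Monte Carlo sketch (my framing, with invented numbers), in which a naive expected-value chooser acting on noisy utility estimates occasionally picks the catastrophic option that a flat rule would forbid:

```python
# Toy model of "deontology as an allowance for fallibility". An agent
# compares a mundane action against a grand plan that is truly
# catastrophic, but its utility estimates are noisy -- and grand plans
# have much bigger error bars. All numbers are invented.

import random

random.seed(0)

def noisy_estimate(true_utility: float, noise_sd: float) -> float:
    return true_utility + random.gauss(0.0, noise_sd)

trials, catastrophic_picks = 100_000, 0
for _ in range(trials):
    mundane = noisy_estimate(true_utility=1.0, noise_sd=5.0)
    grand_plan = noisy_estimate(true_utility=-100.0, noise_sd=50.0)
    if grand_plan > mundane:  # naive EV-maximizer goes with the plan
        catastrophic_picks += 1

# A "thou shalt not" rule that forbids grand-plan actions outright never
# pays this cost (at the price of missing the rare genuinely great plan).
print(f"catastrophic choices: {catastrophic_picks / trials:.1%}")  # ~2%
```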

EAs, having higher IQs, are less likely to make certain kinds of dumb mistakes when evaluating utility functions. But, being human, they still have many weaknesses - for example, arrogance - which can lead them astray if not checked. Scott Alexander is pretty humble, so I am not worried about him; you, Richard, are not humble at all, so I trust your version of EA less.

"IOW, deontology is the allowance that utilitarianism makes for human fallibility." Sounds a bit like R.M Hare's Two-Level Utilitarianism
