Rationality Requires Incentives: An Interview with Steven Pinker
One of the best parts of becoming (sort of) famous in the last year has been getting to meet and form relationships with some of my intellectual heroes. Seeing those I’ve looked up to for years not only become friends but in many cases return the admiration has been extremely rewarding.
Few have influenced my thought as extensively as Steven Pinker. As a teenager (I think? Maybe early 20s?), I read The Blank Slate, and it fully convinced me that genes are really important, while parenting isn’t. Through most of history, nature versus nurture must have seemed like a philosophical or even theological question: what can we ever know? Well, through studying adopted children and twins, it turns out quite a lot. The laws of biology give us siblings with half of their DNA in common, and others who are identical. With some basic algebra, we can disentangle how much that extra similarity matters, and from that, how much parenting and the rest of the environment matter too. Of course, both nature and nurture have a role to play. But the two main lessons of behavioral genetics are that:
1) Nature matters a lot more than just about any government policy, academic field or mainstream political movement is willing to acknowledge, and
2) The part of nurture that matters isn’t parenting, but something else in the environment.
Of course, there are caveats, like the fact that the research generally looks at a restricted range of environments within one culture. Still, simply knowing that parents don’t matter within one community under most circumstances is a pretty big deal, especially considering that all social science findings break down under some conditions and few are anywhere close to this broad. Even with the limitations of the research, what we can call the default result from behavioral genetics–that most cognitive and intellectual traits on which people vary are approximately 40-60% nature, 0% parenting, and 40-60% everything else–is replicable and important enough to be, in my mind, the most underrated finding in the history of the human sciences. Pinker didn’t discover this himself, but his brilliance and commitment to scientific truth led him to recognize the fundamental importance of what behavioral genetics has found, and for decades he has been perhaps the most prominent and consistent voice in our culture trying to give humanity a deeper understanding of itself.
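The “basic algebra” is worth seeing. The simplest back-of-the-envelope version is Falconer’s method: identical twins share roughly all of their segregating DNA, fraternal twins roughly half, so doubling the gap between the two twin correlations estimates heritability. Here is a minimal sketch, with hypothetical round-number correlations (not real data) chosen to land in the range described above:

```python
# Falconer's back-of-the-envelope decomposition from twin correlations.
# The correlations below are hypothetical round numbers, not real data.
r_mz = 0.60   # similarity of identical (monozygotic) twins, sharing ~100% of DNA
r_dz = 0.35   # similarity of fraternal (dizygotic) twins, sharing ~50% of DNA

heritability = 2 * (r_mz - r_dz)   # A: additive genetic effects ("nature")
shared_env = 2 * r_dz - r_mz       # C: shared family environment (parenting)
nonshared = 1 - r_mz               # E: everything else, plus measurement error

print(f"nature: {heritability:.2f}")        # -> nature: 0.50
print(f"parenting: {shared_env:.2f}")       # -> parenting: 0.10
print(f"everything else: {nonshared:.2f}")  # -> everything else: 0.40
```

Real behavioral-genetic work uses structural equation models rather than this bare arithmetic, but the logic is the same: the A, C, and E components correspond to nature, the parenting-and-family environment, and everything else.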
If The Blank Slate was his only book, it would be enough to consider Pinker one of the most important intellectuals of our time. Yet every few years he puts together a beautifully written and accessible work on a fundamentally important issue. The Better Angels of Our Nature makes the authoritative case for human moral progress, and shaped how I’ve understood international relations perhaps more than the work of any specialist in the field. Further away from my main interests, I’ve also enjoyed several of his other books, particularly The Language Instinct (tweeted about here) and The Stuff of Thought.
Pinker has a new book out called Rationality: What It Is, Why It Seems Scarce, Why It Matters. I was delighted to interview him about it for the CSPI Podcast. In addition to the book, we talk about how the conversation about human nature has changed since The Blank Slate, differences between irrationality on the right and left, how democracy might work better, his attempted cancellation (which I wrote about here), and advice to young scholars. A transcript of the discussion is below, lightly edited for clarity.
(beginning of transcript)
Richard: Hi, everyone. Welcome to the CSPI Podcast. I’m here with Steven Pinker, a professor at Harvard and author of many important books that you should all read. Hey Steve, how are you doing today?
Steven: Fine, thanks. Thanks for having me.
Richard: We’re here to talk about your new book, Rationality. What’s the full title? Rationality: What It Is, Why It Seems Scarce, Why It Matters.
When I found out you were writing this book, I was interested because I thought, “How are you going to approach this question?” It’s such a broad topic and you’re no stranger to broad topics. Previous books are How The Mind Works, The Language Instinct, etc. How did you handle that? How did you sort of narrow the question and understand the concept of rationality in the book?
Steven: The heart of the book is a set of chapters on tools for rationality that I think many scientists, academics, psychologists, and social scientists feel should be part of everyone’s mental toolkit. We should all understand the basic idea behind Bayesian reasoning, that is, evaluating beliefs in the face of evidence, as well as distinguishing causation from correlation, logic, critical thinking, probability, game theory. These are just essential to being a rational, educated person. But I was not aware of any source that put them between two covers and tried to explain them to an intelligent, literate audience.
At the same time, each one of them provides an opportunity to discuss some of the very interesting findings from my own field of cognitive psychology, namely, what are the ways in which people tend to be irrational, the fallacies and biases and heuristics and errors that have been made famous by the work of Daniel Kahneman and Amos Tversky and behavioral economists.
So I wanted to contrast the normative benchmarks for rationality with the ways in which we tend to backslide away from them. I wanted to make the case for rationality, that being rational doesn’t mean being a cold, boring, nerdy Spock type, including its role in moral progress. And unavoidably I had to take on the question of why the world seems to be losing its mind. Why is there so much fake news, quack cures, conspiracy theorizing, post-truth rhetoric, paranormal woo-woo? It would seem that in an era in which we are technologically and scientifically more sophisticated than ever, we’re drowning in malarkey and flapdoodle.
Richard: Yeah. So what is the case for rationality? What is the case against someone who just says, “I don’t want to be rational in the first place. I think life is richer if I live in sort of a mythical universe where I walk around and indulge my instincts and just have fun.” What is the argument for rationality as a normative principle?
Steven: It’s a peculiar argument, because as soon as you ask for the argument for anything, you’ve already committed yourself to rationality. It’s a debate where you show up, you lose, because the very idea that I’m going to try to persuade you of anything, including the case for rationality, means that we’ve already signed on to the idea that you should believe things because there are reasons, as opposed to, say, being bribed, being threatened, being intimidated. So you’ve already signed on to it.
Once we’re having the conversation, which means there’s a sense in which you’ve already lost it, then you can also point out that rationality is not opposed to pleasure, enjoyment, emotion, human relationships. Rationality is always in the pursuit of a goal. It’s not just spinning out true thoughts. You could crank out the digits of pi till the end of time. You could crank out incredibly trivial and boring true statements generated by the laws of logic; that wouldn’t be counted as rationality. We deploy our smarts to attain goals. And those goals certainly can be pleasure at the joys of life.
I think when people contrast reason and emotion, thinking and feeling, what they often have in mind is a conflict between different goals like feeling good in the moment versus how you feel the morning after, or the week after, or the year after. Namely, how do we trade off immediate gratification to longer-term goals, and also, how do we trade off goals that conflict among people where there’s a zero-sum aspect to some of our interactions? Namely, there can be goals that an individual pursues that conflict with goals that other individuals pursue. And that’s another case in which it seems as if we’re... that there are limits to rationality, but it’s ultimately rationality that allows us to grapple with those trade-offs.
Richard: Yeah, I think that’s right. I guess, a different way to ask that question, is there a rationalist case against rational irrationality as far as, “Okay, I accept your arguments. I participated in this project of having a discussion and giving reasons for my belief.” But when it comes to religion, when it comes to politics, when it comes to my ultimate views of the universe, there’s no instrumental reason for me to believe in the truth. So I’m going to indulge in whatever I feel. And most people don’t think like this, but ...
Steven: Actually I think most people do think in that monologue that you just shared…
Richard: Not consciously …
Steven: Not consciously, but I think that’s a huge part of the answer to one of the puzzles that motivated the book. Namely, why does it appear that humanity is losing its mind? How could any sane person believe in QAnon or chemtrails, the conspiracy theory that jet contrails are really mind-altering drugs dispersed by a secret government program? And part of the answer is that most people, most of the time, are in fact rational about their day-to-day lives, about holding a job and getting the kids to school on time. They have to be. We live in a world of cause and effect and not of magic. So if you want to keep food in the fridge or gas in the car, you pretty much have to be rational.
But then when it comes to beliefs like cosmic, metaphysical beliefs, beliefs about what happened in the distant past and the unknowable future, in remote halls of power that we’ll never set foot in, there people don’t particularly care about whether their beliefs are true or false, because for most people and for most times in our history you couldn’t know anyway. So your beliefs might as well be based on what’s empowering, what’s uplifting, what’s inspiring, what’s a good story. And people divide, I think, their beliefs into these two zones. What impinges on you and your everyday life, and what is more symbolic, mythological?
It’s really only with the Enlightenment, more or less, that the idea arose that all of our beliefs should be put in the reality zone, should be scrutinized for whether they’re true or false. In human history, that’s actually a pretty exotic belief. I think it’s a good belief, a good commitment, but it doesn’t come naturally to us.
Richard: Yeah. That all makes sense. I mean, it’s a subtle point and it’s an important point, because you’re right. You see somebody who believes in QAnon and some of these crazy conspiracy theories, and you think, “How are they just not walking into walls? How are they connected enough to reality to live their lives, sometimes be successful in whatever career they’ve chosen?” And that all makes perfect sense.
The question is, if we come to the conclusion that they are being rational in being irrational, it seems like they’ve got us. What do you say to the QAnon supporter or the vax denier? Well, with the vax denier it might actually come back to hurt them. But what about the QAnon person or the chemtrails person who finds this satisfying, and says, “There’s no rational reason to be rational about this”?
Steven: Well, they might actually insist that they are being rational: they point to the research, they follow the evidence, it’s everywhere. And of course, believers in conspiracy theories can keep their beliefs well protected against falsification by meta-theories of how that’s what they want you to believe, how this just shows what a diabolical conspiracy it is, that the truth is so well hidden. It is one of those families of beliefs that are by their very nature resistant to falsification, which makes it all the harder to talk someone out of it.
I don’t think we have an algorithm for doing it because often the benefits that accrue to holding these beliefs are social. You’re part of a community that offers you warmth and support and succor, kind of like joining a cult. And it takes a lot to get people to part with those. In the same way that talking someone out of a religious belief is often difficult because it’s everything that makes their lives meaningful.
But you can nibble around the edges. There is generational turnover that new babies are being born every minute, and their beliefs have yet to be shaped. There are people who are on the verge. They’re not all in, and they can be persuaded. There are people who bump up against reality, like the vax deniers who come down with COVID. Or sometimes different social affiliations can come into conflict and yank you from one belief system into another.
I had a conversation with a journalist named Ellen Cushing, who as a teenager got sucked into belief in the Illuminati theory. She had a high school teacher who amazingly promoted it in class, and it sounded intriguing to her, and she then tried to convince everyone she knew. She grew up in Berkeley, but it soon became clear that the Illuminati theory is deeply anti-Semitic, which in Berkeley is deeply uncool. And she had to decide: is she going to be an anti-Semite, or is she going to … One social norm pulled her out of another.
We don’t have an algorithm to cure people of cult beliefs, but it has to be approached from a variety of directions.
Richard: Have you thought about the way some people get into comic books or Dungeons and Dragons? They seem to be indulging this need for a fantasy world that they can live in, and they form communities based on that. And then some people believe in chemtrails or QAnon or something. Do you think that’s speaking to the same human need? It seems like if it is, maybe we should try to subsidize comic books or something to give them what they need.
Steven: Absolutely it is. And it’s been noted that QAnon is like, kind of an online gaming platform where you spot clues, you get rewarded for ever more ingenious deconstructions. People avidly follow each other. So I suppose, yeah, constructive alternatives might be one way. But the thing is that in such a vast society of so many diversions, how do you offer one source of competing amusement that will suck people out of the destructive ones as opposed to Marvel Comics or Dungeons and Dragons?
Richard: Yeah, I think that has to be part of an approach because, yeah, it may be a great world where everybody could get into like healthcare policy or like the specifics of global warming and carbon tax and what would be a good policy, but that would be like an idealized democracy. That’s probably unrealistic. So maybe we have to think about what kind of institutions and things would actually appeal to people that can be sort of a counterbalance or a substitute for these other harmful, more destructive beliefs.
Steven: Yes. Well, it’s very hard to engineer this from the top down, but there still exist these things called service organizations, the Rotary Club and the Lions Club and Kiwanis, which offer people the community and warmth of traditional religions, or cults for that matter. They often direct activity toward constructive ends: building burn units for children in the case of the Shriners, eliminating eye disease in Africa in the case of the Rotary Club. The thing is that they are deeply square and uncool, which is a shame, because they have been mobilized to do good work. There was a time, at least in the ‘60s, when joining the Peace Corps was pretty cool, granted that its own track record was mixed.

Ideally there would be outlets for our need for affiliation, meaning, purpose that really do lead people to do good things; maybe effective altruism clubs, where the rationality community can serve that function. Or, for that matter, a lot of religions in practice. Even though I started by talking about how religions themselves are often based on fake news and conspiracy theories and the paranormal (what else are reports of miracles in the Bible but fake news of paranormal phenomena?), there’s been a huge trend where many religions have liberalized and secularized: Reform Judaism and liberal Protestant denominations, the Second Vatican Council, the Mormons periodically getting revelations that maybe African-Americans don’t bear the mark of Cain, maybe polygamy is not such a good thing.
So religions can change expediently, but the direction tends to be away from literalism and fundamentalism in many religions. And so they could, in theory, morph into service organizations and do-gooder clubs.
Richard: Yeah. That reminds me: your book is obviously pro-rationality, it says we need more of it. But the process you describe in religion seems like a bit of a flight away from rationality, because you point out here, and I think also in The Better Angels, that the most rational Christians were the ones who tortured people to make them stop being heretics and save their souls, which from a utilitarian perspective makes sense.
I think moving away from that, you obviously think it is a good thing, but it was a case of moving away from rationality. Maybe we need more rationality, but maybe we also need more compartmentalization. Like the people who are into Dungeons and Dragons have compartmentalized that from the political world, right? While the people of QAnon are sort of running campaigns on it.
Steven: Yeah. No, I think that’s exactly right. I mean, that really zeroes in on the psychology that’s involved. In both cases, the pleasure of fantasy and hero tales and morality tales is seductive. And the question is, can you keep it in a zone where you don’t literally believe in it? Because when you do, it can cause real trouble. The crusaders and inquisitors ... what have we got to say for them? They took their beliefs seriously, literally, which was a big problem.
Now, of course, everyone could become a secular humanist, but until that day comes, if it ever does, there can be a benign sequestering of certain beliefs into a kind of mythology zone, where, if you’re asked, you say you believe it, but you don’t act as if you really believe it.
Richard: Yeah. That’s one problem with people adopting beliefs that are irrational: bringing them into the political realm. Do you also worry about belief in science and rationality slipping into scientism? I read these newspaper articles sometimes, mostly in the prestige press, and they say things like ‘experts say,’ ‘scientists say,’ and it’s never based on a meta-analysis or anything. Sometimes it’s true, like the sun rises in the east and sets in the west. Sometimes, though, it seems to be highly contentious stuff, and it’s often ideologically convenient for the outlet that’s putting the ideas forward. So do you worry about that? And how do we guard against this problem too, of just calling whatever we like science or rationality?
Steven: Absolutely. I mean, it’s a built-in danger, though I don’t think it’s an indictment of rationality or science itself, because the power of rationality is that it can always step back and look at itself, or look at some application of rationality, and say, “Those people are claiming to be rational. Are they really? Or are they in fact irrational while claiming to be rational?” Likewise in science, at least when it’s done right, someone can say, “I’m speaking the scientific truth,” and someone else can say, “Well, no, you’re not.” That’s what happens when science works well.
Now, when it doesn’t work well is when there is an enforced consensus, often when there’s a kind of political and intellectual monoculture in science. There’s reason to believe that happens in big swaths of science. Another problem, which I think you’ve also pointed to, is a kind of illness of science journalism, one that science journalists are waking up to now: favoring the cute, counterintuitive, revolutionary, “everything you always thought was wrong” finding.
Often it’s a single finding, one experiment that was published in Science or Nature yesterday. That’s the way journalism often works. It’s about news. It’s about the surprise of it. If it doesn’t surprise you, it’s not news. But that’s really inimical to the way science ought to be done, which first of all builds cumulatively.
So a lot of good science reinforces a good theory rather than constant, what was it, Mao’s term? Continual revolution? And you noted that a lot of the attention-getting claims in science journalism are not from meta-analyses, but they really should be, instead of reporting something from the newsfeed of Science and Nature. By the way, Science and Nature are guilty of promoting this and contributing to the replicability and credibility crisis in science. It really should be the meta-analysis that gets the headlines, not the cutesy finding from yesterday, which probably eight times out of 10 is wrong.
And I actually talk in the book about why these cute findings are wrong, based on Bayesian reasoning, namely evaluating ideas in the face of evidence. Maybe we should start with the priors. Based on everything that we know so far, our understanding of how the universe works, how should we evaluate this new claim? If it’s a revolutionary claim, an “everything you always thought was wrong” claim, then it might well be wrong. Unless the strength of the evidence is overwhelming, the presumption should be that the entirety of our knowledge accumulated up to this point is not worthless, that it’s not going to be overturned by one study done by someone somewhere. But that kind of consciousness of Bayesian reasoning has been slow to penetrate science itself, and it is one of the reasons for the replicability crisis.
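Pinker’s point can be put in numbers. The following toy calculation is my own, with made-up but plausible values: suppose a counterintuitive claim has only a 2% prior chance of being true, a study testing it has 80% power, and the false-positive rate is the conventional 5%. Bayes’ rule then says that even after one significant result, the claim is more likely false than true:

```python
# Toy Bayesian audit of a surprising finding. All numbers are my assumptions:
# a revolutionary claim gets a low prior; the study has conventional
# statistical power and false-positive rate.
prior = 0.02   # P(claim is true) before the study, low because it's "revolutionary"
power = 0.80   # P(significant result | claim is true)
alpha = 0.05   # P(significant result | claim is false), the false-positive rate

# Bayes' rule: P(true | significant result)
posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"P(claim true | one significant study) = {posterior:.2f}")  # -> 0.25
```

With these assumptions the posterior is about 0.25, wrong roughly three times out of four; flashier claims with even lower priors approach the “eight times out of 10” figure.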
Richard: Yeah. I mean, one of my favorite stories about this, and I think you talk about it in the book where the journals found that there was a psychic ability where people could read minds and it was …
Steven: Yeah, they could predict the future. This is a result published by the eminent social psychologist Daryl Bem, who had been a fixture in social psychology for 50 or 60 years. It was published in one of the prestigious journals and passed peer review, which should tell us a little something about prestige in social psychology and fancy-schmancy journals and peer review: we’ve got a problem there. The experiment claimed to show that some undergraduates could predict a random choice by a computer of where it was going to place a pornographic image behind a screen, predicting it before the computer actually selected the image. Well, maybe there’s something a wee bit wrong with the study, even if the results suggest that your typical undergraduate has precognition. So that indeed was a failure of Bayesian reasoning in science itself.
Richard: Yeah. In your chapter, Correlation and Causation, I think that’s what it’s called, you seem to say something, and I want to see if this is your view, because it’s something that I’ve thought about and come close to believing. You seem to argue that randomized controlled trials and natural experiments are basically the gold standard for establishing causation. And then you have what I think you call regular regression: just looking at whether one thing predicts another, putting in a bunch of controls, and seeing what happens statistically.
How big is the gap between the RCTs, the natural experiments, the regression discontinuity designs, and just the regular ordinary least squares regression type of analysis? Do you think that’s a big gulf, or are they more on the same level?
Steven: Well, ordinary least squares is often the best that we can do. You measure an awful lot of things, you’re interested in a particular outcome, you add up and weight all the putative causes, and you see if each one of these variables can account for some percentage of the variance in what you’re interested in, holding all the others constant. The thing is that often it’s the best we can do, just because society itself is not a lab in which we can run randomized controlled experiments.
If we want to see what the effect of social media is on political polarization, we can’t take one city and deprive it of social media for a year and another one and let them have access to Twitter and come back a year later. People are going to have Twitter if they want to have Twitter. And unless we have powers like Mao Zedong, we can’t impose a randomized controlled trial on a country as a whole. So sometimes the combination of ordinary least squares, with all kinds of cleverness like regression discontinuities, or natural experiments, the kind of stuff that Freakonomics made famous, is the best we can do.
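A toy simulation, my own illustration rather than anything from the book or the interview, shows both what “holding all the others constant” buys you and why it is weaker than an RCT: the regression only recovers the right answer if the confounder is actually measured and in the model.

```python
# Toy simulation of confounding in ordinary least squares (my own example).
# A confounder Z drives both X and Y; X has no causal effect on Y at all.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)              # confounder
x = z + rng.normal(size=n)          # X is caused by Z
y = 2 * z + rng.normal(size=n)      # Y is caused by Z only

def ols(outcome, *regressors):
    """Least-squares coefficients, with an intercept in column 0."""
    X = np.column_stack([np.ones(len(outcome)), *regressors])
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

naive = ols(y, x)[1]           # slope on X with no controls: biased toward ~1.0
controlled = ols(y, x, z)[1]   # slope on X controlling for Z: close to 0.0
print(f"naive: {naive:.2f}, controlled: {controlled:.2f}")
```

Here the controlled estimate comes out right only because we simulated Z and included it; in real observational data, the advantage of an RCT is precisely that randomization handles the confounders you never measured.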
Richard: Yeah. Well, would you consider the possibility that sometimes the best we can do is worse than nothing? Because if we do nothing, we at least don’t have any false certainty about what we know, while if a method is not very good, we end up with hundreds of studies all saying the same thing. And then they say “experts say ...” and people take that as a real scientific finding. Is it possible that the costs of doing bad research sometimes outweigh the benefits?
Steven: Well, I think what that would call for is being able to comment on, bracket, put into perspective, maybe even discount bodies of research if there’s reason to think that they’re systematically misleading, or if they’re looking for the keys under the lamppost because that’s where the light is good. We should be aware of that. Now, I suspect that knowing the findings and knowing their limitations is better than flying blind and being completely ignorant. But that having been said, it is possible that the only quantitative method available to us is so misleading that we’d be better off not knowing it at all. If so, though, there have to be reasons for stating that, and we should state them: namely, that this whole line of research has unearned claims to precision and one ought to discount it. You can say that if you have good reasons to say it.
Richard: Yeah, I think that’s right in theory. If you’re just a person trying to understand something about the world, something is always better than nothing, and knowing the limitations of that one thing is good. I worry that in the real world, in the wild, if you have 200 studies and hundreds of experts saying the same thing, and they’re all based on the same false methodology, the real-world effects of that tend to mislead people as often as, or more than, they enlighten them. But it’s a problem, yes.
Steven: It is a danger, I mean, when that happens. So yeah.
Richard: Right. So one thing I really liked in the book, going back to this zone of the reality mindset and the mythology mindset, and how we compartmentalize: when I started training in political science, they started us in statistics, and we learned Kahneman and Tversky, all the cognitive biases people have and all the mistakes they make. They didn’t actually teach us the other part of that, which is that people can suddenly become rational when the structure of the problem stays the same but the content is relevant to some real human problem.
So you talk about this one experiment in the book where people are looking at cards and they’re not very rational, and then a version where they’re trying to catch people, to see if they’re following the rules when it comes to using stamps. Can you explain that real quickly? Because I just think that’s a beautiful demonstration of how people can be rational with the same structure of the underlying problem.
Steven: Yeah. This is a real chestnut in psychology; it goes way back. It’s called the card selection task, and it shows a failure of logical reasoning. You give people four cards. Each card has a letter on one side and a number on the other, and the rule to test is: if a card has a D on one side, it has a three on the other. Do these cards actually obey the rule? And you show them a D, a three, a seven and an F. So remember, the rule is if D then three. Which cards do you have to turn over?

The highly replicable result is that people turn over the D, or they turn over the D and the three. The correct answer is that they should turn over the D and the seven. Everyone agrees you’ve got to turn over the D; that’s the easy part. But you really don’t have to turn over the three. The rule says if D then three, not if three then D. And you really do have to turn over the seven, because if it had a D on the other side, the rule would be dead. But it doesn’t occur to most people to turn it over.

So that’s the classic finding, sometimes explained as confirmation bias. Not exactly accurately, but good enough for now: many people seek out evidence that confirms their beliefs, and don’t seek out evidence that might falsify them. It’s a real result. It replicates up the gazoo. But before we write off people as hopelessly irrational, it turns out that there’s an important twist, sometimes called the content effect. Namely, if you replace the D’s and threes, which are pretty abstract and boring, with socially relevant contingencies, especially permissions and precautions and privileges and rights, then suddenly people turn into logicians.
So let me be concrete. If a bar patron is drinking alcohol, he must be over 21. Now, who do you check? Do you card the guy with the glass of beer? Do you card the guy with a glass of Coke? Do you look into the glass of someone who is clearly over 21? Do you look into the glass of someone who’s clearly under 21? Now everyone gets the answer right. Well, of course you card the guy drinking beer, and you look into the glass of someone who’s under 21. And that is the logically correct answer. So it’s hasty and glib to say people are irrational.
What people don’t have is a general-purpose, abstract logical rule that they can deploy like a gunslinger in any circumstance, no matter what the content: they know this is a rule of logic and you apply it. Our logical thinking is kind of baked in with our subject-matter knowledge. Which in a way is pretty rational. It’s not logical, but for living your life, you often don’t need to apply abstract rules of logic, like modus ponens or the law of contraposition. I think it’s with the Enlightenment, with university education, with tools of logic and statistics and math, that we do have these general-purpose, content-free tools, and it is a good thing to know them. But they don’t come naturally. So the conclusion is not that we’re illogical. The conclusion is that our logical thinking is married to particular contents; it’s not a general, across-the-board tool.
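The structural identity of the two versions can be made explicit in a few lines of code. This is my own illustrative sketch, not anything from the book: a single test, turn a card only if what’s hidden could break the rule, yields the logician’s answer in both the abstract and the bar version. What differs in humans is whether the content recruits that test.

```python
# A sketch of the selection task's logic (my own formalization, not the book's).
# A card must be turned over iff some possible hidden side could violate the rule.

def must_turn(visible, side_a, side_b, violates):
    if visible in side_a:
        return any(violates(visible, b) for b in side_b)
    return any(violates(a, visible) for a in side_a)

# Abstract version: "if a card has a D on one side, it has a 3 on the other."
letters, numbers = ["D", "F"], [3, 7]
breaks_rule = lambda letter, number: letter == "D" and number != 3
print([c for c in ["D", 3, 7, "F"]
       if must_turn(c, letters, numbers, breaks_rule)])    # -> ['D', 7]

# Bar version, identical structure: "if drinking beer, must be over 21."
drinks, ages = ["beer", "coke"], ["over 21", "under 21"]
breaks_law = lambda drink, age: drink == "beer" and age == "under 21"
print([c for c in ["beer", "coke", "over 21", "under 21"]
       if must_turn(c, drinks, ages, breaks_law)])         # -> ['beer', 'under 21']
```

In both cases the untouched cards are the ones whose hidden side cannot possibly produce a violation, which is exactly why turning over the three (or checking the over-21 patron’s glass) is wasted effort.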
Richard: Yeah. So is the key to that one, the difference between flipping over the cards and people drinking alcohol at the bar, that it involves humans cheating on something? Is that the content that makes us so good at this? Or is there something broader or different going on there?
Steven: Well, that is a famous hypothesis by my good friend Leda Cosmides, based on a prediction from evolutionary thinking: namely, that what makes us social as a species is the ability to enter into and enforce social contracts. That is, we cooperate with others because we know that if you take a benefit, you have to pay a cost reciprocally. I do a favor for you, you do a favor for me. And in order for me not to be exploited, I’ve got to make sure that if you take a favor, you have paid the cost beforehand.
And there is a debate within the literature of cognitive psychology, namely, what kinds of content turn people into logicians? It can’t be any old content. But certainly detecting cheaters, which is equivalent to falsifying an if-then rule, the if-then rule in this case being if you take a benefit, you have to pay a cost, sharpens the mind. People are very, very good at that.
Leda Cosmides has debated other cognitive psychologists who claimed that there are other circumstances that can turn people into logicians. I think she is pretty much right. I think she herself has also noted that there are other salient, evolutionarily relevant forms of reasoning that can also turn us into logicians. Detecting cheaters isn’t the only one. Taking precautions might be another one. We live in a world of dangers. Something like if you get on a bicycle, you should wear a helmet. That’s another category of if-then rules that people are pretty good at falsifying as well.
Richard: Yeah, that’s interesting. I was wondering, when I started reading the book and I looked at the different chapters, I was interested in that there wasn’t, I mean, this does come up a little bit throughout the book, but I was surprised that cost-benefit analysis didn’t get its own chapter. Because it seems to me that people, maybe this is not a strict point about rationality, but it seems to me in our public conversations we look at one side of the ledger. We look at the cost of something, or we look at the benefits of doing something. And we just don’t look at the other side, right?
Like you’ll often see a study, they’ll say masks work. And then people will jump to, okay, we should mandate masks everywhere. And the question is what are the costs of masks? How much do they work? So it’s just the beginning of the conversation to show masks work, right? It’s not the end of it. How do you think about cost-benefit analysis in terms of these other concepts? And how central do you think it has to be to having a rational understanding of politics and social issues?
Steven: Yeah. Well, I couldn’t agree more. And indeed it is a huge blind spot in a lot of our public deliberations. People will identify a danger and say, well, we’ve got to zero it out, without taking into account the costs. And it is folded into the book in the chapter on what I call rational choice, expected utility. Because the expected utility of an option is in fact the probability-weighted sum of the costs and benefits. But you’re right, I probably did not spell out the implications in the public sphere as much as I did in the individual sphere.
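The probability-weighted sum Pinker describes can be sketched in a few lines of Python; the probabilities and utilities below are purely hypothetical, chosen only to show the mechanics.

```python
# Expected utility of an option: the probability-weighted sum of the
# utilities (benefits positive, costs negative) of its outcomes.
# All probabilities and utilities below are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * u for p, u in outcomes)

# A made-up policy choice, folding in both sides of the ledger:
mandate   = expected_utility([(0.5, +10), (0.5, -4)])  # 5 - 2 = 3
no_action = expected_utility([(0.5, +5),  (0.5, -5)])  # 0
print(mandate > no_action)  # -> True
```

The point of the exercise is that both the benefit and the cost columns enter the same sum; showing that one side of the ledger is nonzero settles nothing by itself.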
It also figures into what psychologists call signal detection theory, or more broadly, statistical decision theory, which is setting a decision criterion when there are costs both to misses, that is, something happens and you act as if it hadn’t taken place, and to false alarms, something that is not there, but you act as if it is. And that is kind of a hybrid of Bayesian reasoning, namely evaluating the likelihood of some event or outcome or idea, and rational choice, namely weighing up the costs and benefits of each kind of error. So I could have played up more the relevance of cost-benefit analysis in public choices. But those are the two chapters in which that idea is explained.
So things like, we know about it in the case of legal decision-making. How do we trade off the danger of falsely convicting an innocent person versus falsely acquitting a guilty one? The costs and benefits there are reckoned partly in moral terms instead of in dollars and cents, but the logic is the same. But yeah, in the case of masks, or lockdowns, or any other policy, it really takes place at two levels. One is: what are the net benefits and costs? And the other is: if we get that wrong, how bad would it be? That is, do we err on the side of caution? How much of a risk should we take? What’s the worst thing that could happen if we’re wrong in each of the two ways of being wrong?
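The trade-off between the two ways of being wrong can be sketched as a simple Bayes decision rule: act on a warning when the expected cost of a miss exceeds the expected cost of a false alarm. All numbers here are hypothetical.

```python
# Bayes decision rule for acting on a warning: act when the expected
# cost of a miss exceeds the expected cost of a false alarm.
# All numbers are hypothetical.

def should_act(p_event, cost_miss, cost_false_alarm):
    """True if acting is cheaper in expectation than not acting."""
    return p_event * cost_miss > (1 - p_event) * cost_false_alarm

# A rare but catastrophic event can still justify precautions...
print(should_act(p_event=0.02, cost_miss=1000, cost_false_alarm=10))  # -> True
# ...while a cheap-to-miss event with costly precautions does not.
print(should_act(p_event=0.02, cost_miss=10, cost_false_alarm=10))    # -> False
```

This is the hybrid Pinker mentions: the probability comes from Bayesian reasoning about the evidence, and the two cost terms come from rational choice.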
Richard: Yeah, yeah. This influenced my thinking on COVID. Because at the beginning, you didn’t know what the worst-case scenario was. We didn’t know. The fatality rate numbers coming out of China were in some cases absolutely massive, and that was a possibility. As time went on, we had some experience. More people got vaccinated, more people acquired natural immunity. And so, yeah, it seems to me the whole calculation changed. The number of people that could be killed, the error bars, just sort of contracted. And it seems like we never changed our thinking. We stayed in the same mindset we were in at the start of this. And yeah, that’s an important point, you have to look at…
Steven: Yeah, I couldn’t agree more. And one thing that I didn’t talk about at all in the book, but that could have been there if I had another chapter, is what they call public choice theory. What are the different incentives of the decision makers, like the public health bureaucrats and everyone else? Because the signal detection or statistical decision theory, which I do explain in the book, has different payoffs for those different parties, the principals and the agents, there can be some unexamined irrationalities in our decision-making, where we’re doing what’s best for bureaucrats covering their anatomy rather than for the public as a whole. And that should play a more prominent role in our decision making. Namely, the experts have their own set of incentives. They ought to be acting in the interests of the people, but they don’t necessarily.
Richard: Yeah. I mean, I like that, because you could sum up the idea of why we have so little rationality when we need it in terms of: people are rational when the incentives line up, and they’re irrational when the incentives don’t line up. Or rather, they’re still rational when the incentives don’t line up... But I guess another formulation would be something like: people are always being rational, right? Because they’re rationally irrational. They’re being irrational when it makes sense.
And so in that sense rationality is pro-social in some circumstances and it’s anti-social in other circumstances, when you’re believing in QAnon and voting on that to fit into some community. And the question is how we take rational humans as they are, and then we try to create incentives where there are more situations where their rationality can be put in a pro-social direction? Is that sort of a solution to this at the broadest level?
Steven: Yeah, absolutely. And that is a big part of the resolution of the puzzle of how a rational species can embrace so much codswallop and nonsense and conspiracy theorizing. Namely, rationality is always relative to some goal. This goes back to David Hume. The way he put it is that reason is always a slave to the passions. By which he didn’t mean we should go crazy and spend like a drunken sailor. I think what he meant was that we always deploy our rational faculties to try to achieve something, possibly including objective truth, but not necessarily objective truth. And if someone is saying things that earn them prestige within their own clique, their own community, or, conversely, if they’re avoiding things that would make them a non-person in their social circle, there’s a sense in which that’s rational for them, for that goal, that goal being: be accepted by your buddies, the people who will determine your social fate. It’s irrational for the society as a whole if everyone acts that way. Because the higher-order goal that we all ought to strive for is an objective understanding of reality. And collective rationality often comes from implementing rules, norms and institutions whose goal is objective understanding rather than local social acceptance or empowering stories.
Richard: Yeah. So I mean, one of the things that I was struck by is that there seemed to be this discrepancy between the way you described the problem, which I think is perfect and spot on, but then the solutions, it seems like they run into the basic problem of what you’re trying to solve, right? So you say create more social norms for people to be rational. Great. I mean, if we could do that, that’s excellent. The question is who has the incentive to do that? And you described the terrible incentives of, for example, voting or bureaucrats in the public sector, such that it almost seems insurmountable. And it almost seems like maybe we should be thinking about solutions that are a little more radical or outside the box. Do you have any thoughts on that?
Steven: Yeah, I don’t think it’s intractable and hopeless, because we do have institutions that are better than the alternatives, which doesn’t mean that they’re perfect. But in general, science has not done a bad job at discovering the truth, although with obvious lapses and reversals and blind spots, which themselves can get overcome over time. So science is doing something right. Liberal democracies are pretty nice places to live compared to a lot of the alternatives. And so mechanisms of free speech and popular representation are probably better than theocracies or strongman societies. For all their flaws, some things are better than other things, which doesn’t mean that we can’t identify their flaws and try to make them better still.
And indeed, we know that a lot of voting systems are better than hereditary monarchies, but there are irrationalities that are built into voting, especially when you have first-past-the-post, winner-take-all systems. We know what the irrationalities are. We also know that no system is perfect, but that some are better than others. And democracy in general, because you’re kind of empowering people with no direct skin in the game, can lead to irrationalities when people use voting as a means of self-expression rather than evaluating and opting for the best policies. That is, best in the sense of achieving things that they themselves agree are good. Like safety, like affluence, like peace, like health.
And there may be alternatives that we’re slow in exploring, especially in the United States, which is such a kind of lumbering government and society. Maybe because we’re so big, we tend not to be as innovative as some of the smaller countries. But maybe citizens councils, where some people are drafted to work out a policy. They’re politically diverse, but they have to sit together in a room and recommend something having studied the problem. That might be conducive to better policies than just having everyone walking into a voting booth once every four years and ticking a box.
Richard: Yeah. What do you think about say, I’ve seen people who are monarchists make the argument that the incentives are more aligned for a monarch who wants to have... In a modern version of this, I think you can just look at something like the system in Singapore, for example. Unquestionably democracy’s track record is better than dictatorship, as a whole. I think in the 20th century, a lot of that was confounded by the fact that so many dictatorships were communist. And I think that that relationship between democracy and economic growth has sort of broken down as dictators are no longer communists. I think that might’ve been one of the most important …
Steven: Well, true. No, I think that’s true. But there were an awful lot of fascist dictators and military juntas and strongman states. Some of them did gravitate toward communism, and you’re right that that’s a confound. And for economic growth, there’s another complication, which is that the deck is often stacked against democracies in that kind of comparison. Because if democracies are rich, and they tend to be on average, it’s harder to get 3% economic growth if you’re a rich country than if you’re a dirt-poor country. If you start off dirt poor, some pretty basic infrastructure improvements, like an electrical grid, can really jack up your GDP. If you’re already Switzerland, to grow by another 3%, to expect exponential growth indefinitely, is a pretty tall order. And that’s going to make democracy seem less efficient at economic growth than at least poor dictatorships.
Richard: Yeah. Yeah. I think, though, I’m not sure exactly what the literature says now, but I think often when you compare poor countries, dictatorships versus democracies, if you look at, for example, the two biggest in the world, China and India. China has been doing much better than India. I don’t know if that holds across all dictatorships versus democracies. But that’s like a third of humanity right there. So it’s an interesting data point.
Steven: But China started off a lot poorer than Western Europe or the United States. And if you start off poorer, then it’s easier to get a certain amount of percentage-wise growth. I’m surprised at how little that’s pointed out. The expectation of constant growth kind of assumes that our wealth can grow exponentially. And maybe it can, and in fact it has been, but it’s a pretty remarkable expectation when you think about it: that no matter how rich a country is, it ought to increase its affluence by the same percentage as it has all along. And it’s kind of a miracle that it has.
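The catch-up effect here is just arithmetic on different baselines; a quick illustration with made-up GDP figures:

```python
# The same absolute gain is a huge percentage for a poor country and a
# tiny one for a rich country. GDP-per-capita figures are made up.

poor_gdp, rich_gdp = 2_000, 80_000   # dollars per capita
gain = 800                           # same absolute improvement in both

print(f"poor country: {gain / poor_gdp:.0%} growth")  # -> 40% growth
print(f"rich country: {gain / rich_gdp:.1%} growth")  # -> 1.0% growth

# And sustained 3% growth compounds: by the rule of 72, wealth doubles
# roughly every 72 / 3 = 24 years. That is the "remarkable expectation"
# of indefinite exponential growth.
print(72 / 3)  # -> 24.0
```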
Richard: Yeah. There’s two things, where if you’re comparing poor countries and rich countries at the same point in history, so in 2021, the poor countries, they can adopt the technologies that the rich countries have, so they have more potential return. I think people concerned about the slowdown in the United States are not saying, well, the US is growing slower than China. You would expect that because China is much poorer. I think they make the comparison when the US was on the technological frontier and leading the world in the 1950s and 1960s, they were growing at this rate, and now in the 2000s and 2010s they’re growing at a much slower rate, right? So that’s the question. Why was the US at the forefront of the world, and Western Europe, I think, to a lesser extent, and they were growing much faster in the 1950s and 1960s when today we’re growing slower, right? That’s the question.
Steven: Right, right. And it’s partly a technological question of whether there’s a technological slowdown. In other respects, the economists talk about secular stagnation, whether there are demographic or fiscal or monetary inefficiencies or barriers to growth that we’re kind of stuck with for a while. It’s a bit beyond my expertise. Or a lot beyond, I should say.
Richard: Sure. Very rational of you to recognize that. [laughter] So, yeah. I’ve been talking recently to Phil Tetlock and Robin Hanson, and one idea I’ve been discussing with both of them is making more use of betting markets to get a better understanding of what’s going to happen in the world, and also potentially as a selection mechanism to select elites. There was a story in The Economist not that long ago that British intelligence had set up a forecasting website, and the hope is to take the top x number of people and maybe rely on them in case of crisis. Who knows if they’ll actually do that. Do you have any thoughts about that, using forecasting tournaments and new kinds of technologies to identify better elites and predict the future?
Steven: Yeah, I think it’s a great idea. I think I’d incline more toward Phil Tetlock’s superforecasters and forecasting tournaments than prediction markets, which are still amazingly good. But I’m not an expert on this academic literature. There have been attempts to assess the accuracy of forecasting markets, and I think there are some built-in irrationalities there, namely that the bettors tend to overweight very small risks, for example. Which of course would mean that there are openings for someone to make a killing in the prediction markets by exploiting the irrationalities that are built into them. But still, they’re certainly a lot better than punditry. [laughter] And superforecasters are better still, is my understanding. At least so our friend Phil Tetlock claims.
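The overweighting of small risks implies a mechanical trading opportunity; a sketch with hypothetical prices and probabilities (this is not a claim about any real market):

```python
# If bettors systematically overprice a low-probability event, betting
# against it has positive expected value. Prices and probabilities here
# are hypothetical.

market_price = 0.05   # market-implied probability of the event
true_prob    = 0.01   # a better-calibrated estimate

# Buy "NO" at (1 - market_price); it pays 1 if the event doesn't happen.
stake  = 1 - market_price              # cost per contract
payout = 1.0
ev = (1 - true_prob) * payout - stake  # expected profit per contract
print(f"{ev:.3f}")  # -> 0.040
```

In a fully efficient market, traders making this bet would push the price back toward the true probability, which is why persistent overweighting of small risks counts as a built-in irrationality.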
Richard: Yeah. So just changing gears a little bit. The first book I read by you was The Blank Slate. Was that the first book you ever published?
Steven: No, I think it was the fifth or sixth. I published two pretty technical books on language, on language acquisition in children, that were published by university presses and not sold in stores. And so I got tenure, but then I published The Language Instinct, How the Mind Works, Words and Rules, and then came The Blank Slate. And there actually is a kind of logical progression, because I wrote The Blank Slate partly over the controversies that I had raised, I wouldn’t say obliviously, I knew that they were there, but over some of the reactions to How the Mind Works and The Language Instinct. Namely: why is the idea of human nature so incendiary?
Richard: Yeah. So I just wanted to ask you... so that was the first book of yours I discovered. I’ve also gone back and read some of the other ones. The Language Instinct, by the way, for anyone listening, I just found beautiful. I think you’re a great writer, and you’re proof that rationality does not foreclose the ability to… [laughter]
Steven: Oh, that’s kind of you. Thanks.
Richard: It’s very literary and great, sort of artistic in the way you put these books together. And so, yeah, I brought up The Blank Slate because I was wondering, how do you feel about the way the public discussion over these things has moved? Because I read that book, I also read Judith Rich Harris, who I know you’re a big fan of, and it seems like the science has advanced, the genome-wide association studies have then come along and sort of, I think, told a consistent story. And despite the attention that you got and Harris got, it seems like the public conversation has moved in the opposite direction if it’s moved at all. How do you feel about sort of how we think about these things and how it’s changed in the last 15, 20 years?
Steven: Yeah, it has, just in the last five years or so, or less. The Great Awokening has involved a huge lurch back toward the blank slate, after something of a window that I think opened in the late nineties and the first decade of the 21st century, where it was not as taboo to discuss genetic and evolutionary influences on our emotions and thoughts and behavior. But that window... there are people trying to slam it shut, for sure. In The Blank Slate I tried to identify why what ought to be an empirical question in psychology, namely, which aspects, which dimensions of human psychology are influenced by our evolutionary history, acting via the genes, and which ones are the result of ontogenetic learning and enculturation, is treated by no one as just an empirical issue. Certain answers get you canceled. With some exceptions, like the causes of homosexuality and now of being transgender, where the politically correct position is all nature, no nurture. But in general, it’s all nurture, no nature. And The Blank Slate tried to explore what the hot buttons are: why does this empirical issue get people so roused up?
I identified a small number, four fears that human nature raises in people, largely on the left, but not entirely. The right has its own taboos as well. And they have been revived in the last few years, and expanded. When it comes to racial differences, which is one of the big third rails: one of the reasons that people want to believe that we’re blank slates is that it makes any differences between the races impossible. If we’re all blank, then zero equals zero equals zero, and it would just be scientifically impossible for races to differ. And so you wouldn’t even have to have the debate. It would be impossible. Everything is due to learning and enculturation. Now, the converse doesn’t hold. It could be that there’s a rich, universal human nature and no differences between races. We’re all humans. We all have the same human nature. But just to build a safe zone around the equality of all races, it’s often been expedient to deny that human nature exists at all. And that has expanded so that even culture, which used to be the alternative to nature in explaining ethnic differences, has now been demonized. If you even say that there are cultural explanations for the different fates of ethnic groups, you can be canceled. It has to be all racism, all the time. That’s the only permissible explanation for ethnic differences.
Richard: That’s sort of the lay of the land. So it seems like, yeah, there was once a genetics versus cultural debate, and now the culture …
Steven: Yeah, forget about it. Not any more.
Richard: Yeah. That’s right. Right. Culture is sort of the far right of the debate and the genetics has fallen off …
Steven: Yeah, I know, it’s blaming the victim. It’s the culture of poverty. It’s the model minority. There are all kinds of stigmatizing labels within academia and journalism for that family of ideas which yeah, used to be the politically correct alternative to genetic explanations. Now it too is politically incorrect.
Richard: Yeah. So, I mean, all else equal, you’d think that the side with science on its side would at least see things move in its direction. Well, that hasn’t happened. Do you have a good theory as to what’s changed? Because those concerns were always there, right?
Steven: Yeah, they were always there, and it’s not clear what caused the sudden awokening of the last two to three years. I think it’s an interesting phenomenon to try to understand, and I don’t think we do understand it. I did identify four fears of human nature in The Blank Slate, and a lot of the features of what we now call cancel culture were there starting in the ‘70s. People advancing certain taboo hypotheses, such as innate racial differences, were physically assaulted or fired. Or take the case of my Harvard colleague, E.O. Wilson, who did not advance hypotheses about racial differences, just about universal human nature: he famously had a pitcher of ice water dumped on his head at a meeting of the American Association for the Advancement of Science, and a bunch of left-wing academics signed a manifesto denouncing him.
So the signs were there. As I say, Billy Joel was wrong: we did start the fire, we baby boomers. But it has really kind of burned out of control in the last three years. So why this sudden increase? It may be that there are some self-reinforcing dynamics, such as one suggested by Scott Alexander. Let’s say your way of calibrating how egalitarian you should be is that you should be more anti-racist than the mean in your social group. So, however much people are opposed to racism, and they are, it’s one of the great triumphs that you really don’t have overt racism in the way you would have 50 or 60 years ago. What do you do if you’re trying to be less racist than everyone else, and they’re not very racist in the first place? Well, you start denouncing Halloween costumes. There could be a kind of arms race in anti-racism. That’s part of the explanation.
Another part is that the establishment of a cadre of bureaucrats in institutions like universities, and increasingly corporations, means that the system itself rewards that. There’s a way of getting prestige and suffering no cost: you express a grievance to the diversity, inclusion and equity officer in your organization, and organizations are now kind of built to favor that kind of grievance. And there are probably dynamics that just play themselves out with different subject matters in different eras. I’m by no means the first to point this out, but there are comparisons to the Chinese Cultural Revolution, the Salem witch trials, the European witch hunts, Stalinism, McCarthyism, where a regime of competitive denunciation, denounce lest you be denounced first, can entrench itself. You can get spirals of silence: if you punish people who doubt something, and you punish the people who say that people should be allowed to doubt it, that can become entrenched and reinforce itself, and we may have stepped into that vortex.
Richard: Yeah, I was going to ask about the growth of administrators. I’m sure you’ve seen the chart where the number of professors sort of stays the same and the administrators just, yeah, go through the roof. I think another problem was that the professors changed, and maybe this took a while to manifest itself, but a lot of disciplines were created for political reasons. A lot of the “studies departments” were created because students occupied some office, and that’s not the normal way that academic fields develop. [laughter] And so those people became part of the faculty, and they brought certain political commitments and ideas. Yeah. I think Zach Goldberg has argued that it was the rise of social media. So if you look at the timing of the Great Awokening, it takes off around 2010. That’s about the time Twitter becomes popular, and you could see how Twitter could facilitate this, right? You have people just talking all the time, looking for the most outrageous thing possible, sharing the craziest things, confirming their biases. So yeah, there’s a lot going on here.
Yeah, I think that in the US in particular, the inability to close racial gaps is just such a persistent frustration of our culture. It’s something that, since the 1960s, goes in cycles. We pay a little more attention to it or a little less. So in the ‘60s we pay a lot of attention to it. In the ‘90s and the early 2000s maybe we don’t pay too much attention to it. In the 2020s now we’re obsessed with it. And it seems like this is just a problem that doesn’t get solved, and then it’s always sort of there to activate all these other emotional currents. People think about these other things sort of abstractly. But I think that the racial disparities between whites and African-Americans in the US have a huge psychic pull in our culture. Any thoughts on that?
Steven: Yeah, I tend to agree. And sometimes I think that instead of focusing on the statistical disparities, it might be healthier to have a more colorblind policy and colorblind science, while auditing for racism itself with the fake resume studies and so on (if we do have evidence of racism, we ought to go after it), but perhaps with some kind of benign neglect toward ethnic and racial differences. Because when you think about it, there are all kinds of ethnic differences that we could look at that we don’t, and it’s probably good that we don’t. If you were to divide every social and economic variable by Jewish and non-Jewish, I think it would be kind of ugly, and I’m glad that we don’t. Or Muslim and non-Muslim. There are all kinds of ethnic differences that we don’t obsess over; we just don’t go there. And often they’re scientifically trivial anyway. But we’ve locked ourselves into an obsession over racial differences that may be the cause of some of our divisions. Yeah.
Richard: When you said benign neglect, it sounded like you were at the start of that saying, sort of the opposite. You have this department of rooting out racism, right? And then maybe that silos it, because you could just have the department of rooting out racism, it doesn’t have to affect the rest of science or society, right? I mean, that’s..
Steven: Well, Ibram Kendi did propose a federal department of antiracism at the cabinet level. Yeah. I don’t think that’d be... I think we’d burst out of the silo.
Richard: Okay. Yeah. There’s no convergence between you and Kendi on this.
Steven: Probably not. Yeah. [laughter]
Richard: So relatedly, you had sort of a controversy where people tried to cancel you over some stupid things, some tweets that were just so stupid they’re not even worth responding to or justifying with a response. I wrote an article in Quillette about it. Can you talk about anything that surprised you or anything you learned during that process? Was this the first sort of mobbing you’ve been through as an academic, or is this just one of many?
Steven: It’s not one of many, but it wasn’t the first. There was an episode at an event at Harvard on something like the theme ‘Did political correctness help elect Trump?’ And there is reason to believe that it did, that is, that a reaction to political correctness did. And I pointed out, based on my own experience, that the alt-right does not just consist of tiki-torch-wielding skinheads, but that there are actually some smart, educated people who have gravitated to the alt-right precisely because academia has become so suffocating that they figure it must be suppressing some kind of truth. They just want an arena in which they can explore ideas, think their thoughts, and then see what’s true and what’s false. And the more that academia punishes certain realms of belief, the more intellectually curious people are going to go elsewhere, including to some unsavory cliques.
That event, the video was taken out of context, so it made it sound as if I was saying that people in the alt-right are intelligent and educated. And it led to, fortunately, and I don’t know if it would still happen, an op-ed in The New York Times the next day called “How Social Media is Making Us Stupider,” by Jesse Singal. So that was an early attempt. Fortunately, he came to my defense then, and in the case of the petition by a bunch of linguistics grad students to the Linguistic Society of America, I had 14 different people coming to my defense publicly. So I was lucky. And again, I can’t complain, because I don’t think I’m that easily canceled, and what they proposed doing in any case was to take my name off a list of media experts in linguistics and revoke my distinguished fellow status from the Linguistic Society of America, which no one cares about anyway. And even that didn’t happen.
No, the real fear is the message that it sends to people who are less powerful than me. I mean, I’ve got tenure, which is an amazingly... talk about privilege. Forget being white, having tenure is the ultimate privilege. I was actually skeptical of whether tenure really was defensible. I’m starting to see its merits. Namely, there’s just no way in the world I would write what I wrote if I didn’t have tenure, because I’d know that I would always be in jeopardy of losing my job. But the fear is the message that it sends to the non-tenured, to the journalists, to the post-docs, to the grad students, to the assistant professors, namely, say something that someone somewhere can interpret as a dog whistle for something bad, and your career might be over.
So one of the reasons that that petition upset me was the message of intimidation that it sent everyone else. And the other was just the sheer idiocy, the fact that people who presumably are smart are... I mean, I guess this is the left wing version of QAnon and chemtrails. Namely, if there is a moral crusade, then there’s just no bottom to the stupidity. You can deploy all of your rational faculties to completely idiotic conclusions if it promotes solidarity within a sect.
Richard: Yeah. Well, one thing I’ve been thinking about, just going a little off topic here, is the difference between what is now right-wing irrationality and left-wing irrationality... We’ve touched on it a little bit in this conversation, but it seems like there’s a difference between QAnon or chemtrails… I don’t know if chemtrails is that associated with the right. I think probably…
Steven: I think not. No, that’s right. You’re right.
Richard: Yeah. Well, I mean, QAnon, anti-vax, I guess, is identifiably a right-wing thing now, just looking at polling and vaccination rates. And this seems to be just sort of a platonic form of stupidity, right? It’s just people believing things that are not true, right? But the letter trying to cancel you was not exactly that. They didn’t say that Steven Pinker kidnaps children or something like that, right? It was predictable, right? Predictable from an ideology. And so there seems to be a kind of craziness that is just taking some premises that are wrong, like the blank slate view of human nature, and following those to their rational conclusion. So it’s rationality built on a terrible foundation, and we can maybe identify this with the left today. And then on the right, you have platonic irrationality, just things that make no sense.
Steven: Classic. [laughter] Yeah. I mean the closest maybe on the right would be the attempts to cancel the Never Trump Republicans, the Liz Cheneys and Mitt Romneys, to try to make it just outrageous that they should even be within that tent, this disagreement over whether Donald Trump can be the ringleader, the torchbearer of American conservatism. And it doesn’t seem to be a debate that they’re having, but if you doubt Trump then you’re radioactive. So that’s maybe the closest… But I agree that there’s a difference between holding crazy beliefs and systematically trying to disable the mechanisms of debate and deliberation. And that is an important difference because it is just those mechanisms that implement rationality at the level above the individual.
Richard: Yeah, I mean, even the cancellations for not supporting Trump seem to be pretty issue-free. That makes them again distinguishable from the left. It’s just loyalty to one man, which seems more like human nature, what humans have generally been like: tribal, worship a leader, don’t insult our symbols. While leftism seems to be, I don’t want to say a more advanced form of irrationality or hysteria, but it’s sort of like that. You need this intellectual gloss to get there.
Anyway, so that’s a fascinating topic, to compare and contrast the right and the left and the different ways they’re irrational. Which I don’t think anyone does, because everyone is on one side or the other, so nobody sees clearly what’s going on, on both sides.
Steven: There are a few. I talk about it... In social and political psychology, there is a tiny group of researchers who actually try to compare these kinds of motivated reasoning and myside bias, to see whether they’re stronger on the left or the right.
For example, if you give people logic problems… just, do the conclusions follow from the premises? It’s well known that if the conclusions favor the right or favor the left, then people on the left or the right either accept them or don’t, even though the question is not “Are they true?” but just “Do they follow logically from the premises?” Similarly, do these data show the efficacy of gun control or not? Whether the people looking at those numbers interpret the study as favoring one policy or another depends on which policy they believed in before they even looked at the numbers. And the more numerate you are, the worse off... the more liable to fallacies you are. So the meta question there is: is that kind of fallacy stronger on the left or the right?
And the studies that I cite seem to suggest that it’s pretty bipartisan. That is, I talk about the “bias bias”: when you always assume that the other side is biased and you’re not. And then there’s what I call the “bias bias bias.”
Steven: Namely, the claim from people on the left that the right is more vulnerable to the “bias bias” than the left is. And that itself seems to be a “bias bias bias”: both sides are biased about how much they think the other side is biased.
Richard: Right. Exactly. Yeah. So you always see these studies, you know, “The right believes in more disinformation.” Or something like that. And then you look at the study, and it asks questions like, “Was the election stolen in 2020?” Right, okay. You didn’t pick that question randomly. You knew what you were doing there.
Steven: Yeah. No, that’s right. And often, as I say, studies that find that conservatives are more bigoted will... it depends on the target group. If you substitute in, say, Christian fundamentalists for Muslims, then you find that it’s people on the left who show the same degree of bias, at least as assessed by those studies, as people on the right. But if the study you ran only had designated sympathy groups for the left as the targets, then it’s going to look as if people on the right are more biased.
Richard: Yeah, exactly. So, with all this political bias in academia and what’s going on, and you talked about the letter as a means to discourage young people from speaking out, what advice would you have for somebody who’s thinking about academia, who wants to go into, say, psychology, or political science, or economics, but is afraid of the political climate and whether they’ll be able to do research the way they want?
Steven: Yeah, it’s an agonizing question. Part of it is, choose a program that is committed to viewpoint diversity, where at least you’ll have, kind of, a policy among your professors, and fellow grad students. Another is, wait until you have tenure before you express your most outrageous opinions.
Actually, it doesn’t even have to be outrageous opinions. In fact, the first piece of correspondence that you and I had was one where you made what I think is a brilliant and valuable point, which is that the problem in academia is not that people are being canceled for expressing outrageous views, but that they’re canceled for expressing views that might very well be correct.
Steven: And that’s the problem. So even when it comes to radioactive beliefs that might be correct, take advantage of this perhaps archaic, perhaps justifiable peculiarity called tenure, and do the kind of yeoman work within your field, adding to knowledge in domains that are not absolutely incendiary and inflammatory, until you’re in a position of strength.
So that’s another bit of kind of cynical, practical advice. But the other, and this would be more advice to the people in power than to the people who are starting out, is: you’ve got to change the system. Scientists are, or at least ought to be, constantly on the lookout for bias. We have double-blind designs with placebo controls so that we don’t have the distortions of experimenter expectancy, namely cooking the data to favor the hypothesis you want. Well, political bias is another contaminant of the scientific enterprise; that should be singled out, and we should have safeguards against it. This is a point that, you mentioned Phil Tetlock earlier in the conversation, he has made together with Jonathan Haidt and some others, Duarte and Lee Jussim.
Richard: Yeah. I mean, I think that’s a difficult piece of advice. Wait until you have tenure, in 20 years maybe. I think you have to have a low time preference for people to take that advice. I mean, if you have the passion for a subject, go for it. And if your ideas aren’t that radical, and you feel like you can advance knowledge without stepping on toes, I mean, go ahead, but yeah. If you really want to say something that is controversial, or would get you in trouble, telling people to wait 15, 20 years, it is just a very, very hard pill to swallow.
Steven: It is. And it is a pathology of academia that I’d have to offer that advice. The thing about tenure, of course, is that it’s great when you have it, but ideally there should be freedom of inquiry for everyone, at every career level. And of course, the granting of tenure itself can be distorted by conformity to the politically correct ideology.
Richard: Yeah. Do you think the rise of the administrators, and the restrictions on speech, and things like the diversity statements that you have to submit to universities... it seems like the system is getting better, or “better,” at filtering out anyone who might say anything wrong. You’re right, it does seem like there’s more of an ideological selection to make sure that by the time you get to that end point, you would have had to have been a great actor to have …
Steven: Yes. No, I think it truly is pernicious, and those diversity statements really are an outrage for job candidates. So I think it’s also good that we are starting to have organizations that are pushing back, because a lot of this is just bullying from some people in power, reflecting perhaps not majority opinion, though a growing minority. Organizations like Heterodox Academy, the Academic Freedom Alliance, the Foundation for Individual Rights in Education, Counterweight. The people who believe in freedom of inquiry and freedom of thought really do have to organize and push back, because the institutional forces have been gaining in strength and have become increasingly entrenched in universities. And part of the reason they’re entrenched is, going back to the principal-agent problem, or public choice theory, that the incentives for deans and provosts and presidents have so far been stacked in one direction. Namely, no one suffers any consequences from implementing these repressive policies, whereas they know that if they stand up for intellectual freedom, they might be carried out of their office by protestors. There might be noisy demonstrations outside their office, banging pots and stuff. But if there were organizations that would make the life of a college president miserable if he tried to suppress academic freedom, that is, some nuisances from the other side, then I suspect a lot of them, if you were to interview them confidentially, would turn out not to believe the things that they say in their public statements.
But they know what side their bread is buttered on. They know how to avoid damaging publicity. So there should be damaging publicity from the other direction too.
Richard: Yeah, well, one thing, you talk about pushing back within the institutions, but Eric Kaufmann has written a report for CSPI where he actually did surveys of professors, and I think he punctured the myth that there’s this silent majority that’s pro-free speech. There’s a lot of ambivalence, depending on how you ask the question.
Richard: But the unquestionably pro-censorship side does outnumber the pro-free speech side. And it gets worse the younger you look. If you look at PhD students, versus …
Steven: No, right. There’s definitely a cohort effect. My memory is, and actually I did cite that in my recent interview in the New York Times Magazine, where I think they actually mentioned it in a footnote. I don’t know if they named the CSPI study itself, but that’s what they alluded to: that the intolerance does tend to increase with younger cohorts.
But I might be misremembering the data from that very, very interesting study; I thought it was still a majority, though a shrinking one, who were at least in principle in favor of freedom of expression.
Richard: I think it depends a lot on how you ask the questions.
Steven: You’re right. Yeah.
Richard: So, I think if you just ask straight up, you know, “Free speech versus representation in the curriculum,” or the question of whether people want control over their syllabus, then in that case they’ll be alright. But if you ask about specifics, like “Should someone be able to talk about race and intelligence?” or this or that, it actually looks much worse.
Richard: So the overall lesson is that there’s no unambiguously pro-free speech majority, or even plurality, in academia. But what that leads Eric to say is that he thinks the reform needs to come from outside the university, right? You need regulations from government. I think in the UK they took some of the suggestions he and other people have made, and they’ve appointed people from outside the university to look into free speech issues. Have you given any thought to possibilities like that?
Steven: Yeah, so it should be... you don’t want the government to be putting restrictions on content, as in “you may not teach Critical Race Theory,” as some of the state legislatures have tried to do. And to the credit of the Foundation for Individual Rights in Education, they have opposed that kind of heavy-handed interference on content coming from the right as strongly as they have the abuses coming from the left. But I think it’s completely legitimate to have some kind of transparency in the process by which these policies are set. And if there are flagrant violations of the kind of fiduciary duties of a university, such as allowing different opinions to be expressed if they’re backed up by argument or evidence, then universities are inviting external scrutiny, external control, if they continue to flout them.
So it shouldn’t be content, but it should be the rules. That is, the... We may need external referees and umpires that are not players on one side or another.
Richard: Yeah, so just to wrap up, your book on rationality just came out. You always tackle the big issues, mind, language, war, the course of human history, rationality. Do you have any idea what’s coming next yet?
Steven: Yeah, I do have a line of research that was going to culminate in a book this year, but I put it on the back burner in order to write Rationality. But I do have a proposal for a book that would be called, Don’t Go There: Common Knowledge and the Science of Hypocrisy, Civility, Outrage, and Taboo.
Steven: It will have to be a very different book, because I first conceived it before the Great Awokening. I mean, those issues were there, they go back to the ’70s if not earlier, but the explosion of the outrage industry, particularly in academia but also in journalism, means that I’ll probably have to spend more attention on the recent manifestations. But the key idea is: what is it that makes something outrageous, as opposed to something you merely disagree with? What marks that line? And I suggest it’s a phenomenon of what the game theorists and philosophers call common knowledge: the difference between everyone knowing something and everyone knowing that everyone knows it. We know that that is a major logical difference, that there are some things you can deduce from knowing that everyone knows something, as opposed to just everyone knowing it. But it also makes a huge psychological difference. It’s the difference between something being out there, out of the bag, the little boy seeing that the emperor is naked, versus a kind of elephant in the room, where everyone pretends it doesn’t exist even though they know that it does.
And that psychological difference, and logical difference, drives a lot of our social and political discourse in terms of both genteel, and sometimes beneficial hypocrisy, politeness, tact, innuendo, euphemism, on the one hand, and things that are out there, often triggering outrage on the other.
Richard: Yeah. Well, I said that was the last question, but that just reminded me of something I’ve wondered about. So, John McWhorter has written about how the old swear words, the F-word and stuff like that, are not really profanity anymore, and that the most sacred words, the things that are most taboo, are racial slurs, homophobic slurs, things like that. Does that say something very deep about our culture? Is it possible to read too much into it? Does it have any historical precedents, or is this something strange? Because of the way you talked about it... I remember the way you talked about swearing in previous books.
You say things like, “When people are in pain, or they want to express strong emotion, they go to the divine, they go to fornication, they go to bodily functions.” Right? And the fact that that’s not profanity anymore, but racial slurs and homophobic slurs are... at the deepest level, what does that say about what’s happened to society?
Steven: In terms of which words become taboo and which ones don’t, you see very broad, long-term, kind of tectonic forces that make some things less outrageous than others. The most obvious one is religion. There are virtually no religious taboo words anymore: “hell,” “damn,” “Jesus Christ.” There used to be. We know that because the movie Gone With The Wind created a sensation when the last line was, “Frankly, my dear, I don’t give a damn.” That was really edgy back in 1939. Now it has to be explained to someone what the fuss was about.
Or earlier, when Pygmalion, the predecessor to My Fair Lady, was first performed on the stage, the Eliza Doolittle character caused a ruckus when she said “not bloody likely” at an upper-class tea party. And not only did the character shock her fellows within the plot of the play, but the audiences of the play itself were shocked that the word “bloody” should be said on the British stage. Again, you have to explain it.
And in fact, sensibilities had changed so much that by the time Pygmalion was adapted into the musical My Fair Lady, they had to change the shock line, because no one would have understood the outrage at the word “bloody.” That’s why, at the Ascot races, it became “Move your bloomin’ arse.” So it shows that the shock words can change over time, and in general, the fact that our culture has become more secular partly explains why religious taboo words have lost their sting, and why racial and homophobic words have acquired one.
On top of that, though, it isn’t just the content, because for every taboo word there is a genteel synonym. So it’s not that you just can’t talk about it. It’s that the words themselves take on taboo status in a kind of self-reinforcing dynamic, where the fact that other people treat them as taboo means that they are taboo. And so some words can drift in and out of taboo status. It’s the particular word, not just the content. But yeah, we’ve seen it with racial words, including ones that even 10 or 20 years ago... John can get away with it probably more than I can, but I uttered... as the linguists would say, I didn’t use the words, but I mentioned them in my capacity as a linguist.
Steven: I study words. How do you study something if you can’t even talk about what you’re studying? And probably some enemy of mine could dig through YouTube and see me utter the N-word, in a long list of other profanities that I was...
Richard: Give them the blueprint if they want to make a better attempt. Shows you how incompetent those linguistics grad students were, that they didn’t even find that. [laughter]
Steven: Yeah. Right, right. Maybe we shouldn’t be giving them these clues. And the thing is, linguists, of all people, should know the difference between use and mention. Namely, actually weaving a word into your speech, with all of its connotations, versus discussing it at a meta level as a word that other people say in certain circumstances. And bizarrely, one of the ways in which our intellectual culture has become stupider is that that distinction is adamantly denied. Donald McNeil, the New York Times reporter, lost his job because he discussed the N-word, actually mentioned it, in a conversation with some high school students, and years later he got sacked for it.
Steven: This is like primitive word magic, the idea that you can’t even utter the word or it will arouse dreadful, awe-inspiring forces. It’s the opposite of lesson number one in linguistics, which is that words are just conventions, and their meaning comes from the way people interpret them in context.
Richard: Yeah, that Don McNeil one is nothing. Did you hear about the guy teaching Chinese at USC…
Steven: Oh yes. The guy who used the pause word, “nei ge,” in Chinese. Yes. That’s the problem, and again, going back to that LSA petition: nothing is too stupid to cause outrage.
Richard: Yeah exactly. On that book on common knowledge, is there an estimated time of arrival, or is it still in the embryonic stage?
Steven: Oh, that’s just a... At the time that you and I are having this conversation, Rationality hasn’t even hit the stores.
Steven: So I’ve got a lot of talking about that book. So probably not for four or five years.
Richard: Okay. No rush. I just look forward to reading everything you write. Steve, it’s been a pleasure, thanks for joining us.
Steven: The pleasure’s been mine. Thanks so much Richard for having me on.