How a humiliating military loss proves that so much of our so-called "expertise" is fake, and the case against specialization and intellectual diversity
Great piece, but conflicted in certain ways. You make sweeping claims that “’expertise’ as we understand it is largely fake” and that “the entire concept of specialization” is “the main problem with academia”. But then you point to fields like civil engineering, physics, and aerospace engineering as examples of genuine specialized expertise.
Moreover, the “real” fields have formal structures similar to those of the “fake” fields. They have their own university departments, degrees, journals, and so on. And although some STEM research requires expensive infrastructure, other research doesn’t. Most mathematicians don’t need scanning electron microscopes or the Large Hadron Collider, so their work could be done outside formal academia. Yet formal math academia seems to work okay.
A better take is not that expertise is generally fake, but rather that expertise becomes increasingly fake as its domain shifts from the analysis of inanimate things to the analysis of human behavior.
What your “fake” fields have in common is that they are concerned with predicting and managing human behavior, whereas the “real” fields are concerned with predicting and managing the behavior of inanimate physical objects: rockets, bridges, electrons, etc. Or with abstract mathematics.
Why this dividing line? Probably because human beings, and especially human brains, are extraordinarily complicated. Compared with other things, the human brain remains poorly understood. As a result, fields that rely upon an understanding of it make less progress.
This dividing line is evident in domains that straddle the human and the inanimate. In medicine, it seems to me that orthopedic surgery is more “real” than psychiatry, not because psychiatrists are dumb but because the brain is more complicated than the anterior cruciate ligament.
Likewise with COVID. Expertise in vaccine development, which relies upon serious knowledge of biology and chemistry, is clearly real. It’s unlikely that Philippe Lemoine could step into the shoes of a senior scientist at BioNTech or Moderna and match their performance in a few months. But epidemiological modeling, which relies upon assumptions about a vast array of human behaviors, has proven to be mostly fake. Predicting human behavior is harder than predicting the efficacy of a vaccine.
This is a good comment, but I disagree that the main problem in the human sciences is that they’re too complicated. Social desirability bias is a bigger issue: people don’t have ideological commitments about how to build bridges and don’t judge each other based on their engineering views. Social scientists get many obvious things wrong, like male/female differences and whether you can build a feminist democracy in Afghanistan.
I agree that this is a huge problem too. Blank Slatism in the Western social sciences is the equivalent of Lysenkoism in Soviet biology.
Higher dimensional data, though, definitely makes it easier to find a model that fits both one's biases and the data. If y = a*x + b, there's no way I can force x to not have a positive effect on y. But if I have 50 more variables plus interaction effects, it is much more likely that there are models in which the coefficient on x is negative, 0, or positive that all fit the data equally well. So it may be that in fields where the possible range of models is much more constrained, it's harder to let one's biases run wild.
I'm not doubting the importance of social desirability bias, but I also wonder whether people, when presented with a surfeit of possible models that all seem to fit past data similarly well, naturally resolve the choice by deferring to their biases.
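To make that concrete, here's a minimal simulation of the specification-search problem, with all data and coefficients synthetic and invented: with one regressor the estimated effect of x is pinned down, but once you can pick and choose among many candidate controls, the coefficient on x varies widely across specifications that fit about equally well.

```python
# Synthetic illustration of specification search: with many optional
# controls, the estimated coefficient on x depends heavily on which
# specification you happen to pick.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 8                                    # observations, candidate controls
Z = rng.normal(size=(n, k))                      # candidate control variables
x = Z @ rng.normal(size=k) + rng.normal(size=n)  # x correlated with the controls
y = 0.5 * x + Z @ rng.normal(size=k) + rng.normal(size=n)

coefs = []
for r in range(k + 1):
    for subset in itertools.combinations(range(k), r):
        X = np.column_stack([np.ones(n), x] + [Z[:, j] for j in subset])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        coefs.append(beta[1])                    # estimated effect of x

# 2^8 = 256 specifications, one "result" each; the true effect is 0.5.
print(f"coefficient on x ranges from {min(coefs):.2f} to {max(coefs):.2f}")
```

A researcher with a prior about x can simply report whichever specification lands where they expected.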
The fact that OP chose to implicate the "complexity" of humans rather than the biases humans have about humans is a case in point.
I would argue that fields like engineering and the natural sciences have these structures because they need to, and fields like the social sciences and humanities have these structures to borrow the legitimacy of engineering and the natural sciences.
Falsifiability distinguishes fields in which expertise matters from fields in which expertise is fake. You must consistently have more new data than you can explain away; only then will expertise develop through trial and error.
Some fields are simply impossible to have meaningful expertise in.
> whereas the “real” fields are concerned with predicting and managing the behavior of
> inanimate physical objects: rockets, bridges, electrons, etc. Or with abstract mathematics.
Academic computer science is often a "fake" field. Though advances in e.g. machine learning do sometimes come from academia, and there do exist successful companies that came from someone's PhD (Google is one), most Comp. Sci. academic papers and even subdisciplines are useless in the real world (read the old Joel on Software post on his brief period in grad school at Yale). I've heard similar stuff about work in math depts also.
It's not fake in the way epidemiology is fake, but academic CS does have a problem with chasing uninteresting ideas that mainstream industry rejected and doing so for decades without end. There's no feedback loop so e.g. functional PL theory becomes ever more exotic and disconnected from the languages that are actually used to write real software, and nothing pushes it back to the mainstream. Cryptography is also an unfortunate subfield, with many algorithms presenting "proofs of security" that don't actually prove anything interesting. And that's applied maths, so ... yeah.
Academic computer science as you describe it seems just a subdiscipline of academic mathematics. The divide seems to be "exact/inexact" rather than "real/fake", and for whatever reason exactness is simpler in two disparate subject matters - inanimate objects and abstract math.
I wouldn't go so far as saying "epidemiological modeling . . . has proven to be mostly fake." It's a fantastic set of mathematical and statistical tools for analyzing historical or clinical data. It works exceedingly well in this capacity -- else modern drug development would be little different from a medieval apothecary! It's also quite good at pointing out fake cures like hydroxychloroquine (again, in a post-hoc, well-designed clinical study).
The acute problem we've seen over the past year and a half -- and the sort of problem, though not exactly the same, that Tetlock analyzed -- is that these models don't have much predictive power over what will happen, as opposed to what has happened. How could they? They're mostly linear (or linearizable through fancy techniques), and the future writ large is non-linear.
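To illustrate how sensitive forward projections are to behavioral assumptions, here's a toy SIR model - not any agency's actual model, and every number below is made up. Small changes in the assumed contact rate, which is really an assumption about human behavior, swing the projected peak severalfold.

```python
# Toy SIR model: the projection hinges on the assumed contact rate (beta),
# i.e. on an assumption about how people behave.
def peak_infections(beta, gamma=0.1, days=365, n=1_000_000, i0=100):
    """Forward-Euler SIR with one-day steps; returns the epidemic peak."""
    s, i = n - i0, i0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s, i = s - new_infections, i + new_infections - new_recoveries
        peak = max(peak, i)
    return peak

for beta in (0.15, 0.20, 0.25):  # modest differences in assumed behavior
    print(f"beta={beta:.2f}: peak ~ {peak_infections(beta):,.0f} infected")
```

In this toy version, the smallest and largest assumed contact rates differ in projected peak by roughly a factor of four, which is the gap between "manageable wave" and "overwhelmed hospitals".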
What does it mean for a model to have "predictive power" for something that already happened? A prediction is meant to be about the future.
True in a world with full information; false in the real world, where information is limited and you can therefore make all kinds of predictions about the absent information.
Beautiful response to a good article.
Herbert Simon described this 'dividing line' as artificial versus natural sciences. His book 'The Sciences of the Artificial' explains it.
I don't know about you, but I don't trust pieces that are not conflicted in certain ways.
Good comment. Some human behavior can be statistically predicted - for example, that people will buy more of a good if its price declines. But much human behavior is iterative and strategic. People adjust reactions based on what others know, what they know that others know they know, what they know that others know they know they know, etc. These kinds of “games” can’t be predicted by anyone. In the end, the players will tend to herd towards some self-reinforcing outcome that can be hard to predict beforehand.
A simple example is stock prices. We have good models of what explains stock prices historically. However, those are useless for predicting future stock prices. Why? Stock prices are determined by the buy and sell orders of millions of humans. Once someone publishes a model that reliably predicts when and by how much a stock’s price will rise, and convinces others of its utility, millions will issue buy orders on the stock fated to rise, such that it is immediately bid up to the price that matches the prediction before anyone can profit from it, rendering the prediction model interesting but useless.
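A minimal sketch of that self-destruction, with entirely made-up numbers: once traders believe a published prediction, they bid the price to the predicted level immediately, and the forecast becomes "correct" but unprofitable.

```python
# Toy model of a self-defeating price forecast (all numbers hypothetical).
predicted_price = 120.0   # published model: "the stock will hit 120 next month"
price = 100.0
believers = 0.0           # fraction of traders convinced so far

for day in range(6):
    believers = min(1.0, believers + 0.3)            # word of the model spreads
    price += believers * (predicted_price - price)   # believers bid toward 120
    print(f"day {day}: price = {price:.2f}")

# Within a few days the price already sits at ~120, so trading on the
# forecast earns nothing: the prediction ends up true but useless.
```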
In the Afghan case, the first few counter-examples to the narrative that the Taliban could not easily overcome their military disadvantage probably led to a cascade: the strategic surrender of platoons of Afghan government troops who instantly took into account the fact that troops and cities before them were falling like dominoes. Yes, they could have colluded with each other not to surrender and to hold out, but they seemingly rationally decided to head for the door all together.
Some of the Monday morning quarterbacks are now saying that the US should at least have begun evacuations earlier. But the players would have been aware of it, and the same cascade could have happened months ago.
The hard sciences don't do great, either: https://www.nature.com/articles/s41564-020-0690-4
Eh, I don't think that's an example of this. They're correct that nearly all mutations in viruses are inconsequential (or even deleterious to the virus). They don't dispute that immune (or vaccine) escape occurs via mutation; rather, hardly any mutations confer those traits.
They said we shouldn't worry because mutations are common. They didn't consider the fact that a novel disease has not yet been optimized for transmission, so there are likely to be adaptive mutations that quickly spread and swamp the non-adaptive mutations (however much more often those arise). Greg Cochran predicted this in advance.
Great piece, thank you. A key factor at play here, I think, is what NN Taleb calls "skin in the game" - complete for the Taliban and zero for any decision-making American.
We created an entire class whose intellectual pursuits are fully disconnected from their practical results, both in academia and in the public sector. If we reinstate accountability, intelligence will quickly follow. But of course that won't happen politically until we reach another existential-threat situation.
This is a superb piece.
I would, though, draw a distinction between "fake" expertise and "malicious" expertise. The common "malicious" pattern is "advocates posing as analysts". In many domains - foreign policy, for example - almost 100% of "analysts" are in fact advocates.
While this dynamic leads to a pattern of those subject matter experts not making better recommendations than amateurs, it's not necessarily because the experts lack capability or useful superior knowledge.
For instance, DF @ the Atlantic is a truly brilliant guy. Very, very smart. But you would want to be careful following his foreign policy advice, because he will always pose as an analyst when he is in fact generally operating as an advocate. So in that sense I would not call his expertise "fake" - despite the quality of his guidance. He could provide better analysis than the vast majority of laymen, he just prefers not to.
The "advocate posing as analyst" dynamic has at this point poisoned a large share of US intellectual output. I think the problem of "malicious" expertise has somewhat different solutions to "fake" expertise.
I honestly can't think of anyone in academic political science who thought the US approach to Afghanistan was working (or could work). And I can think of a lot of people who were extremely critical of it.
It's true that a lot of policy types have some level of formal education in political science/IR, but you surely know that there's essentially zero role for academic political science in their decision making (which is itself an indictment of political science though a very different one than what you're giving).
> As a reader wrote to the Marginal Revolution blog, ethicists have some terrible takes because to get published, they need to be original.
I agree that novelty for novelty's sake contributes to many problems and failures, but I think we should keep in mind that this affects industry at least as much as academia. Every new engineer, designer, etc. at a company wants to make their mark. Regression to the mean is hard to avoid, so over time, market-leading products and services get worse as they accumulate more subpar "contributions".
> Here’s suicides in the US from 1981 to 2016.
This is an almost meaningless plot in most contexts. The per capita plot, while still not great, is a much better representation of reality.
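For anyone wondering what the per-capita correction does, a minimal sketch with rough, illustrative figures (not the post's actual data):

```python
# Raw counts vs. per-capita rates (figures approximate, for illustration).
data = {
    1981: (27_000, 229_000_000),   # (suicides, US population), rough values
    2016: (45_000, 323_000_000),
}
for year, (count, pop) in data.items():
    rate = count / pop * 100_000
    print(f"{year}: {count:,} total -> {rate:.1f} per 100k")
# The totals rise ~67%, but the per-100k rate rises far less,
# because the population grew ~40% over the same period.
```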
> Neither of these perspectives contributes all that much. You’ve made the conversation more diverse, but dumber.
The "intellectual" in "intellectual diversity" is doing a lot of the work. Obviously there are viewpoints that don't contribute anything and aren't worth including, but a monoculture has its own downsides. The real needle to thread here is maintaining a vibrant and productive discourse while fending off the postmodern idea that all viewpoints are equally valid. (Both political sides have now learned to employ postmodernism selectively when it benefits them. I hope someday history accurately portrays postmodernism as one of the greatest intellectual missteps of human civilization.)
This. This comment is not appreciated enough.
Enjoy your posts. I think this topic, however, deserves a tighter argument than you've given it here.
I also think you overlook experience and realistic/local knowledge in your "IQ + CBA is enough" point (putting aside the motivation discrepancy). The US could bring to bear more intelligent people capable of conducting CBA than the Taliban. The difference, however, is that the Taliban's CBA model (using this as a shorthand for applied epistemology) is more realistic because it is based on local knowledge and experience. The US's efforts, on the other hand, were designed by highly credentialed people using abstracted and stylized knowledge that didn't fit on-the-ground reality well.
In other words, even using your own model, the Taliban won and the US lost in large part because of the discrepancy in their types of expertise about Afghanistan. Instead of being an indictment of expertise, it's an indictment of a certain kind of expertise -- that which is produced by social science. It is more evidence for what should be nearly self-evident: knowledge of human behavior, communities, cultures and societies isn't amenable to abstraction through the scientific method.
Makes me want to re-read James C Scott.
I don't disagree with that. Obviously the Taliban had expertise regarding the language, terrain, culture, etc. that we lacked. My argument is against the Western version of expertise that is based on credentials, peer review, universities, etc., which is what most people mean by the term. It's not an argument about all possible forms of knowledge.
Agreed. While I appreciate the overall point - I don't see the situation in Afghanistan as one of "highly intelligent generalists defeat overly-specialized experts." If anything, the Taliban is far more specialized on the topic of "how to rule Afghanistan" than the Pentagon is. The problem was the hubris in believing that Ivy League schools and thinktanks and the like could possibly produce specific enough expertise to counter (or surpass) the expertise that Taliban types developed over the process of *actually fighting various wars in Afghanistan for the better part of the last several decades.* It's less "witch doctor beats Von Braun at rocketry" and more "witch doctor beats Von Braun at curing African disease Von Braun has never encountered" while the media convinces everyone that curing disease is absolutely dependent on a deep understanding of rocketry.
Had this been a battle between Pentagon-led and Taliban-led forces over long-term control of Nicaragua or some other "neutral" field, I suspect the Pentagon probably wins.
We did the same things in both Vietnam and Iraq (which will implode next). We do well when military experts fight military experts and everyone wears an identifying uniform. The military always fails when confronted by nonmilitary local forces. Skin in the game counts for a lot, but so does willingness to do anything it takes to win.
I think you’re hitting a lot of soft targets to bolster your argument. Wouldn’t a particle physicist be more of an expert on particle physics than the generally educated person? Wouldn’t an experienced thoracic surgeon have more expertise in the thorax than a general surgeon? Wouldn’t Magnus have greater expertise than a highly ranked player? I think there’s some division between hard and soft domains here - maybe hard and soft isn’t the right phraseology, but you get my drift. Meanwhile, I’ll let LeBron call the play next time down the court - not the average player off the bench.
Your graph on murders is per capita but your graph on suicides is not (and, I suggest, it should be).
The space race analogy is a bad one. It suggests each side is pursuing an equally difficult goal, when this is almost certainly not the case. For all we know, trying to build a democratic nation in Afghanistan (our goal) is 100 times harder than the Taliban's goal of controlling power. It doesn't matter how much expertise he has; I will beat Garry Kasparov every time if he plays without a queen.
Not necessarily. A) They both kinda had the same goals, in the sense of preserving their interests in the region. The US decided to do this via a VERY confused process of 'nation-building' where they constantly shifted the goalposts, imported their own institutions and completely failed to get the buy-in of the Afghan populace. If the US had decided to invest exclusively in wormhole technology to win the space race and then got beaten to the Moon, it doesn't really make sense to say that IF we had gotten the wormhole working, we could have gone to Alpha Centauri.
B) How badly rushed the evacuation was, and how quickly the Taliban took over, seemingly surprising US intelligence, is the problem. Sure, you'd win if you played Kasparov without a queen, but if you checkmate him in 5 moves, the audience can't help but wonder if Kasparov maybe had a few too many.
I think you are overestimating your chances at chess: winning with a handicap is a standard skill for chess masters. The general point might be valid, though.
What's unique to our time is the extreme liberal distaste for anything that potentially hints at the 'superiority' of the oppressor categories (Whites, males, straights, etc.) In this world, innate intelligence cannot be real, and behavior cannot have any consistent relationship with one's chromosomes or genetic inheritance.
It's a huge blind spot that's causing a monumental build-up of data - any and all data - which suggests that the null hypothesis of the blank slate, or standard social science model as it's called, is false. It gets so bad that people accuse AI of racism when it can identify a person's race from samples of X-rays.
Modern-day CRT amounts to a kind of last-ditch effort to explain why equal treatment doesn't generate equal outcomes without rejecting the null hypothesis. Since the obvious testable causes of inequality consistent with the null (such as school funding) have already been tested repeatedly, you're left with unfalsifiable ones that implicate anyone and everyone.
Conservatives themselves only improve the situation if and when they're willing to reject the null hypothesis and also willing and able to withstand the physical violence that will be visited upon them when they do. Many of them are so cowardly and compromised at this point that they might not even manage that.
If there was any group of people I'd trust to solve this problem, it wouldn't be conservatives but Islamists. Conservatives didn't survive the '60s purges. It's not enough to suggest the null hypothesis is wrong; you need people who, whatever their other faults, will straight up tell you that (for example) biological sex is real and men and women are different. You will also need people who are more terrifying to offend than the campus orgs that drove non-leftists from the universities to begin with.
____________
But even without the null hypothesis polluting social science, you still have the issue that in a marketplace of ideas, ideas which are 'powerful' (however wrong or ineffective) will always win out over ideas which are simple (and don't justify throwing tons of money at some problem, as is the case with nation-building and education).
"There’s a saying in academia that “instead of measuring what we value, we value what we can measure.”"
...And this leads to a lot of numbers that don't match up to reality. "Give me hard data!" sounds like a hard-headed thing to say, until you realize that the data being requested doesn't reflect any sort of reality. Then it becomes decidedly soft-headed. If you've ever been employed at a job with lots of perfunctory metrics, you know about this.
Overall, this is the best attack on technocracy I've read. It backs the "trust the experts" types into a corner. I don't know how much difference that will really make (they can always just ignore it...) but anyone who engages honestly with this will be forced to take skepticism of "expertise" more seriously.
Is the "total suicides" graph accounting for population growth? It doesn't look like it.
I’m not sure I’d consider the Afghanistan mission to have “failed”. The primary goal was to prevent Islamic terrorist attacks in the US. The problem was that there wasn’t a way to prevent terrorist control of Afghanistan without the US military presence. So we went in, accomplished the goal, and didn’t have a way to leave (not necessarily a bad trade-off going by 2002 priorities.) We took a crack at the low-probability strategy of building a functioning government, since we had to be there anyway (and direct US colonial rule wouldn’t be a good look.)
There’s still a reasonable cost-benefit analysis that concludes “continued US presence is better than a 10% higher chance of major terrorist attack” but that’s not a winning message, so everyone had to pretend like we could “win” in some way, or just hope the US public focuses on other stuff. It worked for 20 years, and an argument could be made that we made it through the highest terrorism danger period and China’s now a more important military focus than it was in 2001.
Yes, the withdrawal was done terribly, but there was another set of tough choices there, and I think that falls more to “bureaucratic squabbling and lack of accountability results in incompetence” than a PhD in military withdrawals getting his predictions wrong.
How do you determine whether the overthrow of Taliban rule in 2001 prevented terrorist attacks in the US, as opposed to numerous domestic anti-terrorism efforts? I'm not sure you can determine that. . . but I'm no expert LOL
The other tricky thing here is that a lot of the counter-insurgency techniques and experiments worked at the tactical level, but the overall strategic plan failed/didn't get sufficient support to succeed given the challenge. So yes it was a failure, but in part due to decisions and structural factors beyond that of the specific experts brought in (who often diagnosed the problems correctly).
What's with the cheap shot at Berenson? He hasn't been 100% accurate, but he's been far closer than anyone at The Atlantic...
Wrong.
It's not true that Expert Political Judgment showed that experts were no better than laymen. The laymen - undergrads at Berkeley, I think - did substantially worse than random, and on average the experts did no better than random. The caveat to this is that there was a group of experts - the foxes - who did much better than random. Additionally, time-series econometrics did exceptionally well, and that of course is an area of expertise. I think the modern literature on forecasting also casts doubt on the claim that experts are useless. There are groups of people who can forecast very well up to about 3-year time spans, and they do this in large part by reading large groups of experts as well as by using base-rate forecasting - and of course the base rate must be constructed.
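For reference, base-rate forecasting in miniature, with invented numbers: start from the historical frequency of similar events, then update on case-specific evidence.

```python
# Base-rate forecasting sketch: prior from the reference class,
# updated with a (hypothetical) likelihood ratio for this case.
base_rate = 0.15          # e.g., 15% of comparable regimes collapsed within a year
likelihood_ratio = 2.0    # case evidence makes collapse twice as likely as usual

prior_odds = base_rate / (1 - base_rate)
posterior_odds = prior_odds * likelihood_ratio
forecast = posterior_odds / (1 + posterior_odds)
print(f"forecast probability: {forecast:.2f}")   # ~0.26
```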
I'm also not convinced we should be surprised that political scientists are bad at forecasting. Their role isn't to come up with the best predictive model of the social world; it's to come up with the most accurate causal model of one small part of the social world. This is a very different task. Potentially the analogy would be disregarding physics because solid-state physicists can't predict when a car engine will break. A further point is that the work on expert political judgment was mostly done in the 80s, well before the causal inference revolution that's occurred in the social sciences - instrumental variables weren't even a thing!
Related observation:
Some comments note that the Afghan government, contractors and others had a strong incentive to "play act" success (i.e. lie). Certainly true. But all major endeavors face potential fraud by participants, often with catastrophic consequences. One of the responsibilities of a civil engineer is to supervise a project to detect materials that don't meet specs, construction techniques that deviate from plans, and so forth. A well-specified social policy needs to be monitored just as closely. A successful system will have the whistle-blowing built in and adequately funded.