Honestly, I have been waiting for this article all my life....as a doctor, 30 years of being shut down every time I even hint at the really obvious (to me) fact that our jobs benefit us a million times more than they could ever benefit almost any patient....we are, though, on the edge of patients choosing the free AI with its 98% accuracy over the human doctor with our 74% accuracy. Additionally, no chance of being sexually abused, no power dynamic. Great point about the make-work involved in infantilising everyone and setting up a structure where some humans arbitrarily decide what treatment other humans can have, compared to a simpler automated system with some safeguards for the truly ridiculous etc. My experience is that most patients are perfectly capable of interacting with a medical AI and choosing treatments based on that. Also, medical litigation collapses as you can't sue an AI, and with that money in the system instead of lawyers' pockets we can all have all the MRIs and genetic testing we might want. Happy to be out of a job in a world where honesty rules, bring it on....the part of me that sees all this stuff feels very lonely, so thanks for this article....
Yeah, I do have to admit that, for all the hard work to get there, doctors do make a lot of money (and the difficulty of making them is at least a huge portion of why).
You *did* hoard half your earnings like my radiologist buddy who's always raving about his oncoming early retirement via obsolescence, right?
I am so happy to be near the end of my career. Where should I put my millions of dollars of investments? It’s about half in real estate and half in stocks with a smattering of bonds. AI risks collapsing all of that value, with the possible exception of the stock market. As wealth becomes more and more concentrated even that fiction of sharing the profits will probably go away. Look what Zuckerberg and Musk have done to the old ideas of corporate governance. And that was in just 13 years.
Who I really worry about is my children who are just entering adulthood and all the other young people out there who are at great risk of becoming irrelevant.
I think that make-work or UBI (or "welfare," as you call it) are two sides of the same lousy coin. Neither really honors the wealth of literature on positive psychology, which suggests that the sense of dedicating oneself to something profoundly useful is of fundamental importance to people’s lives, neither promotes "eudaimonia" or "flourishing," and both could lead to profound ennui, as Carl argued, potentially making society worse off on balance. In general, I don't really see a scenario in which maximal automation doesn't lead to some form of cultural, economic, and/or political disempowerment with an accompanying psychological impact.
That's very true. But most will forego children because they are a perceived "inconvenience" to their pursuit of immediate gratification.
They won't be fucking to have children. They'll do it for the sake of pleasure and pleasure alone. And that is meaningless (well, almost completely so).
For those of us who chose meaningful sex, resulting in offspring and the continuation of our life's narrative, you are absolutely on point.
Don't worry. The first law of behavioural genetics says that all behavioral traits are at least partly genetic and heritable. So is the desire to have children - not strongly heritable but somewhat. Given enough time this will cure the childlessness problem. The people who want children will have most of the children in their generation; those in turn will have inherited the desire to have children and their generation will have a bit more children than the previous one, etc. If a stable utopia lasts for long enough we'll have a world where everybody wants children.
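This selection argument can be sketched as a toy model (my own illustration with made-up parameters, not data from the comment). It also surfaces a caveat: with only partial heritability, the share of people who want children rises but can settle at an interior equilibrium rather than reaching 100%.

```python
def share_wanting_children(p0=0.4, fert_want=2.0, fert_not=1.0,
                           h_want=0.7, h_not=0.3, generations=20):
    """Fraction of a generation wanting children, after selection on a
    partly heritable trait.

    h_want / h_not: probability a child wants children, given that the
    parent did / did not (a crude stand-in for partial heritability).
    All parameter values are hypothetical.
    """
    p = p0
    for _ in range(generations):
        kids_want = p * fert_want        # offspring of those who wanted kids
        kids_not = (1 - p) * fert_not    # offspring of those who did not
        # share of the next generation that inherits the desire
        p = (kids_want * h_want + kids_not * h_not) / (kids_want + kids_not)
    return p
```

With these made-up numbers the share climbs from 40% toward roughly 60% and plateaus there; a world where literally everybody wants children would require stronger transmission of the trait than "somewhat heritable."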
If people have UBI then they can spend some of their free time dedicating themselves to some purpose that is profoundly useful to other humans, but not paid for by the market for some reason or another. If their basic needs are already being subsidized they will have some kind of comparative advantage in something over AI.
I keep hearing about how humans are all depressed nowadays because social and civic organizations are on the decline. People could spend some time setting up some of those. Clubs, amateur sports teams, book clubs, etc. It would be easier to do without a job, and not profitable enough that a company would pay AIs to do it.
It’s very possible that in a world where humans no longer need to work towards any significant goal my local beer league softball team will seem of profound spiritual importance.
It can be, but why if there is no profit in it? It presumably will cost some money to set up an AI to do something. If a human is living off UBI, then they are already "bought and paid for." It will be cheaper to just let them do it. If they want maybe they can use some of the UBI money to hire an AI assistant occasionally. Otherwise, might as well let them do it.
It's futile to regulate AI to save the jobs, but nevertheless this piece is very much cope. White collar jobs will basically be gone in 5 years; there will simply be no need to use a human for any of them because humans will not have an economic advantage. At that point an accountant or software engineer can theoretically become a nurse or a nanny, but they can't practice their original profession anymore. There will still be made-up white collar jobs for sure, but not real ones providing economic value.
Regarding robotics, once you have an unlimited amount of robotics engineers you're gonna develop robots that replace manual labor. Then people remain very much like ants or giraffes or whatever and we must attempt to design the AI so that it chooses to keep us alive in the world.
The historical parallels in the article are weak. In the past, we replaced manual labor with mental labor and that's the majority of the economy. We can go back to manual labor but that will buy us 10 years tops. At that point the only relevant jobs are ones where it's especially valuable to be a human, which is not much.
Totally agree with this comment. This piece is missing large corporate jobs, and how many white collar roles are essentially tracking, reporting, logistics oversight, supplier management, etc. I spent a good chunk of my career on a physical commodities trade floor, and the number of people needed to operationalize and report on deals could easily shrink by 80% or more. I’m not saying policy should protect these jobs, and I’m not saying new jobs won’t be created elsewhere, but you better believe every major corporation in the world is, as we speak, finding ways to implement AI to cut costs, aka shrink their workforce - and consulting and services firms are going to help, and then duplicate at peer companies - and this is going to be a major trend for the next decade or more.
Would you be willing to bet me that 5 years from now the number of white collar jobs will have declined by more than 50%? I'd be willing to bet $1000 against you.
You’re not dealing with rational individuals here. It’s like the singularity subreddit. They haven’t replaced a single engineer with AI, just outsourcing them and allegedly using the savings to fund their AI projects.
5 years seems pretty unlikely. At that point LLM development will probably have plateaued. AI will, at that point, probably still require huge amounts of oversight from white collar workers to make sure it doesn't hallucinate nonsense. Perhaps we will have an economy 100x bigger, but with the same amount of human employees, as each human goes over the work of 100 AIs to check for hallucinations.
Humans will also probably still be needed to generate and/or curate training data. We are running low on training data and from what I understand, "synthetic data" needs human curation to be useful.
There's also the issue of comparative advantage. If AI is 100x better at humans at White Collar Job Categories A, C, and F, but only 10x better at White Collar Job categories B, D, and E, then it might make sense to continue employing humans at B, D, and E so that AI can devote its effort to A, C, and F.
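The comparative-advantage point can be made concrete with the multipliers from the comment (100x at A, C, F; 10x at B, D, E), plus some assumptions of my own: each category needs one unit of work, a human produces 1 per unit, and AI capacity is limited to three categories.

```python
# Hypothetical productivity multipliers for AI over humans, per job category.
multipliers = {"A": 100, "B": 10, "C": 100, "D": 10, "E": 10, "F": 100}
AI_CAPACITY = 3  # assumed: AI can only cover three categories' worth of work

def total_output(ai_jobs):
    """Total output when AI covers `ai_jobs` and humans cover the rest."""
    assert len(ai_jobs) <= AI_CAPACITY
    return sum(multipliers[j] if j in ai_jobs else 1 for j in multipliers)

high_edge = total_output({"A", "C", "F"})  # AI on its biggest-edge jobs: 303
low_edge = total_output({"B", "D", "E"})   # AI on its smaller-edge jobs: 33
```

Because AI capacity is scarce, pointing it at the 10x categories wastes its 100x edge elsewhere; humans keep B, D, and E even though AI is absolutely better at them.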
It seems a pretty big extrapolation to think we will have AI robotics engineers fairly soon. The reason white collar jobs are in danger is that they involve language manipulation, which LLMs are good at. Doing actual engineering will probably take longer.
Half the commenters on Substack are complete retards who probably believe everything Elon Musk says. These morons are predicting the plot of “Terminator” by 2030. It should have happened 20 years ago.
This is a really utopian view of the future, unmoored from a careful reading of capitalism or human nature. The people who run large corporations don’t care about their employees, except insofar as they are useful to them. They will cut as many as they can in the name of efficiency, in fact they have a duty to their shareholders to do so.
The oligarchs who run society don’t care about the peasants. You have stated in the past that what really matters is government power, not economic power, but as recent events have shown, you can buy the presidency for $33B, which is chump change to the wealthy. The top 20 wealthiest people in America have over $3T in net worth. They can - and mostly have already - bought all of Congress. Do you think they *want* to pay taxes? Of course not. People at the top are driven by ambition and greed, not humanitarian impulses.
In the past, the working class were able to negotiate by withholding their labor, or by using violence. The first will soon no longer be possible, as jobs are 95% eliminated by AI. So the vast majority will only survive on whatever scraps the Musks and Bezoses of the world decide to leave to them. You can see how they treat their own employees to imagine what they will do to people who don’t even help them become rich, but instead are just parasites to them.
The final negotiating chip the working class has is to use violence. Threat of another French Revolution has moderated things at least somewhat. But Silicon Valley is rapidly building AI war technology that can be run by a few trusted lieutenants and should be able to easily keep the mobs in check.
So, no checks on the power of the elite at all. All the wealth and power goes to the 5% (generously) that run things and the rest are serfs. A very small percentage make all the decisions. This is our most likely future.
I largely agree that the primary AI-related worry is doom. I disagree that the right move is to give up and hope the developers solve alignment. They very clearly won't. We do not live in that world. Our reality is far less convenient. Aligning a superintelligent machine with human goals is a wicked problem that no one is remotely close to solving. The people who have been working on alignment for longest (MIRI's been at it for decades, not years) have said as much, time and again, and deliberately chose to pivot their entire organization towards communicating that fact. Even OpenAI said so themselves two years ago! And that was before most of their alignment researchers left.
"There's no political will to shut down AI" is a self-fulfilling prophecy, and not a good reason to give up on creating that will. Plenty of politicians say, in private, that they're also extremely worried but that they don't want to look silly or alarmist. The atmosphere needs to change, that much is true.
You're one of the people who might be able to help it change. If, as you say, you take seriously the chance of an AI-fueled apocalypse, then perhaps it's worth engaging further with arguments about the likelihood.
What assumptions are you making about alignment? How would your opinion change if you pegged the chance of catastrophic misalignment at 50%? 80%? 95%? What would it take to update you there? There's only about 7 bits of evidence between 14% and 95%.
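The "7 bits" figure can be sanity-checked: bits of evidence correspond to differences in log-odds. A quick sketch of the standard log-odds arithmetic (my own illustration, not from the comment):

```python
import math

def log_odds_bits(p):
    """Log-odds of probability p, measured in bits (log base 2)."""
    return math.log2(p / (1 - p))

# Evidence needed to move a credence from 14% to 95%:
bits = log_odds_bits(0.95) - log_odds_bits(0.14)  # roughly 6.9 bits
```

So the gap between a 14% and a 95% credence really is about 7 doublings of the odds.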
I think it's amazing that someone with children accepts a 14% risk of death/doom. There is nothing else where we accept remotely that level of risk. Any other industry posing even a tenth of that risk would be shut down or sued/fined into oblivion.
That's exactly the problem for the nonbelievers. But they can still have kids (family even now is better for bringing meaning to your life than jobs are), maybe contemplate in some secular way, dunno.
Most Americans, though not Ultra-Orthodox Jews, don't complain that their lives lack meaning either.
However, it still doesn't change the fact that their lives lack meaning, for which the consequences will inevitably be felt after they've gained enough life experience and wisdom to become aware of the fact (but maybe too late to do anything meaningful about it).
I honestly have very mixed feelings about this article. I agree with most of what you've said about AI replacing human jobs not being a threat. At the same time, I am inclined to go much further and say that the idea that LLMs can do most cognitive tasks as well as a human being is total bunk, believed only by gullible people who want to believe in it. My own view is that Yann LeCun (the deep learning researcher and 2018 Turing Award laureate) is right when he says LLMs are less intelligent than a housecat.
The gist of it is that AI has very powerful pattern-matching abilities that allow it to (for example) ace the LSAT or the tests they give kids at the Math Olympiad... but only because the internet is full of very similar practice questions on which to train it. If a cognitive test doesn't closely parallel something in its training data, then it fails, usually in a ridiculous way, even if it's something that would be easy for a human - such as listing the Super Bowls where the winning score was a prime number (very easy provided you have a list of Super Bowl outcomes and a list of the first 20 or so primes, both of which AI "knows") or spelling the names of the US state capitals backwards and putting them in alphabetical order.
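The prime-winning-score task really is trivial as a program. A sketch (the score list below is a small hand-picked subset I believe to be accurate, but I have not verified it against an official record book):

```python
def is_prime(n):
    """Trial-division primality test; fine for small football scores."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# A few winning scores (subset only; believed accurate but unverified):
winning_scores = {
    "Super Bowl I": 35,      # Packers 35-10
    "Super Bowl III": 16,    # Jets 16-7
    "Super Bowl XXV": 20,    # Giants 20-19
    "Super Bowl XLII": 17,   # Giants 17-14
}

prime_winners = [sb for sb, score in winning_scores.items()
                 if is_prime(score)]
```

On this subset only Super Bowl XLII (17 points) qualifies; the point stands that a lookup-plus-filter like this is mechanical for a human or a ten-line script.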
You see similarly silly things with image generation. Ask an AI to make a picture with a certain number of people and give it details about how to draw them - what they're wearing, what they're holding, hair color, etc... then it will make a picture in which the traits you've asked for - brown hair, overalls, etc. - are assigned to the figures at random rather than the specific people that are supposed to have them.
And don't even get me started on how every request to make a picture with a gibbous moon or a first or third quarter moon gets interpreted as a full moon instead... because full moons are MUCH more common in art, and all that AI does is pattern-matching, not thinking.
Hence my belief that AI risk is nil and that the "AI researchers" who say otherwise are themselves engaging in make-work to make their jobs seem important. After all, ordinary people would lose interest in AI "alignment" and all that claptrap if they just played around with AI with the INTENTION of spotting its weaknesses, rather than blindly trusting the "experts" when they say stuff like "Oh no! It passed such and such psychology exam! It is smarter than 90% of college students!"
Of course the elites will be fine, and better off than ever, as always, with whatever bullshit job they can have, but there's no mass technological unemployment scenario that would not lead to a crisis. Millions of people work white collar or driving jobs that will inevitably be completely automated, and I don't think anybody is going to give them bullshit jobs to make up for it. To think UBI or whatever mass welfare program will keep everybody out of a job well above the poverty line is naive and ignorant of how power works, because if the productive value of the masses is zero, no government or plutocrat has any reason to give them shit (might as well even kill them), let alone vast amounts of wealth.
But I agree the cat is already out of the bag and AI development cannot be stopped, so I guess this is inevitable then. But don't pretend there's "nothing" to worry about, unless of course you are speaking for yourself and the country's elites.
It's the speed of change, more than the change itself, that will be most likely to create mayhem. Disempowering millions of white collar workers is not a non-issue if there are no viable alternatives and they are unable to generate wealth while only a few hoover up all the money.
Disempowering millions of people will come with costs, even if we do find a solution eventually.
I guess I am not worried but for the opposite reason - I don't think the AI is that good, or improving that fast. Most jobs are not fake jobs - your psychiatrist is hopefully doing more than dispensing the same pill for decades.
I am engaged with AI for multiple hours most days and, the more I work with it, the more I feel frustration as opposed to fear. This is especially true for creative writing. It's good for research and data analysis, it speeds up coding by a few orders of magnitude, but you need to check everything. It will go from analyzing data to inventing data between two prompts for no obvious reason.
My biggest concern at the moment is that a lot of people will be too lazy to check AI work because it looks good/plausible even when it's not. There is a tremendous temptation of productivity at the cost of accuracy. I also expect AI to help truly creative people realize more of their ideas as it handles more of the boring stuff.
So, medium-term I think we will see a new class of AI winners, as opposed to AI being the winner. Scammers, ruthless productivity chasers and genuine creatives seem like they would benefit the most.
Btw, did you use AI to create those charts and the narrative around it? Just curious because that part was really boring.
It's quite possible for AI to be controlled by humans other than the CCP or Taliban and for humanity to have a problem. (And, in China, it probably will be controlled by the CCP.)
The Industrial Revolution was human-controlled and made lots of people much worse off until labor movements forced the capitalists who owned the machines to give up more of the proceeds. In the USA, we had the Gilded Age; other countries have similar eras. It's similarly quite possible that lots of jobs get automated away, creating massive unemployment and instability enough to impoverish or kill a significant portion of the population, possibly even greater than the portion who benefit from the increased efficiency. Look at how much damage moving a few tens of million jobs to China did to the central USA--we got an opioid epidemic and a right-wing populist uprising. AI's likely to be at least as bad.
It also spawned communist and socialist movements, usually closely intertwined with the above labor movements, that caused huge amounts of death and suffering; perhaps we'll see what the collateral damage of the oncoming IRL version of Dune's Butlerian Jihad against machines is?
This is my biggest fear. You don't put the entire professional class out of work without a huge socialist revolution. But even worse will be the people who start terrorist activities to essentially take out technology. Rather easy to imagine them going way too far, if enough people are out of work, and just start blowing up power stations and taking us back to the stone age. I really do not want to live in a pre-electricity era, so to me this would be the worst possible outcome.
Besides all this, the upper middle class who will be out of work are also the ones that spend money on everything beyond the basics. Once there are no office workers, there are no more offices, erasing the entire value of every downtown and all office buildings. Every product and service sold to those people no longer has customers who can pay, so they crash too. It would be a gigantic economic catastrophe, and there's no good outcome there that isn't at the very least very painful in the short to medium term.
You could say the same about every earlier technological revolution. For example, there have been numerous diatribes arguing that farming and writing were developed primarily to enslave the majority of humans for the benefit of an elite class.
Through a certain lens, those narratives are correct. Increasing levels of technological and cultural sophistication generally lead to greater inequality—both material and social. However, these narratives often overstate the level of agency and planning behind these advances. Moreover, the individuals and social groups that ultimately gain control over these institutions are rarely the same ones who discovered or invented them. And in the end, absolute living standards tend to rise with these advancements.
Nick Land is one of the most prominent recent intellectuals to explore patterns of technological disempowerment. I’ve personally never been able to get through his verbose and meandering writing, but credible review articles suggest this is his central thesis. His Accelerationism theory predicts that humans will become increasingly disempowered, eventually being replaced by the technocapitalist system—and that there’s nothing anyone can do to stop it. As an aside, while his ideas seem Marxist in background, he is also regarded as a leading Neoreactionary intellectual.
Regardless, I think we should focus less on who, if anyone, is actually in control of this process and instead concentrate on how we can react and adapt for our own well-being.
There is always a line that should not be crossed. Machines to a certain extent, reasonable.. but most things these days seem to have evolved into the inhumane, unsustainable, and ridiculous, defeating the original purpose of making life better.. just look at automated phone systems.. they have become a bane and are used against us.
Balance and common sense, like cognitive thinking, are missing. "Within reason" is gone.
People seem stupid (still behaving like animals, and worse) and we produce artificial intelligence.. lol, as if cell phones were not warning enough ...
This time, it's global. Maybe humankind will learn its lesson... finally.
We have been here.
All successful empires and such always ended up in rubble.. the old Romans, Egyptians, Mayans, Incans, gone insane with greed and the excesses of the lower temptations.
Positions of power always attract the wrong characters. A healthy human being simply wants to enjoy its existence. Not rule others.
We seem to want to grow and grow.. bigger and bigger. Forgetting our limits. Nature has limits, this planet has limits, so do we.
Some think they can break the laws of nature, make up human laws, rules and regulations.. control and greed. Again and again, history is full of it.. exploitation of man and land. Oppression, tyranny and destruction.. again and again.. delusions of grandeur.
Maybe all we need to do is find our place, stay reasonable and sustainable.
Stay local and sane instead of: Go big or go home.
The principle of the noble: self-restraint and self-governance. Instead of loud, proud, large and in charge..
Natural life instead of human hubris and excesses.
We have forgotten that nature is our only source and we are part of nature.
We continue to produce weapons of mass destruction, bio weapons, dangerous chemicals and pharmaceuticals.. that alone shows our insanity as a species.
Self destruction.
Now we move on to Mars.. wherever we go, there we'll be.
And the newest genius idea is artificial intelligence.. lol
Automation, digital control .. replacing ourselves???
15 Minute Concentration Camps ..
The Earth was given to us. Free of charge.
Why do we follow this insanity?
Why does humankind fall for this trap of leadership again and again?
Self-discipline and self-governance.
We simply cannot subsidize responsibility.
Too comfortable?
We used to feed ourselves, clothe ourselves.. giving up responsibility and independence.. selling out.. the industrial and military complex took over. Brilliant deception, a long time in the making.
Tax treadmill on one side, lazy chair on the other.. kept in check by fear and propaganda.
We have repeatedly surpassed our limits over the last few thousand years. Again, I’d cite agriculture and writing. We also have religions and other cultural values that have developed over millennia. The Industrial Revolution—actually a series of machine-power revolutions, from steam to internal combustion and electricity—was particularly significant because it drastically increased our productive capacity.
Our information revolutions began with the earliest developments of written language, followed by a massive expansion in distribution due to the printing press with movable type. That process was later industrialized with steam-powered printing and distribution via trains and trucks. The early analog electrical technologies of radio and then broadcast TV were even more impactful. And we’re still working through the digital revolution, with the internet, social media, streaming, and now generative AI.
The point is that none of this is under anyone’s control. There have been numerous attempts to regulate technology after its deployment. Some, like the Catholic Church’s control of the printing press, had limited success. The Ottomans exercised even greater control over religious texts for an extended period. Yet no one has ever consistently maintained control, let alone rolled back technological progress.
It’s understandable that some prefer a simple, technology-minimal life. That too has happened repeatedly throughout history, and we’re all familiar with contemporary Amish communities that strictly limit their technology use. Yet the broad arc of society always moves toward greater technological adoption. Any mass culture that resists this trend is almost guaranteeing its continuous disempowerment—until it becomes entirely irrelevant.
One major problem with even the best regulations is that they trade growth, development, and efficiency for safety. Hopefully with AI-powered software we will be able to greatly minimise the costs while retaining the full benefits. We might end up in a situation where someone goes through a 4-year degree to gain credentials just to sign off on an AI-generated piece of paper.
Fully agree with the point about the human preference for humans. I think this is largely because of aesthetics and status. We have had machine-made, off-the-shelf fast food for decades, yet restaurants and chefs are still in business. If all that mattered was efficiency, we could have eliminated cooking to a great extent. It's largely due to aesthetic preferences and the fact that eating food cooked by humans signals high status, and humans aren't going to stop playing status games any time soon.
I don't think white collar work is fully going to go away, though. A large part of white collar work is communication, dealing with inputs on the ground, and maintaining a non-generalisable knowledge base. A lot of context and data is generated on the fly in white collar work that isn't really generalisable, and I don't think AI will be able to handle that better than humans. At least for the near future, I expect a human + AI pairing to be the norm.
We should also see lowering of costs of entry for businesses into many markets and hopefully increased competition and more differentiated products.
Also, it's not certain to me that AI will just keep on improving continuously. It is bound to hit some wall due to a lack of data to train on, or the methods used to train models not being able to generate better models, thus requiring another major breakthrough in science to continue the growth. And breakthroughs in science are unpredictable. Most of the improvements after GPT-3.5 have been due to using more data or tweaks to the existing ways of training or using these systems, not due to radical improvements in the state of the art (AFAIK). This should hit a wall sooner rather than later.
Re: Adderall, the government seems to have taken a vested interest in disallowing drugs to those who want 1) to get high or 2) to enhance their performance above "normal" human baselines.
Richard, your Luddite-invoking argument is at least open to question, I’d say, in light of sociologist Eric Dahlin’s 2024 study of U.S. job loss to AI ( https://bit.ly/DahliE-2024-6 ). Dahlin finds preliminary evidence of AI exceptionalism compared with historical waves of technological displacement and re-employment.
Your argument of rising welfare and falling poverty from economic growth may, I think, come undone with due attention to material throughputs via fossil fuels and other such extraction, conversion to human usage, waste sinks, and exhaustion ratios. Ecological analysis by, for example, Seibert and Rees, published in the journal Energies (2021, https://bit.ly/SeiRee-2021 ), predicts that future trends in material throughputs for human consumption will severely compress living standards and human population limits, regardless of what humans may wish.
GenAI is also a huge accelerator for material resource overshoots, summarized for example in this Conversation essay about water from 2024 ( https://bit.ly/GuBovV-2024-3_21 ) and this Financial Times report about coal (2024, https://on.ft.com/4chyMNI ). I understand Deepseek is reporting far more efficient startup yields than have the U.S.-based GenAI products; but we have yet to see independent verification and scholarship on that. Overall I am skeptical based on historical trends of energy and technological changes when analyzed transnationally (I could cite more but see for example above, Seibert and Rees 2021).
About welfare policy prospects and universal basic income (UBI) as a solution to AI displacement of human jobs, I’m persuaded by comparative political scholar Bo Rothstein, who specializes in the Nordic countries’ welfare regimes, that UBI is infeasible ( https://bit.ly/RothB-2017 ). Considerable evidence from the rich democracies suggests that those who pay into welfare systems expect most able-bodied beneficiaries of taxes and transfers to work, contribute to the common weal, and pull their own weight unless personally unable for specific reasons. That’s not a UBI logic but rather a welfare policy logic of programs, state administration by civil servants managed for even-handed implementation, social workers with case responsibilities, etc.
Finally, Richard, your criticism of welfare statistics uses a source I would not. You allude to a “trick the left has pulled off” to muddle the anti-poverty effects of taxes and transfers. Rather, in my view, you simply cite a weak data source there. I would point instead to sociological studies that go cross-national, since such data is to be desired for your kind of super-macro, comparative-historical argument in the above essay.
Typically in recent years, sociologists compare economic distributions *after* taxes and transfers. Some of those studies also compute counterfactually what the pre-distribution *would have been* had the taxes and transfers not happened. These studies are excerpted and summarized, for example, in one of the leading textbooks for introductory sociology, by Manza (NYU Sociology faculty) et al., 2023 ( https://bit.ly/ManzaJ-2023 ), section 11.4. Also relevant is a new mobility study summarized in the Atlantic by one of its sociologist authors (Parolin 2025, https://bit.ly/ParolZ-2025 ). And sociologist David Brady, a main contributor to the cross-national measurement of poverty rates and welfare policy effects, summarizes and criticizes the U.S. government’s history of welfare metric methods in the first eight paragraphs of his 2023 paper ( https://bit.ly/BradyD-2023_SciAdv ).
For a while there will be some people employed in jobs that most humans would rather have done by another human. For now that is certain parts of the law, certain parts of medicine and psychiatry and some live entertainment. For a while at least, people will want human lawyers, doctors, musicians and hookers. All those jobs are at risk in a generation or less as the robotic version surpasses the human one. There will always be a very niche market for the extremely wealthy.
I think make-work and UBI (or "welfare," as you call it) are two sides of the same lousy coin. Neither honors the wealth of literature on positive psychology, which suggests that dedicating oneself to something profoundly useful is of fundamental importance to people's lives; neither promotes "eudaimonia" or "flourishing"; and both could lead to profound ennui, as Carl argued, potentially making society worse off on balance. In general, I don't really see a scenario in which maximal automation doesn't lead to some form of cultural, economic, and/or political disempowerment with an accompanying psychological impact.
Being economically useful isn't the only way to achieve meaning or eudaimonia; there are others: contemplation, procreation, etc.
Seriously? We're all going to be satisfied with spending our days thinking and fucking?
People are already delegating contemplation to AI. And soon, you won't be fucking another human, but a robot instead.
Neither of those are meaningful.
Interesting thought, though.
More likely we’ll be even more preening and self-obsessed.
Exactly. We naturally desire ever more convenience and safety. "The Algorithm" exploits that to a good degree.
AI will exploit it until we are no longer relevant to each other.
Actual procreation can bring meaning: https://open.substack.com/pub/thelivingfossils/p/the-existential-relief-of-having
That's very true. But most will forego children because they are a perceived "inconvenience" to their pursuit of immediate gratification.
They won't be fucking to have children. They'll do it for the sake of pleasure and pleasure alone. And that is meaningless (well, almost completely so).
For those of us who chose meaningful sex, resulting in offspring and the continuation of our life's narrative, you are absolutely on point.
Don't worry. The first law of behavioral genetics says that all behavioral traits are at least partly genetic and heritable. So is the desire to have children - not strongly heritable, but somewhat. Given enough time this will cure the childlessness problem. The people who want children will have most of the children in their generation; those children in turn will have inherited the desire to have children, and their generation will have slightly more children than the previous one, and so on. If a stable utopia lasts long enough, we'll have a world where everybody wants children.
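The selection argument above can be sketched as a toy model. All parameters here (fertility rates, the heritability proxy) are illustrative assumptions, not empirical estimates:

```python
# Toy model of selection on a heritable preference for having children.
# All parameters are illustrative assumptions, not empirical estimates.

def next_gen_fraction(p, fert_want=2.2, fert_not=1.2, h=0.3):
    """Fraction of the next generation who want children.

    p         -- current fraction who want children
    fert_want -- average children per person who wants them (assumed)
    fert_not  -- average children per person who doesn't (assumed)
    h         -- probability a child inherits the parent's preference
                 directly; otherwise the child takes the population
                 baseline (a crude heritability proxy)
    """
    born_to_wanters = p * fert_want
    born_to_others = (1 - p) * fert_not
    total = born_to_wanters + born_to_others
    # A child matches its parent's preference with probability h,
    # otherwise adopts the current population baseline p.
    wanters_next = (born_to_wanters * (h + (1 - h) * p)
                    + born_to_others * (1 - h) * p)
    return wanters_next / total

p = 0.5
for generation in range(20):
    p = next_gen_fraction(p)
print(round(p, 3))  # fraction drifts upward generation after generation
```

Even with modest heritability and a modest fertility gap, the fraction who want children rises every generation, which is the long-run dynamic the comment describes.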
If people have UBI then they can spend some of their free time dedicating themselves to some purpose that is profoundly useful to other humans, but not paid for by the market for some reason or another. If their basic needs are already being subsidized they will have some kind of comparative advantage in something over AI.
For instance?
I keep hearing about how humans are all depressed nowadays because social and civic organizations are on the decline. People could spend some time setting up some of those. Clubs, amateur sports teams, book clubs, etc. It would be easier to do without a job, and not profitable enough that a company would pay AIs to do it.
Distractions, not purpose.
That really depends on your perspective.
It’s very possible that in a world where humans no longer need to work towards any significant goal my local beer league softball team will seem of profound spiritual importance.
"If people have UBI then they can spend some of their free time dedicating themselves to some purpose that is profoundly useful to other humans..."
This, too, can be automated in the extreme case.
It can be, but why if there is no profit in it? It presumably will cost some money to set up an AI to do something. If a human is living off UBI, then they are already "bought and paid for." It will be cheaper to just let them do it. If they want maybe they can use some of the UBI money to hire an AI assistant occasionally. Otherwise, might as well let them do it.
It's futile to regulate AI to save jobs, but this piece is nevertheless very much cope. White collar jobs will basically be gone in 5 years; there will simply be no need to use a human for any of them because humans will not have an economic advantage. At that point an accountant or software engineer can theoretically become a nurse or a nanny, but they can't practice their original profession anymore. There will still be made-up white collar jobs for sure, but not real ones providing economic value.
Regarding robotics, once you have an unlimited amount of robotics engineers you're gonna develop robots that replace manual labor. Then people remain very much like ants or giraffes or whatever and we must attempt to design the AI so that it chooses to keep us alive in the world.
The historical parallels in the article are weak. In the past, we replaced manual labor with mental labor and that's the majority of the economy. We can go back to manual labor but that will buy us 10 years tops. At that point the only relevant jobs are ones where it's especially valuable to be a human, which is not much.
Totally agree with this comment. This piece is missing large corporate jobs, and how many white collar roles are essentially tracking, reporting, logistics oversight, supplier management, etc. I spent a good chunk of my career on a physical commodities trade floor, and the number of people needed to operationalize and report on deals could easily shrink by 80% or more. I'm not saying policy should protect these jobs, and I'm not saying new jobs won't be created elsewhere, but you better believe every major corporation in the world is as we speak finding ways to implement AI to cut costs aka shrink their workforce - and consulting and services firms are going to help, and then duplicate at peer companies - and this is going to be a major trend for the next decade or more.
This has been my impression too. White collar jobs get destroyed by AI employed by large corporations, already happening
Would you be willing to bet me that 5 years from now the number of white collar jobs will have declined by more than 50%? I'd be willing to bet $1000 against you.
You’re not dealing with rational individuals here. It’s like the singularity subreddit. They haven’t replaced a single engineer with AI, just outsourcing them and allegedly using the savings to fund their AI projects.
Are you r$t@rded? All white collar jobs gone in 5 years? You probably think that we’re going to colonize Mars by then too.
5 years seems pretty unlikely. At that point LLM development will probably have plateaued. AI will, at that point, probably still require huge amounts of oversight from white collar workers to make sure it doesn't hallucinate nonsense. Perhaps we will have an economy 100x bigger, but with the same amount of human employees, as each human goes over the work of 100 AIs to check for hallucinations.
Humans will also probably still be needed to generate and/or curate training data. We are running low on training data and from what I understand, "synthetic data" needs human curation to be useful.
There's also the issue of comparative advantage. If AI is 100x better at humans at White Collar Job Categories A, C, and F, but only 10x better at White Collar Job categories B, D, and E, then it might make sense to continue employing humans at B, D, and E so that AI can devote its effort to A, C, and F.
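A toy allocation using the hypothetical multipliers above shows why scarce AI capacity goes to the highest-advantage tasks. The numbers and the one-worker-per-task setup are assumptions for illustration only:

```python
# Toy illustration of comparative advantage with limited AI capacity.
# Multipliers are the comment's hypothetical numbers: AI is 100x a human
# at tasks A, C, F and 10x at tasks B, D, E. Suppose only 3 "AI slots"
# exist and each task needs exactly one worker (AI or human).

speedup = {"A": 100, "C": 100, "F": 100, "B": 10, "D": 10, "E": 10}
AI_SLOTS = 3

# Assign the scarce AI capacity where its relative advantage is greatest.
by_advantage = sorted(speedup, key=speedup.get, reverse=True)
ai_tasks = set(by_advantage[:AI_SLOTS])

def total_output(ai_tasks):
    # Humans produce 1 unit per task; an AI produces speedup[t] units.
    return sum(speedup[t] if t in ai_tasks else 1 for t in speedup)

best = total_output(ai_tasks)            # AI on A, C, F; humans on B, D, E
worse = total_output({"B", "D", "E"})    # AI wasted on low-advantage tasks
print(best, worse)  # 303 vs 33: humans stay employed at B, D, E
```

Under these assumptions, putting humans on the tasks where AI's edge is smallest maximizes total output, even though AI is absolutely better at every task.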
It seems a pretty big extrapolation to think we will have AI robotics engineers fairly soon. The reason white collar jobs are in danger is that they involve language manipulation, which LLMs are good at. Doing actual engineering will probably take longer.
Half the commenters on Substack are complete retards who probably believe everything Elon Musk says. These morons are predicting the plot of “Terminator” by 2030. It should have happened 20 years ago.
"We'll make great pets" - Porno for Pyros
This is a really utopian view of the future, unmoored from a careful reading of capitalism or human nature. The people who run large corporations don't care about their employees, except insofar as they are useful to them. They will cut as many as they can in the name of efficiency; in fact, they have a duty to their shareholders to do so.
The oligarchs who run society don’t care about the peasants. You have stated in the past that what really matters is government power, not economic power, but as recent events have shown, you can buy the presidency for $33B, which is chump change to the wealthy. The top 20 wealthiest people in America have over $3T in net worth. They can - and mostly have already - bought all of Congress. Do you think they *want* to pay taxes? Of course not. People at the top are driven by ambition and greed, not humanitarian impulses.
In the past, the working class were able to negotiate by withholding their labor, or by using violence. The first will soon no longer be possible, as jobs are 95% eliminated by AI. So the vast majority will only survive on whatever scraps the Musks and Bezoses of the world decide to leave to them. You can see how they treat their own employees to imagine what they will do to people who don't even help them become rich, but instead are just parasites to them.
The final negotiating chip the working class has is to use violence. Threat of another French Revolution has moderated things at least somewhat. But Silicon Valley is rapidly building AI war technology that can be run by a few trusted lieutenants and should be able to easily keep the mobs in check.
So, no checks on the power of the elite at all. All the wealth and power goes to the 5% (generously) that run things and the rest are serfs. A very small percentage make all the decisions. This is our most likely future.
I largely agree that the primary AI-related worry is doom. I disagree that the right move is to give up and hope the developers solve alignment. They very clearly won't. We do not live in that world. Our reality is far less convenient. Aligning a superintelligent machine with human goals is a wicked problem that no one is remotely close to solving. The people who have been working on alignment for longest (MIRI's been at it for decades, not years) have said as much, time and again, and deliberately chose to pivot their entire organization towards communicating that fact. Even OpenAI said so themselves two years ago! And that was before most of their alignment researchers left.
"There's no political will to shut down AI" is a self-fulfilling prophecy, and not a good reason to give up on creating that will. Plenty of politicians say, in private, that they're also extremely worried but that they don't want to look silly or alarmist. The atmosphere needs to change, that much is true.
You're one of the people who might be able to help it change. If, as you say, you take seriously the chance of an AI-fueled apocalypse, then perhaps it's worth engaging further with arguments about the likelihood.
What assumptions are you making about alignment? How would your opinion change if you pegged the chance of catastrophic misalignment at 50%? 80%? 95%? What would it take to update you there? There's only about 7 bits of evidence between 14% and 95%.
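The "about 7 bits" figure comes from comparing log-odds; here is the standard Bayesian bookkeeping behind it (nothing essay-specific, just the definition of bits of evidence as a log-odds difference):

```python
import math

def bits_between(p_low, p_high):
    """Evidence, in bits, needed to move a belief from p_low to p_high
    under Bayesian updating on the log-odds scale."""
    odds = lambda p: p / (1 - p)
    return math.log2(odds(p_high) / odds(p_low))

print(round(bits_between(0.14, 0.95), 1))  # ~6.9 bits
```

Going from 14% to 95% requires multiplying the odds by roughly 117, i.e. just under seven doublings, so the "7 bits" framing checks out.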
I think it's amazing that someone with children accepts a 14% risk of death/doom. There is nothing else where we accept remotely that level of risk. Any other industry posing even a tenth of that risk would be shut down or sued/fined into oblivion.
The best case scenario is a world in which all people lead comfortable lives that offer no meaning.
There are ways to find meaning that don't require you to be economically useful, for example, religion.
Belief in which has fallen off a cliff.
That's the problem exactly for those nonbelievers. But they can still have kids (family even now is better for bringing meaning to your life than jobs are), maybe contemplate in some secular way, dunno.
Birth rates are crashing everywhere.
Okay, then we're going to spend our days contemplating the divine and fucking?
Why not? That's what the ultra-Orthodox Jews do, and they don't complain that their lives lack meaning. For me personally it sounds very enjoyable.
Most Americans, though not ultra-Orthodox Jews, don't complain that their lives lack meaning either.
However, it still doesn't change the fact that their lives lack meaning, for which the consequences will inevitably be felt after they've gained enough life experience and wisdom to become aware of the fact (but maybe too late to do anything meaningful about it).
I honestly have very mixed feelings about this article. I agree with most of what you've said about AI replacing human jobs not being a threat. At the same time, I am inclined to go much further and say that the idea that LLMs can do most cognitive tasks as well as a human being is total bunk, believed only by gullible people who want to believe in it. My own view is that Yann LeCun (the deep learning researcher and 2018 Turing Award laureate) is right when he says LLMs are less intelligent than a housecat.
I've written about this at my own blog before and will probably do so again: https://twilightpatriot.substack.com/p/the-fantasty-of-ai-alignment
The gist of it is that AI has very powerful pattern-matching abilities that allow it to (for example) ace the LSAT or the tests they give kids at the Math Olympiad... but only because the internet is full of very similar practice questions on which to train it. If a cognitive test doesn't closely parallel something in its training data, then it fails, usually in a ridiculous way, even if it's something that would be easy for a human - such as listing the Super Bowls where the winning score was a prime number (very easy provided you have a list of Super Bowl outcomes and a list of the first 20 or so primes, both of which AI "knows") or spelling the names of the US state capitals backwards and putting them in alphabetical order.
You see similarly silly things with image generation. Ask an AI to make a picture with a certain number of people and give it details about how to draw them - what they're wearing, what they're holding, hair color, etc... then it will make a picture in which the traits you've asked for - brown hair, overalls, etc. - are assigned to the figures at random rather than the specific people that are supposed to have them.
And don't even get me started on how every request to make a picture with a gibbous moon or a first or third quarter moon gets interpreted as a full moon instead... because full moons are MUCH more common in art, and all that AI does is pattern-matching, not thinking.
Hence my belief that AI risk is nil and that the "AI researchers" who say otherwise are themselves engaging in make-work to make their jobs seem important. After all, ordinary people would lose interest in AI "alignment" and all that claptrap if they just played around with AI with the INTENTION of spotting its weaknesses, rather than blindly trusting the "experts" when they say stuff like "Oh no! It passed such and such psychology exam! It is smarter than 90% of college students!"
I asked O3 mini for the prime number super bowls and it gave:
"Super Bowl IV (1970) – Winning score: 23
Kansas City Chiefs defeated the Minnesota Vikings 23–7.
Super Bowl XIV (1980) – Winning score: 31
Pittsburgh Steelers beat the Los Angeles Rams 31–19.
Super Bowl XXVI (1992) – Winning score: 37
Washington Redskins edged the Buffalo Bills 37–24.
Super Bowl XXXII (1998) – Winning score: 31
Denver Broncos defeated the Green Bay Packers 31–24.
Super Bowl XXXIV (2000) – Winning score: 23
St. Louis Rams won over the Tennessee Titans 23–16."
..... etc. etc............
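For what it's worth, the primality claim in the quoted answer is easy to spot-check. This sketch only verifies the scores quoted above; whether the model's list is complete is a separate question:

```python
def is_prime(n):
    """Trial division, fine for scores this small."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Winning scores exactly as quoted in the model's answer above.
quoted = {"IV": 23, "XIV": 31, "XXVI": 37, "XXXII": 31, "XXXIV": 23}
print({sb: is_prime(score) for sb, score in quoted.items()})
# every quoted winning score (23, 31, 37) is indeed prime
```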
Those kinds of issues were more of a problem for older models and seem like they're being ironed out.
Of course the elites will be fine, and better off than ever as always, with whatever bullshit job they can have, but there's no mass technological unemployment scenario that would not lead to a crisis. Millions of people work white collar or driving jobs that will inevitably be completely automated, and I don't think anybody is going to give them bullshit jobs to make up for it. To think UBI or whatever mass welfare program will keep everybody out of a job well above the poverty line is naive and ignorant of how power works, because if the productive value of the masses is zero, no government or plutocrat has any reason to give them shit (might as well even kill them), let alone vast amounts of wealth.
But I agree the cat is already out of the bag and AI development cannot be stopped, so I guess this is inevitable then. But don't pretend there's "nothing" to worry about, unless of course you are speaking for yourself and the country's elites.
It's the speed of change, more than the change itself, that will be most likely to create mayhem; the disempowerment of millions of white collar workers is not a non-issue if there are no viable alternatives and they are unable to generate wealth while only a few hoover up all the money
Disempowering millions of people will come with costs, even if we do find a solution eventually
I guess I am not worried but for the opposite reason - I don't think the AI is that good, or improving that fast. Most jobs are not fake jobs - your psychiatrist is hopefully doing more than dispensing the same pill for decades.
I am engaged with AI for multiple hours most days and, the more I work with it, the more I feel frustration as opposed to fear. This is especially true for creative writing. It's good for research and data analysis, it speeds up coding by a few orders of magnitude, but you need to check everything. It will go from analyzing data to inventing data between two prompts for no obvious reason.
My biggest concern at the moment is that a lot of people will be too lazy to check AI work because it looks good/plausible even when it's not. There is a tremendous temptation of productivity at the cost of accuracy. I also expect AI to help truly creative people realize more of their ideas as it handles more of the boring stuff.
So, medium-term I think we will see a new class of AI winners, as opposed to AI being the winner. Scammers, ruthless productivity chasers and genuine creatives seem like they would benefit the most.
Btw, did you use AI to create those charts and the narrative around it? Just curious because that part was really boring.
Let's go with (1) and (2).
It's quite possible for AI to be controlled by humans other than the CCP or Taliban and for humanity to have a problem. (And, in China, it probably will be controlled by the CCP.)
The Industrial Revolution was human-controlled and made lots of people much worse off until labor movements forced the capitalists who owned the machines to give up more of the proceeds. In the USA, we had the Gilded Age; other countries have similar eras. It's similarly quite possible that lots of jobs get automated away, creating massive unemployment and instability enough to impoverish or kill a significant portion of the population, possibly even greater than the portion who benefit from the increased efficiency. Look at how much damage moving a few tens of million jobs to China did to the central USA--we got an opioid epidemic and a right-wing populist uprising. AI's likely to be at least as bad.
It also spawned communist and socialist movements, usually closely intertwined with the above labor movements, that caused huge amounts of death and suffering; perhaps we'll see what the collateral damage of the oncoming IRL version of Dune's Butlerian Jihad against machines is?
This is my biggest fear. You don't put the entire professional class out of work without a huge socialist revolution. But even worse will be the people who start terrorist activities to essentially take out technology. Rather easy to imagine them going way too far, if enough people are out of work, and just start blowing up power stations and taking us back to the stone age. I really do not want to live in a pre-electricity era, so to me this would be the worst possible outcome.
Besides all this, the upper middle class who will be out of work are also the ones that spend money on everything beyond the basics. Once there are no office workers, there are no more offices, erasing the entire value of every downtown and all office buildings. Every product and service sold to those people no longer has customers who can pay, so those businesses crash too. It would be a gigantic economic catastrophe, and there's no good outcome there that isn't at the very least very painful in the short to medium term.
Technocracy, automation and digital control.. is all in favor for who? Less humans, less trouble and more for the few on top..
Just like 15 Minute Concentration camps.. we have been warned.
We are organic beings in an organic world... we just have forgotten.. or were made to forget.
You could say the same about every earlier technological revolution. For example, there have been numerous diatribes arguing that farming and writing were developed primarily to enslave the majority of humans for the benefit of an elite class.
Through a certain lens, those narratives are correct. Increasing levels of technological and cultural sophistication generally lead to greater inequality—both material and social. However, these narratives often overstate the level of agency and planning behind these advances. Moreover, the individuals and social groups that ultimately gain control over these institutions are rarely the same ones who discovered or invented them. And in the end, absolute living standards tend to rise with these advancements.
Nick Land is one of the most prominent recent intellectuals to explore patterns of technological disempowerment. I’ve personally never been able to get through his verbose and meandering writing, but credible review articles suggest this is his central thesis. His Accelerationism theory predicts that humans will become increasingly disempowered, eventually being replaced by the technocapitalist system—and that there’s nothing anyone can do to stop it. As an aside, while his ideas seem Marxist in background, he is also regarded as a leading Neoreactionary intellectual.
Regardless, I think we should focus less on who, if anyone, is actually in control of this process and instead concentrate on how we can react and adapt for our own well-being.
The cotton gin couldn’t also paint a picture, diagnose a patient, design a house, etc. Slightly different.
There is always a line that should not be crossed. Machines to a certain extent, reasonable.. most things these days seem to have evolved into: inhumane, unsustainable, ridiculous and defeating the original purpose of making life better.. just look at the automated phone systems.. it has become a bane and is used against us.
Balance, common sense and cognitive thinking are missing. "Within reason" is gone.
People seem stupid (still behave like animals, and worse) and we produce artificial intelligence.. lol as if the cell phones were not warning enough ...
This time, it's global. Maybe humankind will learn its lesson... finally.
We have been here.
All successful empires always ended up in rubble.. old Romans, Egyptians, Mayans, Incas, gone insane with greed and excesses of the lower temptations.
Positions of power always attract the wrong characters. A healthy human being simply wants to enjoy its existence. Not rule others.
We seem to want to grow and grow.. bigger and bigger. Forgetting our limits. Nature has limits, this planet has limits, so do we.
Some think they can break the laws of nature, make up human laws, rules and regulations.. control and greed. Again and again, history is full of it.. exploitation of man and land. Oppression, tyranny and destruction.. again and again.. delusions of grandeur.
Maybe all we need to do is find our place, stay reasonable and sustainable.
Stay local and sane instead of: Go big or go home.
The principle of Noble. Self- restrain and self governed. Instead of loud, proud, large and in charge..
Natural life instead of human hubris and excesses.
We have forgotten that nature is our only source and we are part of nature.
We continue to produce weapons of mass destruction, bio weapons, dangerous chemicals and pharmaceuticals.. that alone shows our insanity as a species.
Self destruction.
Now we move on to Mars.. wherever we go, there we'll be.
And the newest genius idea is artificial intelligence.. lol
Automation, digital control .. replacing ourselves???
15 Minute Concentration Camps ..
The Earth was given to us. Free of charge.
Why do we follow this insanity?
Why does human kind fall for this trap of leadership again and again.
Self- discipline and self- governing.
We simply cannot subsidize responsibility.
Too comfortable?
We used to feed ourselves, cloth ourselves.. giving up responsibility and independence.. selling out .. industry and military complex took over. Brilliant deception, long time in the making.
Tax treadmill on one side, lazy chair on the other.. kept in check by fear and propaganda.
This country was founded on "Ungovernable".
Let's get back there.. roll up those sleeves..
and stay within reason.
We have repeatedly surpassed our limits over the last few thousand years. Again, I’d cite agriculture and writing. We also have religions and other cultural values that have developed over millennia. The Industrial Revolution—actually a series of machine-power revolutions, from steam to internal combustion and electricity—was particularly significant because it drastically increased our productive capacity.
Our information revolutions began with the earliest developments of written language, followed by a massive expansion in distribution due to the printing press with movable type. That process was later industrialized with steam-powered printing and distribution via trains and trucks. The early analog electrical technologies of radio and then broadcast TV were even more impactful. And we’re still working through the digital revolution, with the internet, social media, streaming, and now generative AI.
The point is that none of this is under anyone’s control. There have been numerous attempts to regulate technology after its deployment. Some, like the Catholic Church’s control of the printing press, had limited success. The Ottomans exercised even greater control over religious texts for an extended period. Yet no one has ever consistently maintained control, let alone rolled back technological progress.
It’s understandable that some prefer a simple, technology-minimal life. That too has happened repeatedly throughout history, and we’re all familiar with contemporary Amish communities that strictly limit their technology use. Yet the broad arc of society always moves toward greater technological adoption. Any mass culture that resists this trend is almost guaranteeing its continuous disempowerment—until it becomes entirely irrelevant.
I think we totally agree. It will be our choices that seal our fate. Where would industry be if no one would buy the poison...
If we all did act noble, there'd be no need for police..
One major problem with even the best regulations is that they trade growth, development and efficiency for safety. Hopefully with AI-powered software, we will be able to greatly minimise the costs while retaining the full benefits. We might end up in a situation where someone goes through a 4-year degree to gain credentials just to sign off on an AI-generated piece of paper.
Fully agree with the point about the human preference for humans. I think this is largely because of aesthetics and status. We have had machine-made, off-the-shelf food for decades, yet restaurants and chefs are still in business. If all that mattered was efficiency, we could have eliminated cooking to a great extent. It's largely due to aesthetic preferences and the fact that eating food cooked by humans signals high status, and humans aren't going to stop playing status games any time soon.
I don't think white collar work is fully going to go away though. A large part of white collar work is communication, dealing with inputs on the ground and maintaining a non-generalisable knowledge base. There is a lot of context and data generated in white collar work on the fly that is not really generalisable, and I don't think AI will be able to handle that better than humans. At least for the near future, I expect a human + AI pairing to be the norm.
We should also see lowering of costs of entry for businesses into many markets and hopefully increased competition and more differentiated products.
Also, it's not certain to me that AI will just keep on improving continuously. It is bound to hit some wall due to a lack of data to train on, or the methods used to train models not being able to generate better models, thus requiring another major breakthrough in science to continue the growth. And breakthroughs in science are unpredictable. Most of the improvements after GPT-3.5 have been due to using more data or tweaks in the existing ways to train or use these systems, not due to radical improvements in the state of the art (AFAIK). This should hit a wall sooner rather than later.
I disagree. There will be a massive impact.
Download my free book about #AI and unemployment (reviewed or written about in The Economist, The Washington Post, and many other major media publications)...
https://mfordfuture.com/2025/01/29/download-my-book-about-how-ai-will-automate-jobs-and-lead-to-technological-unemployment-the-lights-in-the-tunnel/
Re: Adderall, the government seems to have taken a vested interest in disallowing drugs to those who want 1) to get high or 2) to enhance their performance above "normal" human baselines.
There's a theme here with the body of drug law.
Richard, your argument from Luddite invocation is at least subject to question I’d say, from sociologist Eric Dahlin’s 2024 sociological study of U.S. job loss to AI ( https://bit.ly/DahliE-2024-6 ). Dahlin finds preliminary evidence of AI exceptionalism compared with historical waves of technological displacement and re-employment.
Your argument that economic growth brings rising welfare and falling poverty may, I think, come undone once due attention is paid to material throughputs: fossil fuels and other extraction, conversion to human use, waste sinks, and exhaustion ratios. Ecological analysis by, for example, Seibert and Rees, published in the journal Energies (2021, https://bit.ly/SeiRee-2021 ), predicts that future trends in material throughputs for human consumption will severely compress living standards and human population limits, regardless of what humans may wish.
GenAI is also a huge accelerator of material resource overshoot, summarized for example in this Conversation essay about water from 2024 ( https://bit.ly/GuBovV-2024-3_21 ) and this Financial Times report about coal (2024, https://on.ft.com/4chyMNI ). I understand DeepSeek is reporting far more efficient yields than the U.S.-based GenAI products, but we have yet to see independent verification and scholarship on that. Overall, I am skeptical, based on historical trends in energy and technological change when analyzed transnationally (I could cite more, but see for example Seibert and Rees 2021, above).
About welfare policy prospects, and universal basic income (UBI) as a solution to AI displacement of human jobs: I'm persuaded by comparative political scholar Bo Rothstein, who specializes in the Nordic countries' welfare regimes, that UBI is infeasible ( https://bit.ly/RothB-2017 ). Considerable evidence from the rich democracies suggests that those who pay into the welfare state expect most able-bodied beneficiaries of taxes and transfers to work, pull their own weight, and contribute to the common weal unless personally unable for specific reasons. That is not a UBI logic but rather a welfare-policy logic of programs, state administration by civil servants managed for even-handed implementation, social workers with case responsibilities, and so on.
Finally, Richard, your criticism of welfare statistics uses a source I would not. You allude to a "trick the left has pulled off" to muddle the anti-poverty effects of taxes and transfers; in my view, you have simply cited a weak data source. I would point instead to sociological studies that go cross-national, since that is the kind of data your super-macro, comparative-historical argument in the essay above calls for.
Typically in recent years, sociologists compare economic distributions *after* taxes and transfers. Some of those studies also compute, counterfactually, what the distribution *would have been* had the taxes and transfers not happened. These studies are excerpted and summarized, for example, in one of the leading introductory sociology textbooks, by Manza (NYU Sociology faculty) et al., 2023 ( https://bit.ly/ManzaJ-2023 ), section 11.4. Also relevant is a new mobility study summarized in the Atlantic by one of its sociologist authors (Parolin 2025, https://bit.ly/ParolZ-2025 ). And sociologist David Brady, a main contributor to the cross-national measurement of poverty rates and welfare-policy effects, summarizes and criticizes the U.S. government's history of welfare-metric methods in the first eight paragraphs of his 2023 paper ( https://bit.ly/BradyD-2023_SciAdv ).
For a while there will be some people employed in jobs that most humans would rather have done by another human. For now that means certain parts of the law, certain parts of medicine and psychiatry, and some live entertainment. For a while at least, people will want human lawyers, doctors, musicians, and hookers. All those jobs are at risk within a generation or less as the robotic version surpasses the human one. There will always be a very niche market serving the extremely wealthy.