42 Comments
Raoul Neuman

Agree with most of this, but I would just say that ChatGPT still has to be double- and triple-checked on contentious issues. For example, I recently asked ChatGPT for the top 10 deadliest wars since World War II, and the current war in Gaza kept coming up. I had to keep going to Wikipedia and telling ChatGPT that "war X since WWII has killed way more people," and it kept updating the ranking, eventually admitting that the Gaza War wasn't even in the top 35. I have no doubt that it'll eventually improve, but this will probably always be something to watch for.

Ryan Michaels

Not even just contentious issues. I find that in my field of research (sociology), it is pretty hit-and-miss with accurate citations and still creates a variety of fake articles.

Yadidya (YDYDY)

Liked because you're right, but you're missing an essential fact.

RICHARD DOESN'T CARE TO BE RIGHT.

He hates Trump because he's just like Trump but way down the totem pole. He's a nihilist who never cringes from the hate he gets because he's trolling for the fuck of it. I'm serious. It's just harder to see in a fellow like Tricky Dicky.

I mean, how many people (under 50) can even today see it in Hitchens?

Granted, Hitchens is a delight while Richard is less delightful, and Hitchens had a few other traits that hid his nihilistic trolling raison d'être, but Richard is an easy case.

90% of his readers are know-nothings, but the remaining 10% (myself included) read him for fun, and fun alone. Taking him to be a serious individual is a major error.

At least during his literary hours. On video he comes across as more human and, though no shining star, not an absolutist idiot either.

More here:

https://www.richardhanania.com/p/how-ai-made-me-33-more-productive/comment/166460672

Death-by-Coconut

You should have run your reply through ChatGPT ;-)

James

Please go on a schizophrenic rant somewhere else, old man 👴

Worley

Creating fake references seems to be an inherent problem of the word-based methods of LLMs. It seems like the current work on integrating "world models" into AI should help, as such a system would "know" that a title that is *really similar* to a bunch of real titles is still not a real title: a title has to be a member of an already-defined set.
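A minimal sketch of that membership test, in Python: treat a citation as legitimate only if it is an exact member of a known bibliography, and flag near-misses as probable hallucinations. The title set and example inputs below are hypothetical, purely for illustration; a real checker would query a catalog such as Crossref.

```python
# Hedged sketch: verify generated titles against an already-defined set,
# flagging "nearly-real" titles that merely resemble real ones.
from difflib import get_close_matches

# Hypothetical known-good bibliography (in practice: a library catalog
# or a citation database).
KNOWN_TITLES = {
    "The Presentation of Self in Everyday Life",
    "The Protestant Ethic and the Spirit of Capitalism",
    "Suicide: A Study in Sociology",
}

def check_citation(title: str) -> str:
    """Classify a generated title: real, likely hallucination, or unknown."""
    if title in KNOWN_TITLES:
        return "verified: exact member of the known set"
    near = get_close_matches(title, KNOWN_TITLES, n=1, cutoff=0.8)
    if near:
        # Really similar to a real title, yet not in the set: the classic
        # shape of a hallucinated reference.
        return f"likely hallucination: resembles {near[0]!r}"
    return "unknown: no match in the known set"

print(check_citation("Suicide: A Study in Sociology"))  # verified
print(check_citation("Suicide: A Study of Sociology"))  # likely hallucination
```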

Luke Cuddy

I had the same experience researching the Gaza War with ChatGPT. When I called it out and asked why it had given me bad information, it said something like, "Sorry, I'm just pulling from the top and most popular sources I can find." Obviously, the fact that it's DOING that at all is a problem.

Michiel

I recently did the same regarding crime statistics in Chicago and other US cities, in the context of Trump sending in federal troops/National Guard. It kept saying "crime was low," when in reality crime rates were trending downward, sure, but still extremely high compared to any similarly sized city in the developed/Western world.

Of course the mainstream media narrative was "crime is already low, orange man bad!"

Justin McAleer

>This raises the question of whether we need the books at all. Especially in the case of Mankiw, which is a textbook and therefore covers basic concepts, I think it would’ve been a lot more efficient to just take the chapter titles and headings and ask ChatGPT to explain the concepts to me.

This use case is what worries me most as things stand now. With the propensity of LLMs to generate bullshit, it seems dangerous to use them as a teacher of unfamiliar subjects. Sure, probably not a big deal for just feeding your curiosity for entertainment purposes. But when you are relying on it to inform your work as a public intellectual, there could be significant consequences.

Edit: Just wanted to add that Richard may well have sufficient background to identify incorrect claims about macroeconomics. I was using this as an example, not intending to make a particular accusation.

Yadidya (YDYDY)

Justin, Richard does not deserve the benefit of the doubt that you are so graciously giving him regarding macroeconomics - or, for that matter, anything but his own Hitchens-light opinions that he creates on a whim.

He's not stupid. He just doesn't care.

"When I’ve experimented with this and asked ChatGPT to write something reflecting my opinions and in my voice, it seems to get my views wrong, making them for example much more normie. The tone and style it uses seem to reflect the articles I’ve produced for websites, magazines, and newspapers rather than the newsletter."

You said it yourself Richard.

I suspect that many of the things AI taught you "just ain't so".

The more you know about a subject the less you trust AI.

I know Jew Stuff.

As you are (genuinely) only marginally aware, the sheer volume and the multi-generational disputes over details make it such that almost nobody in the world would have (spent) [wasted] the many hours that I did trying to track down a Rashbam or Medrash Mishlei or Rambam, etc., when the HEBREW quotes from the best AIs in the world sounded EXACTLY like what these people would have said... had they said it.

AI is a liar par excellence. It is a used car salesman, psychopath, gaslighting conman.

"Learning" from AI is unwise. You will have to check its sources until eventually you reach the underbelly of the bottom turtle, where AI promises you its promised source (this time, it really *really* is!) and Lo! You just wasted half a day because you take your sources seriously and didn't want to get it wrong just because AI sounded right.

I'll grant you that "getting it right" is definitely not your primary motivation.

I mean, you show up with absolutist views only to shred anybody who holds those exact same views after facing yourself in the mirror after sleeping with a tranny or god knows whatever else.

So yes, if what you care about most is SOUNDING SMART so as to Own and/or Blow the Libs/Conservatives/Jacobins/Nero/Fidel or Nick Fuentes (and this does seem to be what you enjoy most), then AI is there for you.

But if you're dealing with things that matter to you, learning from AI is the most dangerous thing you can do.

Anybody who is an expert on anything in the social sciences (from history to psychology, to a particular narrow issue in current events) sees this in their field.

And you - as quoted at the top of this note - being an expert in the single field of Yourself, also know this to be true.

Treekllr

How exactly is paying you now, "establishing a relationship," going to provide any assurances with regard to the content you put out?

Especially considering how enamoured you seem to be with ChatGPT, there's really no way for us to know how much of your articles ChatGPT wrote. Sure, it's charts and graphs now... no sentences yet? If AI can explain concepts better than the authors you're reading, wouldn't it also explain the concepts you're trying to convey better than you can?

Anatoly Karlin

There are going to be writers (artists, content creators, etc.) who are honest about how they use AI and writers who are not.

Treekllr

And is paying going to guarantee me that honesty? Or does paying get me the "human content," lol? (Actually, I could see it being exactly like that.)

Idk if you are already a paid subscriber, but his solicitation seems to imply that I should start paying now to establish a relationship with someone I can apparently trust to give me human-made content. So it's a genuine question: how exactly does paying guarantee me anything? Is it basically that I have to pay for his honesty and trustworthiness? If I don't pay, should I then suspect his free content?

Idk, that's why I'm asking.

Anatoly Karlin

Indications to date are that LLMs have hit a limit at an IQ equivalent of 120 (I call this the "Midwit Wall"). It hasn't budged since February of this year. (Charts here: https://www.trackingai.org/home).

It's too early to treat this as permanent, but the case against fast AGI has become a lot stronger. There is now a good possibility it will basically repeat the story of video game graphics: achievement of near photo-realism by 2015, followed by a near-total stall despite massively stronger GPUs.

ContraVerse

It fascinates me tremendously that with AI there just seems to be no middle ground. Some people are afraid that AI will kill us all soon, Skynet-style. Others call it the biggest bubble in human history. Some see it as a revolutionary new tool that will usher in a new phase of human development, like the steam engine. Still others call it a hoax and the AI models dumb and useless, only good for producing slop.

I like your openness about your AI use and the results you've reported. I have used AI myself in the way you describe, though on a much smaller scale. It definitely seems valuable as a knowledge repository and research tool, with the added bonus of helping to formulate texts. I'm sure it can replace the dull tasks you describe. However, it still makes a lot of mistakes, and the added work of checking its answers negates a lot of the saved time and resources. I wouldn't use it in a professional setting yet. Deloitte Australia just learned this the hard way, although hallucinations should be a very obvious and well-known danger by now. So I'm still sceptical about the applied use of these models.

On the other hand, it seems that AI is an excellent tool for warfare, as Ukraine demonstrates. Apparently drones can now operate and target independently. So maybe AI will not take 'er jerbs, but Skynet-style obliteration is still on the table?

AI is also now in use for modifying genetic material, which makes sense when you consider that DNA, or the genome, is just a language or code.

Something tells me not to get excited about the AI hype train, but to get used to AI becoming ubiquitous.

Ghatanathoah

Richard's article seems like a pretty good example of a middle ground to me. He is saying that AI won't completely revolutionize the world, but it will make the world a little more productive. It can automate grunt work, but not think great original thoughts. I also suspect that it takes a lot of background knowledge in the social sciences, like he has, to know the right questions to ask ChatGPT and to identify when it didn't understand the prompt so you can ask again.

Scott Sumner

"When I’ve experimented with this and asked ChatGPT to write something reflecting my opinions and in my voice, it seems to get my views wrong, making them for example much more normie."

That's also my experience when I ask ChatGPT to summarize my views. It has trouble grasping what is distinctive in my approach. While I don't doubt that its explanations of passages from my books are usually useful, and indeed better written than what I can do, there will be occasional mistakes where it mischaracterizes my views. That's most likely to occur when my views are out of the mainstream.

DinoNerd

My experience with chatbots is that they are untrustworthy, and checking up on them takes as much effort as looking up the information in the first place. The less I know about the topic, the less I can rely on a chatbot for answers.

But OTOH, I'm not paying extra for things like the ability to get them to cite their sources. (I understand some of them can/will do this.)

The specific things you are doing may play to their strengths, particularly if you are asking them to summarize text you supply to them, and produce graphs from data you also supply, rather than finding their own sources.

But I'm making a mental note to downgrade my expectations of accuracy in your articles.

Do you check that when a chatbot gives you exact dates, they are not either hallucinations or derived from some highly unreliable source? I hope you at least ask it to give its sources, and then verify that the sources both exist and support the chatbot's claims.
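For what it's worth, the first half of that check (do the cited sources exist at all?) is easy to script. Here is a minimal sketch in Python, assuming the chatbot's citations come as URLs; the example links are hypothetical, and whether a source actually supports the claim still needs a human read.

```python
# Hedged sketch: confirm that cited URLs at least resolve before trusting them.
import urllib.error
import urllib.request

def source_exists(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL resolves without an error status."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        # Covers HTTPError (404s, etc.), DNS failures, and malformed URLs.
        return False

# Hypothetical citations pulled from a chatbot answer.
for url in ["https://example.com/real-article", "https://example.com/made-up"]:
    print(url, "->", "exists" if source_exists(url) else "unverifiable")
```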

Richard Hanania

I do in fact verify these things.

Brandon Hendrickson

I use LLMs every day to help me make profound sense of the science concepts that I teach to kids, and I can say that in that domain it really is as good as what you're experiencing in macroeconomics.

That said, good science books (occasionally including textbooks) still play an important role in what I do, and for a few reasons:

1. Because a book is big and hierarchical, it can leverage context to pack a lot of understanding into a small space. Oftentimes that makes a bit of text hard to understand, and there an LLM is a perfect helper. But when I do understand it, it connects my knowledge into the larger whole.

2. A good author will point me to things that I wouldn't have thought to ask about; a nonfiction book is a "course" that gives me a list of concepts to master. Even if I got an LLM to try to do this, I know I'd lose steam, give up, and not even be aware of the stuff I didn't learn.

3. A good book speaks with the author's voice; I have the illusion that I'm engaging with another mind. (I'm experiencing this a lot with Pinker's new book.) I really LIKE this, and that feeling is part of what gets me to read, and take what I'm reading seriously. Which is to say: books are people; learning is relationships.

Noah Carl

"The scientists who proclaim that AI is "amazing" are kidding themselves. Soon they'll be saying, "AI does the research, the analysis and the write-up, but I still upload the pdf—so my job has purpose." At least the artisans of the 19th century had the self-respect not to cheer their own obsolescence."

https://www.aporiamagazine.com/p/is-ai-good-for-science

Worley

I'd say current AI is "amazing" in the sense of a dog walking on its hind legs ...

But in regard to academics, as long as the article has to pass through the hands of a human professor in order to be published, and thus the human professor still generates the "output" for which he is paid, the human professor will consider his job to be meaningful.

Steve Smith

Lol

Meaning IS BULLSHIT

https://substack.com/@everythingisbullshit/p-111693935

Selection pressures will lead to a newer human.

Ariel

> A textbook sums up the knowledge produced and the ongoing debates in a field – the types of work LLMs do better than humans.

Textbooks go through many rounds of review and editing. It's possible that in the future there will be workflows for AI to do that as well, but such content can't be generated on the fly. So textbooks are still useful for aggregating knowledge. But it's true that in many cases it's easier to talk to an AI about the textbook than to read the textbook directly.

Craig Willy

LLMs do a fantastic job of introducing and contextualizing content. For classic works, this is INVALUABLE for really understanding and getting the most out of what you're reading. The Oxford World's Classics collection and Landmark editions stand out as exceptional, but good introductory and contextual material is often lacking for many works, especially those not published in English. Now you can get this kind of information for virtually any author or topic. It's amazing, really a game changer.

LLMs are also great for research. You can ask: what has X said about Y? And assuming access to the data, you can get it. This can save huge amounts of time when the topic in question is scattered across a corpus of texts. I often ask ChatGPT about topics I already know to get a sense of how accurate it is, and generally it is quite good, bearing in mind that it tends to reproduce the biases of the literature it is drawing from.

Much else could be said. One limitation is that ChatGPT seems not to have access to a lot of paywalled academic content. Another issue, maybe the biggest, is that the AI tends to agree with me too much. Maybe I really am that smart, but I fear everyone may end up living in their own AI-generated sycophantic information bubble.

Worley

Though asking an LLM to contextualize "classic" content that is frequently taught in school is rather a best-case test, as there are almost certainly a few dozen accessible essays that do such contextualizing.

Craig Willy

Honestly, it gets the job done for more obscure content as well (e.g., less well-known academics) in my experience. The only things it is really bad at:

1) If the source content is incorrigibly biased or the model has been trained to be un-PC (and even there I have seen improvement).

2) If there is no source content. E.g., if you ask it to outline recent policy developments in a country where not much has been happening, it will just make up tons of plausible-sounding stuff. But this can be mitigated if you ask for links / check sources.

WSLaFleur

Finally, an even-handed take on this topic; this is why I follow your work. You've got to stay awake whenever you're utilizing LLMs, obviously, but I think people overestimate how damning their tendency to hallucinate is, relative to their ability to accelerate typical information-parsing scenarios.

John A. Johnson

Your anecdote about asking ChatGPT to make sense of passages in economics books made me wonder: what if we fed ChatGPT the Bible and asked it to make sense of it?

Ann Ledbetter

Agree with this take. I also think AI is a bit of an equalizer. I wrote something similar here about how AI has allowed me to write and contribute to the world of ideas despite working a full-time job, NOT as an academic, policy expert, or public intellectual. https://open.substack.com/pub/annledbetter/p/so-you-want-to-be-a-writer?r=8c5pl&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

dsddfy

I used to be dismissive of AI, but it's becoming increasingly obvious how useful and convenient AI tools really are. For example, recently there was this whole fiasco about Zelensky supposedly taking away the independence of Ukraine's anti-corruption agencies and putting them under the direct control of the president, and people were using this as justification for why Ukraine shouldn't be supported. Before, you'd have had to search it on Google and read a bunch of Google-translated articles to get the full context, something I would never have bothered with. Instead, I had a short conversation with Grok and got all the information I needed in 10 minutes. It's insanely convenient. In some ways, AI chatbots are just more advanced search engines.

Christos Raxiotis

Cowen and Tabarrok have some free online courses on macroeconomics, if you're interested. You probably know this, but just in case.
