Discussion about this post

Raoul Neuman:

Agree with most of this, but I would just say that ChatGPT still has to be double- and triple-checked on contentious issues. For example, I recently asked ChatGPT for the top 10 deadliest wars since World War II, and the current war in Gaza kept coming up. I had to keep going to Wikipedia and telling ChatGPT that "war x since WWII has killed far more people," and it kept updating the ranking, eventually admitting that the Gaza War wasn't even in the top 35. I have no doubt that it'll eventually improve, but this will probably always be something to watch for.

Justin McAleer:

>This raises the question of whether we need the books at all. Especially in the case of Mankiw, which is a textbook and therefore covers basic concepts, I think it would’ve been a lot more efficient to just take the chapter titles and headings and ask ChatGPT to explain the concepts to me.

This use case is what worries me most as things stand now. With the propensity of LLMs to generate bullshit, it seems dangerous to use them as a teacher of unfamiliar subjects. Sure, probably not a big deal for just feeding your curiosity for entertainment purposes. But when you are relying on it to inform your work as a public intellectual, there could be significant consequences.

Edit: Just wanted to add that Richard may well have sufficient background to identify incorrect claims about macroeconomics. I was using this as an example, not intending to make a particular accusation.
