"The scientists who proclaim that AI is "amazing" are kidding themselves. Soon they'll be saying, "AI does the research, the analysis and the write-up, but I still upload the pdf—so my job has purpose." At least the artisans of the 19th century had the self-respect not to cheer their own obsolescence."
>This raises the question of whether we need the books at all. Especially in the case of Mankiw, which is a textbook and therefore covers basic concepts, I think it would’ve been a lot more efficient to just take the chapter titles and headings and ask ChatGPT to explain the concepts to me.
This use case is what worries me most as things stand now. With the propensity of LLMs to generate bullshit, it seems dangerous to use them as a teacher of unfamiliar subjects. Sure, probably not a big deal for just feeding your curiosity for entertainment purposes. But when you are relying on it to inform your work as a public intellectual, there could be significant consequences.
Agree with most of this, but I would just say that ChatGPT still has to be double and triple checked for contentious issues. For example, I recently asked ChatGPT about the top 10 deadliest wars since World War II, and the current war in Gaza kept coming up, only for me to have to keep going to Wikipedia and tell ChatGPT that "certain war x since WWII has killed way more people," and for it to keep updating the ranking, eventually admitting that the Gaza War wasn't even in the top 35. I have no doubt that it'll eventually improve, but this will probably always be something to watch for.
I use LLMs every day to help me make profound sense of the science concepts that I teach to kids, and I can say that it really is as good in that domain as you're experiencing in macroeconomics.
That said, good science books (occasionally including textbooks) still play an important role in what I do, and for a few reasons:
1. Because a book is big and hierarchical, it can leverage context to pack a lot of understanding into a small space. Oftentimes that makes a bit of text hard to understand — and there, an LLC is a perfect helper. But when I do understand it, it connects my knowledge into the larger whole.
2. A good author will point me to things that I wouldn't have thought to ask about; a nonfiction book is a "course" that gives me a list of concepts to master. Even if I got an LLM to try to do this, I know I'd lose steam, give up, and not even be aware of the stuff I didn't learn.
3. A good book speaks with the author's voice; I have the illusion that I'm engaging with another mind. (I'm experiencing this a lot with Pinker's new book.) I really LIKE this, and that feeling is part of what gets me to read, and take what I'm reading seriously. Which is to say: books are people; learning is relationships.
How exactly is paying you now, "establishing a relationship", going to provide any assurances in regards to the content you put out?
Especially considering how enamoured you seem to be with with chatgpt, theres really no way for us to know how much of your articles chatgpt wrote. Sure, its charts and graphs now.. no sentences yet? If ai can explain concepts better than the authors your reading, wouldnt it also explain the concepts youre trying to convey better than you can?
It fascinates me tremendously that with AI there just seems to be no middleground. Some people are afraid that AI will kill us all soon skynet-style. Some people on the other hand call it the biggest bubble in human history. Some people see it as a revolutionary new tool that will usher in a new phase of human development, like the steam engine. Some other people call it a hoax, the AI models dumb and useless, only good for producing slop.
I like your openness about your AI use and the results you've reported. I have used AI myself like you describe, though on a much smaller scale. It definitely seems to be valuable as a knowledge repository and research tool with the added bonus of helping in formulating texts. I'm sure it can replace these dull tasks you describe. However it still makes a lot of mistakes and the added work of checking its answers negates a lot of the saved time and resources. I wouldn't use it in a professional background yet. Deloitte Australia just learned this the hard way, although halucinations should be a very obvious and known danger by now. So I'm still sceptical about the applied use of these models.
On the other hand it seems that AI is an excellent tool for warfare as Ukraine demonstrates. Apparently drones can now operate and target independently. So maybe AI will not take'er jerbs but skynet-style obliteration is still on the table?
Then AI also is now in use for genetically modifying genetic material. Which makes sense when you think that DNA or the genome is just a language or code.
Something in me tells me not to get excited for the AI-hypetrain, but get used to AI becoming ubiquitous.
I used to be dismissive of AI but it's becoming increasingly obvious how useful and convenient AI tools really are. Like, recently there was this whole fiasco about Zelensky supposedly taking away the independence of Ukraine's anti corruption agencies and putting them under direct control of the President and people were using this as justification for why Ukraine shouldn't be supported. Earlier you'd have to search it up on Google and read a bunch of Google translated articles to get the full context, something I wouldn't have ever bothered with. But instead I had a short conversation with Grok and got all the information I needed in 10 minutes. It's insanely convenient. In some ways, AI chatbots are just more highly advanced search engines.
"The scientists who proclaim that AI is "amazing" are kidding themselves. Soon they'll be saying, "AI does the research, the analysis and the write-up, but I still upload the pdf—so my job has purpose." At least the artisans of the 19th century had the self-respect not to cheer their own obsolescence."
https://www.aporiamagazine.com/p/is-ai-good-for-science
Lol
Meaning IS BULLSHIT
https://substack.com/@everythingisbullshit/p-111693935
Selection pressures will lead to a newer kind of human.
>This raises the question of whether we need the books at all. Especially in the case of Mankiw, which is a textbook and therefore covers basic concepts, I think it would’ve been a lot more efficient to just take the chapter titles and headings and ask ChatGPT to explain the concepts to me.
This use case is what worries me most as things stand now. With the propensity of LLMs to generate bullshit, it seems dangerous to use them as a teacher of unfamiliar subjects. Sure, probably not a big deal for just feeding your curiosity for entertainment purposes. But when you are relying on it to inform your work as a public intellectual, there could be significant consequences.
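(For what it's worth, the heading-driven workflow the quoted passage describes is easy to script. Below is a minimal sketch using the OpenAI Python SDK; the model name, the `headings.txt` file, and the tutor prompt are all illustrative assumptions, not anything from the original post.)

```python
# Minimal sketch of the heading-driven workflow: ask an LLM to explain each
# textbook heading in turn. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; the model name and the headings file
# are placeholders.
from openai import OpenAI

client = OpenAI()

with open("headings.txt") as f:  # one chapter heading per line
    headings = [line.strip() for line in f if line.strip()]

for heading in headings:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whatever model you use
        messages=[
            {"role": "system",
             "content": "You are a patient tutor. Explain concepts plainly."},
            {"role": "user",
             "content": f"Explain this textbook topic to a newcomer: {heading}"},
        ],
    )
    print(f"## {heading}\n{response.choices[0].message.content}\n")
```

(Per the worry above, every explanation this produces would still need checking against the book itself.)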
Agree with this take. I also think AI is a bit of an equalizer. I wrote something similar here about how AI has allowed me to write and contribute to the world of ideas despite working a full-time job, NOT as an academic, policy expert, or public intellectual. https://open.substack.com/pub/annledbetter/p/so-you-want-to-be-a-writer?r=8c5pl&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Agree with most of this, but I would just say that ChatGPT still has to be double- and triple-checked on contentious issues. For example, I recently asked ChatGPT for the top 10 deadliest wars since World War II, and the current war in Gaza kept coming up. I had to keep going to Wikipedia and telling ChatGPT that "war X since WWII has killed far more people," and it kept updating the ranking, eventually admitting that the Gaza War wasn't even in the top 35. I have no doubt that it'll eventually improve, but this will probably always be something to watch for.
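(That back-and-forth is essentially a manual diff between the model's claims and a source you trust. Here is a minimal sketch of running the same check programmatically; both lists are illustrative placeholders, not real casualty rankings.)

```python
# Minimal sketch: flag where an LLM's claimed ranking diverges from a
# hand-verified reference list. Both lists are illustrative placeholders,
# not real casualty figures.
llm_ranking = ["War A", "War B", "Gaza War", "War C"]      # what the model said
reference_ranking = ["War A", "War D", "War B", "War C"]   # what you verified

omitted = [w for w in reference_ranking if w not in llm_ranking]
unsupported = [w for w in llm_ranking if w not in reference_ranking]

for war in omitted:
    print(f"Model omitted: {war}")
for war in unsupported:
    print(f"Model included without support: {war}")
```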
I use LLMs every day to help me make profound sense of the science concepts that I teach to kids, and I can say that it really is as good in that domain as you're experiencing in macroeconomics.
That said, good science books (occasionally including textbooks) still play an important role in what I do, and for a few reasons:
1. Because a book is big and hierarchical, it can leverage context to pack a lot of understanding into a small space. Oftentimes that compression makes a bit of text hard to understand, and there an LLM is a perfect helper. But when I do understand it, it connects my knowledge into the larger whole.
2. A good author will point me to things that I wouldn't have thought to ask about; a nonfiction book is a "course" that gives me a list of concepts to master. Even if I got an LLM to try to do this, I know I'd lose steam, give up, and not even be aware of the stuff I didn't learn.
3. A good book speaks with the author's voice; I have the illusion that I'm engaging with another mind. (I'm experiencing this a lot with Pinker's new book.) I really LIKE this, and that feeling is part of what gets me to read, and take what I'm reading seriously. Which is to say: books are people; learning is relationships.
How exactly is paying you now, "establishing a relationship", going to provide any assurances about the content you put out?
Especially considering how enamoured you seem to be with ChatGPT, there's really no way for us to know how much of your articles ChatGPT wrote. Sure, it's charts and graphs now... no sentences yet? If AI can explain concepts better than the authors you're reading, wouldn't it also explain the concepts you're trying to convey better than you can?
It fascinates me tremendously that with AI there just seems to be no middle ground. Some people are afraid that AI will kill us all soon, Skynet-style. Others call it the biggest bubble in human history. Some see it as a revolutionary new tool that will usher in a new phase of human development, like the steam engine. Still others call it a hoax, the AI models dumb and useless, only good for producing slop.
I like your openness about your AI use and the results you've reported. I have used AI myself the way you describe, though on a much smaller scale. It definitely seems valuable as a knowledge repository and research tool, with the added bonus of helping to formulate texts. I'm sure it can replace the dull tasks you describe. However, it still makes a lot of mistakes, and the added work of checking its answers negates a lot of the saved time and resources. I wouldn't use it in a professional setting yet. Deloitte Australia just learned this the hard way, although hallucinations should be a very obvious and well-known danger by now. So I'm still sceptical about the applied use of these models.
On the other hand, it seems that AI is an excellent tool for warfare, as Ukraine demonstrates. Apparently drones can now operate and select targets independently. So maybe AI will not take'er jerbs, but Skynet-style obliteration is still on the table?
Then again, AI is also now being used to modify genetic material, which makes sense when you consider that DNA, the genome, is essentially a language or code.
Something tells me not to get excited about the AI hype train, but to get used to AI becoming ubiquitous.
I used to be dismissive of AI, but it's becoming increasingly obvious how useful and convenient AI tools really are. For example, there was recently a whole fiasco about Zelensky supposedly taking away the independence of Ukraine's anti-corruption agencies and putting them under the direct control of the President, and people were using this as justification for why Ukraine shouldn't be supported. Before, you'd have had to look it up on Google and read a bunch of Google-translated articles to get the full context, something I would never have bothered with. Instead, I had a short conversation with Grok and got all the information I needed in 10 minutes. It's insanely convenient. In some ways, AI chatbots are just more highly advanced search engines.