Imagine how cool it will be when AI can not only write well but come up with ideas all by itself. Writers won't have to do anything!
I would disagree with your assertion at the end that you couldn't have said it better yourself. I mean, Claude certainly could have said it better, and I have a strong preference for your authorial voice over ChatGPT's.
I think dumb people, specifically those in cults like MAGA, cannot have their opinions changed by AI; they will just call the LLM woke or biased. We already see this on X, with people rejecting what Grok says and asking it pointed questions until it says something the X user likes.
AI being used to overcome natural shortcomings: couldn't this be said about nearly everything? Conscientiousness? Height? Humour? Extraversion? Will we all become extremely similar with the advent of AI-powered genetic engineering? Most personality differences are flaws that will eventually be fixable.
AI will be amazing for the problem of the sheer breadth of scientific knowledge slowing the advancement of science.
Good point about people focusing on the negatives. I once had someone send me an article about AI causing a death; I responded with one where AI saved a life, and didn't get a response.
> Maybe the problem is that using AI is not fair to other writers? Banning it would be like prohibiting steroids in athletic competition. But LLMs are available to everyone, and they don’t cause long-term health damage, so this is not the same thing.
That isn't the correct counterpoint at all.
The "maybe" you raise is much older than LLMs are; you might be interested in the existing discussion of "why are performance-enhancing drugs allowed in mathematics?".
The most important reason is that we want the mathematical results for their inherent value. This is similar to how performance-enhancing drugs (fertilizers and pesticides) are allowed in agriculture.
For performance enhancement to be banned in sports, it was necessary that the outputs of sports have no inherent value; the contest itself is the point. Writing isn't like that.
The real shift may not be from human writing to AI writing, but from writing as production to writing as validation.
Once generation becomes cheap, the scarce layer is no longer sentence-making itself. It is judgment, verification, and reputational accountability for what gets published.
That changes the role of the writer more than it abolishes it.
Richard you're getting increasingly long-winded, repetitive, and redundant in the points you are making. You also repeat yourself and say the same thing twice and then it's repetitive and redundant.
Another thing you do is repeat the same argument essentially in a subsequent paragraph. But the topic of that paragraph is the same as the first one.
I don't think it's a coincidence this is happening more with your increased use of AI to coauthor stuff. Actually reading the whole piece becomes a slog. The AI is making you boring.
I do think AI needs to be more concise.
> Richard you're getting increasingly long-winded, repetitive, and redundant in the points you are making. You also repeat yourself and say the same thing twice and then it's repetitive and redundant.
That's the only way to write if you're hoping to move the culture.
I agree with you on a high level. However, I feel like AI as an inoculation against conspiracy thinking has only panned out so far because the billing incentives are subscription-based. As soon as even one company becomes more ad-driven (looking at you, OpenAI), it will be incentivized to capture eyeballs and to train its models accordingly. From the trajectory of social media, it is pretty clear the only way to do that is some form of customized outrage manufacturing about an out-group.
Interesting thoughts. If you try to push against a technology that makes things easy, you are pushing uphill, and in the end you will almost certainly lose. I also believe that in the future AI will do more of our writing -- and, alarmingly, more of our thinking. Being able to write fluent prose will become a party trick akin to multiplying 3-digit numbers in one's head.
I think one big issue with AI-writing is that it will encourage people to do AI-based reading. Most of the way that people consume writing nowadays is skimming articles online. We don't have the patience to read entire articles anymore. I've already begun to ask AI to summarize articles that I am interested in, but don't want to take the time to read. AI-writing compounds this problem -- first by making more written content, and secondly by weakening the contract between author and reader -- why should I pay close attention to your writing when you used a machine to generate it?
Brave take on a hot topic. Did you get the same sense of satisfaction and accomplishment from writing the piece as you would have if you had written the conclusion and summary yourself? Have you noticed any difference in how you value your work when it’s AI written?
I wrote an article on the Dancing Israelis conspiracy and Gemini, Claude, and Grok all say that I’ve made a conclusive case. It’s pretty funny. https://andrewdolgin.substack.com/p/911-and-the-dancing-israelis-refuting?r=8yze6&utm_medium=ios
All I had to do was tell each one to grade the article on a rubric of “polemical advocacy investigative journalism” and to accept that the citations in the article accurately and truthfully reflect what I represent them to cite. Which is true.
Ironically, even though LLMs tend to give extra weight to claims from official sources (like the conclusions of an FBI investigation), they can still parse the logic more objectively than most people can on this topic. They don’t have the same emotional stigma response to a taboo topic and simply look at the evidence I wrote about.
At the end, all three AI platforms I mentioned agreed with my argument about the Israelis arrested on 9/11 having foreknowledge and the FBI engaging in some form of cover-up. I did this without instructing them to agree with me, only to grade the article and its conclusions.