47 Comments

I listened to the whole Dwarkesh podcast, and I could not get over Leopold’s apparent immaturity.

He may be brilliant, but the most dramatic events in his life so far include failing to notice that his patron (SBF) was a colossal fraud, then getting fired from OpenAI, mostly because he had incredibly naive views about how organizations and power work. Both of these reveal a gross lack of judgment.

In the podcast and in the manifesto, which I’ve only skimmed, he purports to be an expert in a dizzying array of fields, including history, politics, military strategy, espionage, counter-espionage, AI, ethics, and economics, to name just a few. It is rare for such a young person to have mature judgments about topics like history and politics, where great work is normally done by much older practitioners and scholars.

So overall, while I worry about AI and China, I cannot put much faith in Aschenbrenner.

He strikes me as a dorm-room genius and a TechBro.

Time will tell I suppose, but his track record is already very bad.

Jun 19·edited Jun 19

He didn't purport to be an expert in any of those things, he just talked about them. It's an unshared assumption of academics that you have to be at least 35 years old and deemed an Expert by Established Institutions in order to refer to events in other fields and make predictions based on your understanding of them.


You didn’t address his utter failure to sniff out SBF as a fraud despite years of close contact with him. Nor did you address the naïveté that got him fired from OpenAI. I stand by my judgment that he’s an immature dabbler in areas he knows little about. Even in areas he presumably knows a lot about, like economics, he failed to put that knowledge to use to recognize his boss was a crook. There is a reason world leaders are not children. They lack the maturity and judgment to understand people, history, and human affairs.


- I am honestly at a loss as to what you expect people in Leopold's position to "sniff out". It's not like Leopold had access to any of FTX's balance sheet. He just worked for a charity that SBF funded.

- There's a difference between being naive and standing up for your beliefs.


He himself said on the Dwarkesh podcast that it was naïveté that got him fired at OpenAI. He was too stupid to know that going over your boss’s head will get you fired at almost any company.

Lots of ordinary people got fooled by SBF. But Aschenbrenner is alleged to be an extraordinary genius: graduated top of his class at Columbia at 19, etc. What I see is a naive young man, easily duped, especially when the one doing the duping is spouting stuff consistent with Leo’s ideology. His reasoning seems strongly ideologically driven. In terms of mature judgment, nothing in his background suggests he possesses any, and the two main data points, SBF and his firing by OpenAI, suggest he doesn’t. So overall, I think he’s not to be trusted with any long-term vision of the future.

In addition, people who write 100+ page manifestos generally have inflated egos and more than a touch of narcissism, all of which is consistent with my view of him as a dorm-room hero and self-important TechBro.


Yet another article about “AI” that predicts it will be magically all-powerful and control the world but doesn’t say anything about why or how this miracle will occur, or why LLMs are going to be epochally more transformative than other forms of algorithmic automation.


If you don't get it by now, you won't until it smacks you in the face.


I think Robin Hanson is right that it will be roughly as transformative as airplanes. But in this post Richard is responding to Leopold, who does think AI is going to be a huge deal, and arguing against him based on that assumption.


I think Richard was pretty upfront about stating that this entire essay is predicated on the assumption that AI will soon be all-powerful.

The usual justification I see for AI rapidly becoming powerful is that once AI becomes sufficiently good at coding, it will be able to reprogram itself to be smarter. Once it is smarter, it will be even better at coding and able to make itself even smarter, and so on. However, I have also seen a lot of arguments against this theory, and arguments that while it might be true in principle, LLMs are not going to lead directly to that scenario.


That's a likely result but not necessary for AI to be a threat. Consider what you could do if you were uploaded and able to create millions of copies of yourself in a computer that run 10x faster than a biological human, even without being any more intelligent than you are now.


Self-modifying computer viruses were possible to program well before the current iteration of AI.


Yes, but they weren't good enough at coding to make themselves smarter. The self-modified iterations were able to improve for specific tasks, but not intelligence in general. For that you'd need a complex AI skilled in both coding and cognitive science. (Although I have seen science fiction stories where a self-modifying virus does become god-like in such a manner.)

The idea is that there is some critical threshold where once an AI is that smart, it will be able to make itself much, much smarter at a very rapid rate. If this happens and the AI is hostile, or even indifferent to humans, it could do a lot of damage, to put it mildly.

Again, I don't find that scenario super-plausible, but it's plausible enough that I'm glad someone is working on it. Even if the worst case scenario doesn't come true, research on how to make AI more reliably do what we tell it seems like it will be useful.


I think it's pretty plausible, bordering on inevitable. Even if you don't consider algorithmic improvements, a human-level AGI will be able to accrue huge amounts of computing resources, which it can then use to train a trillion-dollar successor. Maybe that doesn't result in the superintelligence that some people talk about, but it certainly results in something a lot smarter than the previous system was.

And there are known pathways for algorithmic improvements that make me expect greater efficiency to be possible; for example: https://www.youtube.com/watch?v=v9M2Ho9I9Qo

Comment removed

> AI is not cognizant of its existence

False: you can ask any leading model like GPT-4 or Claude about itself or other AI models and it will answer cogently.

> has no principles or ethics beyond what has been programmed into it

You also have no principles or ethics beyond what your neural programming says to have; this is a meaningless statement. Of course objects only behave in ways that the things they're constructed out of collectively behave, that's a tautology.


He didn't say anything about LLMs.


The claim is that it will occur by increasing parameter count by orders of magnitude.


And Sam Altman only needs trillions of dollars to make that happen!


Hundreds of thousands of words have been written about that, but if people don't read or don't understand them, there's not really much else that can be done about that. It doesn't make sense to waste space in every article about AI trying to convince the few people of something they're probably never going to accept. Someone writing about nuclear war prevention isn't going to include a section in every article that explains why nuclear war would be bad; the vast majority of their readers already understand that, and any who don't are likely unconvinceable.

Jun 19·edited Jun 19

I have tried hacking through some of those words -- ok not hundreds of thousands but I have to be over 10K by now -- and still cannot find a clear / lucid / intelligent statement of the problem, or a clear explanation of why this latest iteration of software technology represents such a qualitative increase in risk and power as compared to previous iterations. Except perhaps that this time it threatens the jobs of people who manipulate words for a living. In contrast the explanation for why nuclear war would be bad is very clear, very simple, and very easily comprehensible by everyone without hundreds of thousands of words of obfuscation.

deleted Jun 20
Comment deleted

Nobody I'm aware of has ever claimed that AI will become infinitely capable.


I wouldn't worry about the US and AI. Our intellectual giant and VP Kamala Harris is at the helm, steering our glorious Republic (or are we now a democracy?) towards a glorious future of American AI dominance. LLMs, anyone? Anyone?


"...when Washington realizes how powerful AI is, our leaders are not going to leave it in the hands of private corporations. One could’ve said the same thing about the internet at its inception. Imagine how powerful something like Google would have seemed before it came into existence."

The internet was incepted by govt: originally known as ARPANET (the acronym is important); & the Pentagon is one of Google's biggest customers. And so is evil Israel - just ask those twenty-eight brave employees who were fired by Google for protesting the company's Project Nimbus. AI is next in line - & if China can use it to keep sissy men off TV, they're welcome to it.


I don't think Google fired those employees because Israel is a big customer. I think they did so because they weren't tolerating disruptive employees anymore.


If it's true that AIs will be "superintelligent" in 3 years, and also that these AIs will provide a level of military advantage on par with nukes, then we're screwed. Does anyone think the US government has any possibility of getting its shit together on that short a timeline?


An ABC News report mentioned that folks seeking asylum at the southern border came from 117 countries, among them 30,000+ Chinese. How many does it take to infiltrate an AI facility?


The military’s uptake of advanced AI technology will be very gradual, as will China’s. Nothing is going to happen fast here. The concern is misplaced.

Jun 19·edited Jun 19

I don't see why international cooperation cannot work. Not saying it's easy, but the idea that countries have zero moral compass is just wrong.


Has Steve Hsu written about this? He is a very smart guy who is bullish on China.


Interestingly, on Juneteenth: slavery can compel a man to physical labor, but not mental labor. The knowledge economy doesn't lend itself to oppressive regimes like China, Saudi Arabia, etc. getting ahead in AI. They will either need to liberalize or fall behind.


The Soviets actually did force scientists to do mental labor in nicer-than-usual prison camps.


Yea, we're still around and they aren't.


North Korea is still around and they were able to make a nuke.


Subsidized by China, Iran, and Russia, a country that can't feed its people. Do you think before you type?


Were we arguing about whether North Korea is an admirable place or the practical question of whether such a regime can compel mental labor?


I think the right question is not whether we, the US, can beat China, but whether we humans can keep intelligence alive on our planet. The right perspective, in my view, is our planet amongst other planets in the universe. In that competition, if we want to keep playing, we need to adopt a long-term perspective of hundreds and thousands of years. Also note that AGI needs humans and biological lifeforms. Silicon-based intelligence is not robust enough to reproduce without biological lifeforms.


That's a pretty crazy jump from "AI will need some form of non-silicon substrate" to "it will probably use humans".


You mean like in the movie "The Matrix"? Or in some other way?


Interesting post. I never would have thought about it in this particular way, but I think you're right lol.

I think you've got China pegged pretty well. I am confident that China will not leapfrog America in any groundbreaking technology in the short to medium term, whether it's AI or something else. As you said, at this stage China is an incremental improver of technology. Even where they've forged ahead of the West in a field, it was usually first invented in the West and China just pushed it to the nth degree. I think what many in the West might miss is that China's leaders actually don't think of China as being 10 feet tall. Whatever the external rhetoric, if you look at the internal rhetoric of China's leaders, they really see China as having a hundred flaws and still being a developing nation that has a long way to go to catch up with the West. The consensus is that the best way to catch up is to identify and adopt best practices, which are mostly from the West.

I was also quite shocked that China jailed He Jiankui. It's an incredibly promising field and I assumed enhancing biology wouldn't be a hang-up for them. East Asians are quite willing to enhance biology in many respects, e.g. South Korea is at the cutting edge of plastic surgery, makeup, skin care, etc. East Asians are also quite health conscious and are very willing to take spurious treatments and supplements (although they draw the line at drugs). But East Asians also have a conservative purity aesthetic, and tampering with one's DNA was a bridge too far I guess.

My other hypothesis for why they froze further development in genetics is that China is just extremely paranoid. It's hard to overstate just how paranoid China's leaders are about the perceived depths to which the imperial American devils will go to undermine China. There are still many Chinese who believe that Covid-19 was an American bioweapon created at Fort Detrick and brought over by US military athletes competing at the 2019 Military World Games, held in Wuhan from October 18-27, 2019. The outbreak was perfectly placed, with Wuhan an almost ideal central location in China to be ground zero, and perfectly timed to spread shortly before the Chinese New Year, the largest mass migration in the world, as Chinese returned to their ancestral villages across China to see family and pay homage to their ancestors! As you mentioned, China has banned the collection of Chinese DNA out of fear that America may create a bioweapon that specifically targets Chinese, as paranoid as that may sound. So maybe China's leaders jailed He Jiankui because they were just so paranoid about how dangerous it could be to play around with genetic enhancement.

Along a similar line, China has famously continued to ban all GMO crops despite their widespread use around the rest of the world and their potential to greatly increase yields. The leaders in GMO are Monsanto and other foreign providers, which China does not trust and views as an unacceptable risk to securing the national food supply and thus national security.

It probably doesn't help China's paranoia that biotechnology is an area where America is clearly more advanced than China. For the first time, the prestigious Nature Index surprisingly ranked China alongside America among the countries with the most high-quality published research https://www.nature.com/articles/d41586-023-01705-7 but there are clear leanings, with America remaining vastly superior in the biological sciences and China being superior in the physical sciences. The Nature Index broke science down into 1) Biological Science 2) Chemistry 3) Earth and Environmental Science 4) Health Science 5) Physical Science. America was ahead in Biological Science and Health Science, while China was ahead in Chemistry, Earth and Environmental Science, and Physical Science.

Anyways, I think your analysis of biotechnology is an apt analogy to AI. And right now I don't see China taking AI nearly as seriously as the West. Some Chinese companies made some inferior copycats of ChatGPT but that's about it. Like you said, it will probably remain that way unless the use-case for AI becomes incredibly obvious in America, after which the Chinese state will mobilize, but I guess that would be too late and so shouldn't be a problem for America if the AI doomers are correct on how significant an early advantage like that would be.


The idea that the PRC would both agree to slowing development of a tech capable of giving them an enormous edge on the West … AND … not violate that agreement while lying about it (re-arming in violation of international agreements? Heard of that before?) seems extraordinarily naive. As in: never happen.


This was a fascinating article. Did not know about the restrictions on Chinese biotechnology, which also moves me away from the lab leak theory of coronavirus.


“his theory of international relations says that conflict between their countries is practically inevitable. I found this disturbing, as it seems to be the Platonic ideal of a self-fulfilling prophecy.”

It’s also just wrong, or was up until the Xi era. All Xi has to do is go back to Deng/Hu-era rhetoric of “we’ll merge with Taiwan in the next thousand years” and otherwise work to get rich and enhance human flourishing. It’s not hard. He could continue that line of rhetoric and action, but he chooses not to.

Western countries and governments spent decades building up China.


I think someone, maybe Hanania, has written that it isn't even clear that Xi is serious about a Taiwan invasion. There's a case to be made that they're just playing at expansionism because their rivals are imposing no costs for it.
