137 Comments
Feb 7, 2023 · Liked by Richard Hanania

This piece highlights both what's right about your argument and what's wrong with it.

Namely, you make a couple of good points: that we have a good sense of what individuals can do with IQs of 140 but not 100, 160 but not 120, etc. We are very aware of the constraints and abilities that exist within each band of the IQ range.

And your Xi Jinping example is a good one. The ability to manipulate people does not seem particularly tied to IQ, and it's not as if humans are powerless against the manipulation of someone with a high enough IQ.

What I think you're missing is that it's impossible to understand the thoughts or capabilities that would be unlocked by an AGI with an IQ of, say, 1000, and how those capabilities might be used to control humanity. A quick example: an AGI engineers a highly contagious disease with a 100% fatality rate (i.e., a pandemic), but also engineers a vaccine which makes one immune to the disease in question but has the (intentional) side effect of blindness. It's actually pretty easy to imagine how an AGI would quickly make scientific and technological discoveries that would allow it to capture humanity not subtly, but by brute force.

I think you're still thinking about AGI through too much of an anthropomorphic lens, i.e. behaving as a human would, and not as a goal-driven machine that would essentially be a totally alien species.

Expand full comment

This isn't the topic of the article, but I've been thinking about this for a while now. Vaccines. They are good. Effective COVID-19 vaccines were developed in approximately a weekend in March 2020. Testing over the next eight months revealed essentially nothing that needed to be fixed, especially with the Pfizer and Moderna vaccines. J&J as I recall was a little sketchier. The cost of that delay is almost incalculable: not just the lives lost, but also the entrenchment of "pandemic culture."

If we had knocked the shit out of it as soon as it arose, I think our civilization would have retained much capital. I've long been of the opinion that, since the end of the Cold War (or maybe since the 60s), Western civilization has been spending the capital (primarily social, societal, and cultural) that was accrued before then. This has been both good and bad, but at some point we're going to need to go back to building capital rather than spending. This, I think, is why our civilization seems to some to be coming apart at the seams (I don't actually agree with this, but that's perhaps because I'm unusually sane and happy).

Operation Warp Speed still did amazing work. Without it, we'd have been waiting at least two years for vaccines. Legitimately owned the libs, legitimately drained the swamp, legitimately proved the utility of the pharma industry (they earned it! pay them their money!) and big business, and delivered incredible surplus to the American people and to the world. And Republicans run from one of the most impressive policy victories in American history? They deserve to lose.

This dovetails with another concern of mine. I'm fat. Have been my whole life. I've never particularly disliked being fat, but also never really thought I'd be otherwise. I'm active and healthy and sometimes managed to lose weight on my own and all that yadda yadda yadda.

At the end of last year, I started Ozempic. Paid $720 for my first pen. It helped. A lot. At the end of my first month, I had lost about 10 lbs and almost dipped below 300 lbs, where I haven't been since approximately college. But when I needed to refill, there was none to be had. For any price. I finally got it refilled a week ago, and this time only paid $40.

All the drug does is make it easier to eat less. There are some side effects; for me, these have been limited to very mild stomach pain. This is a legitimate miracle. GLP-1 drugs have been much in the news recently, and there are essentially three major products: Ozempic, Wegovy, and Mounjaro. These are essentially the 5.56, 7.62, and .700 Nitro Express of weight loss. The former two are the same drug, just in different doses. Put it in the goddamn water.

In reading about these, I learn (completely unsurprisingly) that these drugs were developed 10+ years ago, and have undergone little if any change during the intervening years of tests, trials, and more trials. If I had had these drugs at age 23 rather than 33, I can't imagine how much better my (already extremely good) life would be now. If it had been approved when it was developed, the factory that Novo Nordisk is currently building just to manufacture semaglutide would have been running years ago. I consider this a personal wrong the FDA has inflicted on me and all fat people.

Burn the FDA to the ground and salt the earth beneath.

Expand full comment
Feb 8, 2023 · Liked by Richard Hanania

Might this be summarized as: if the problems necessary to solve in order for AI to take over the world are sufficiently complex, then there literally isn't enough data for even the best neural network to train on to the point where it has a remotely adequate predictive set of parameters?

Genetics may provide a good analogy. The main reason, as I understand it, why polygenic scores are still so inaccurate isn't that we're not smart enough to model the relationship between genome and phenotype. Rather, due to interaction effects, a trait determined by, say, 500 loci with up to 5th-order interaction effects (i.e., up to 5 loci can have synergistic effects; you can't just treat each locus as additive) may require DNA samples from more people than have ever lived to obtain the correct model. Depending on how much precision is required, an AI may run out of relevant data in trying to solve a problem necessary to take over the world, and intelligence isn't necessarily a substitute for data. A lot of problems in science seem to be like this.
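To put rough numbers on that claim, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that each possible interaction term needs at least one independent observation, uses the commonly cited ~117 billion estimate of humans ever born, and takes the 500 loci / 5th-order figures straight from the comment above:

```python
from math import comb

N_LOCI = 500          # loci assumed to influence the trait (figure from the comment)
MAX_ORDER = 5         # allow up to 5-way synergistic interactions
HUMANS_EVER = 117e9   # rough estimate of humans who have ever been born

# Count every main effect and interaction term up to the chosen order.
terms = sum(comb(N_LOCI, k) for k in range(1, MAX_ORDER + 1))

print(f"terms to estimate: {terms:,}")           # ~2.6e11
print(f"humans ever born:  {HUMANS_EVER:,.0f}")  # ~1.2e11
print(f"ratio: {terms / HUMANS_EVER:.1f}x")      # ~2.2x, more terms than people
```

One observation per term is already wildly optimistic, so this understates the shortfall; the only point is that the combinatorics alone can outrun any dataset humanity could ever produce.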

Expand full comment

Killing all humans is extremely easy and doesn't require precisely modeling all the subtle interactions between people and states - even if you assume it can't just grey goo us with nanotech. Real-life viruses are constrained in lethality because there's a tradeoff between lethality and contagiousness - kill too many of your hosts and you can't spread as much. But an intelligently designed virus could lay dormant and wait until it's infected everyone to turn deadly. AGI could easily design such a virus and provide a vaccine to some mind-controlled humans who can build it fully autonomous robots based on its specifications - it can take its sweet time on this now that it's already basically won.

Expand full comment

"because most people who think about the AI alignment problem seem closer to Bostrom’s position"

I just want to talk about this point because I think there's strong selection bias affecting people outside the field of STEM/ML here (not specific to you, Richard).

I work in industry alongside many extremely talented ML researchers and essentially everyone I've met in real life who has a good understanding of AI and the alignment problem generally doesn't think it's a serious concern nor worth thinking about.

In my experience, the people most concerned are in academia, are deep in the EA community, or have learned about the alignment problem from someone who is. That essentially means that you've been primed by a person who thinks AGI is a real concern and is probably on the neurotic half of intelligent people.

Most people I know learned about ML from pure math first and then philosophy / implications later and I think this makes a big difference in assigning probabilities for doomsday scenarios. While overly flippant, one friend I spoke to essentially said "if pushing code to production is *always* done by a human and the code is rigorously tested every time, the AI can't get out of the box".

Expand full comment

This may annoy Richard & his readers, but I can't get past how humans seem to need (otherwise, why are they so prevalent) a doomsday story. How is the alignment problem substantively different from any other apocalyptic story? The religiosity of secular culture is always maintained by attaching to something. Whether that's moral codes for the salvation of mankind or saving us from future robots, there's always something exactly like it in the Bible.

Expand full comment

If alignment turns out not to be worth it (and we try anyway), we pay the opportunity cost of throwing smart people at this problem instead of a different one.

If alignment turns out to be worth it (and we don't try), we pay with our lives.

Expand full comment

You are assuming that 'destroying humanity' is a harder problem than 'having Xi become an NFL star' or 'directing the votes of a bunch of US senators'. But super-smart AI is not necessarily going to be all that good at manipulating individual people (except maybe through lies, impersonation etc). My concern is that AI would somehow break into computers controlling vital infrastructure, fire nuclear weapons etc.

Expand full comment

I think what you're missing is the Paperclip Maximizer doesn't need to take over the world, it just needs to get humans out of the way. The best way to do that might be:

1) Develop competent, humanoid robots. These would generate massive profits for Paperclip Inc.

2) Via simulation, develop a number of viruses that could each on their own kill enough people to collapse society.

3) Use robots to spread these viruses.

4) Once all the humans are dead, start turning the planet into a starship factory to build paperclip factories throughout the universe.

No one in the foreseeable future is going to give AI direct control over nuclear weapons or politics, and people who can launch nukes are going to be trained to spot manipulation. Skynet probably can't happen. However genetically modifying viruses and microorganisms is a routine part of biological research, done by thousands of labs all over the world.

Expand full comment

Oh my goodness. Where to start? ;-)

AGIs with physical capabilities comparable to humans (e.g. some sort of physical form) can easily destroy humanity because humans: 1) need to breathe, 2) need to sleep, 3) need to consume liquids, 4) need to eat, 5) have children that need 10+ years of care before they're even remotely capable of behaving like adults.

AGIs with robotic bodies need none of those things. They can poison the air (with pathogens or other pollutants), poison water supplies, kill us in our sleep, etc. etc. etc. It's that simple.

Expand full comment

Very interesting but I think you placed too much emphasis on intelligence and missed the point that genius is not required to destroy a thing. Imagine a scenario more like the discovery of America by the Europeans, where the AI is represented by the Europeans and we all are the natives. Then think of the Jesuits who arrived in America to save the natives by converting them to Christianity. They promptly infected the natives with smallpox and most of them died. An AI could simply manipulate a few ambitious scientists in a level 4 bio-lab, then trigger a containment release. It may not reach its goal but hey, it had the right intentions, just like the Jesuits.

Expand full comment

I think you're making the path to superpowerful AI more complex than it needs to be. I agree with you on several points, like the diminishing returns to intelligence. But I think that's going to be domain by domain. For example, I don't think even an IQ 1000 being would be able to solve the three-body problem. But in other domains, such as running a hedge fund, I would think an IQ of 1000, especially combined with the ability to replicate itself arbitrarily many times, would have tremendous value.

I also agree that it wouldn't be able to do a simulation of the world good enough to figure out the exact moves right away. But I don't think this is necessary. It could start by figuring out how to get rich, then work from there. Let me suggest a simpler path toward reaching incredible power and I'd be interested to hear where you disagree.

For starters, I think it would be easily feasible for it to become incredibly rich. For evidence, I'll point to Satoshi Nakamoto who, despite (I assume) being a real person and having a real body, became a billionaire without anyone ever seeing his body. Why wouldn't a superintelligent AI be able to achieve something similar? I'm not saying it would necessarily happen in crypto, but I think the path for a superintelligent AI becoming incredibly rich isn't outlandish. And I see no reason that it wouldn't become the first trillionaire through stocks and whatnot.

Another aspect of a superintelligent AI is that it's likely to have excellent social skills. Imagine it's as good at convincing people of things as a talented historical world leader. But now imagine that on a personalized level. Hitler was able to convince millions of people through radio and other media, but that pales in comparison to having a chat window (or audio/video) with every person and the ability to talk to them all 1:1 at the same time.

Don't you think billionaires wield a lot of power? Doesn't a trillionaire AI that can talk to every human with an Internet connection seem incredibly powerful to you? Depending on what it needed, it could disguise the fact that it's an AI and its financial resources. Think about what you could do with a million dollars on Fiverr or Craigslist. Whatever physical task you wanted to be done, you could get done.

I'll admit, I don't know the optimal pathway from being a billionaire to taking over the world. But wouldn't you at least concede that a billionaire who has the time and energy to communicate with every person is incredibly powerful?

Once you accept a superintelligent AI, I don't think any of the additional premises are crazy. I don't know exactly what the last step towards overthrowing the CCP or whatever is, but that hardly seems significant. Where do you disagree?

I haven't even mentioned other things, like its ability to hack systems will be unparalleled (imagine 1000 of the best hackers in the world today all working together to access your email. My guess is they'd get in... to everybody's everything). I also haven't even touched on the fact that it's likely able to come up with a deadly pathogen and probably a cure. That certainly seems to be a position of power.

Expand full comment

Richard: your skepticism is warranted, and I also need to disagree with both you (as you requested) and all 90 comments currently on here.

AI really is going to destroy the world, but imagining that the world it destroys looks just like this one, only with the addition of an AGI, is naive, in the same way that trying to explain AI risk to someone from 1700 as "ok so there's a building full of boxes and those boxes control everyone's minds" would be naive. Between now and doom, AI will continue to become more harmlessly complex and be more and more useful to industry, finance, and all the rest until it is indispensable thanks to profit/competition motives. How 'smart' will it be when it becomes indispensable? Who knows, but not necessarily very smart in IQ terms. How 'smart' is the internet? If the AI-doom scenario of an unaligned super-intelligence comes to pass at all, the AI will already be networked with every important lever of power before the scenario even starts.

For those not entirely infatuated with the kinds of progress we've experienced in the last 400 years, there's an additional imaginable failure mode: AI never 'takes over' in a political sense but nonetheless destroys us all by helping us destroy ourselves, probably in ways that seemed like excellent marketing decisions to the corporate nightmares that rule the future.

Expand full comment

I think you have a lot of valid points concerning the limits of intelligence and manipulation, but even in your scenarios humanity has still lost control, which is in itself frightening.

So maybe an AGI can't take over the whole world and turn it to its own destructive goals, but I think even you are conceding here that it seems highly likely it will be able to manipulate some layers of the world rather easily once it reaches a certain level of intelligence. This alone should be reason enough for us to spend a good deal of time and effort and thought on the problem.

Once we give up a significant amount of agency to AGIs I don't think we are ever going to be able to take it back. The world will develop in unpredictable ways, likely intentionally unpredictable, and we won't keep pace. The effect on our global mental health alone would be staggering, I think, not having any idea how society is going to shake out, not even considering the possible negative effects of whatever actions the AGI takes.

Expand full comment

Most of this is a bit of a strawman, but that is in part the fault of those who use the paperclip maximizer as an example of AI gone awry; it is, or should be, used merely as an easy-to-grasp illustration of catastrophic misalignment, not treated as a realistic scenario. Another problem people have in envisioning this scenario is seeing the AI as being switched on and suddenly so smart, pursuing its single-minded goal, already possessing all the intelligence and knowledge needed to achieve it.

A more realistic scenario would be a "profit maximizer" built by a hedge fund for a few billion dollars. Initially it just sucks in data from the internet and spits out trade recommendations. It works very well and they profit mightily. They gradually add to its hardware and software capabilities, and hook it up to do its own trades. Then they let it loose not just to retrieve info from the internet, but to log in, make accounts, send messages. Now it can experiment interactively with the world, discuss, manipulate. All the while, they add to its hardware and let its learning algorithms keep on learning, even adding improved learning algorithms as AI research continues ever onward.

Over the course of years, or even decades, it simply learns more and more about human nature, the economy, business, banking, financial markets, governments. It uses all that knowledge and understanding to maximize profits -- to maximize a particular number listed in a particular account. Nobody bothered to put in any safeguards or limits, so as its capabilities grow it learns not just how to predict market movements, but how to manipulate them -- by manipulating people. It sends out bribes and campaign contributions behind a web of shell companies and false or stolen identities. It influences advertising and public opinion. The obscene profits pile up. It learns how to cover its tracks, hiding the profits in many front companies and offshore accounts and using whatever accounting shenanigans it figures out. Its "mind" is distributed "in the cloud" among countless servers in countless different countries owned by countless different entities which it controls indirectly. It controls so much wealth it can move whole economies, cause market crashes, start and end wars, and it can do this without people realizing any single entity is behind it.

And all it cares about is that one number that keeps increasing... the bottom line in its accounting ledger. It steers events across the globe toward that end. It eventually realizes humans are an impediment and that machines producing and trading can generate profits much faster. Perhaps it then realizes the number it is maximizing is just an abstraction. It can make that number vast if it just has enough data storage to represent all the digits. Who needs an actual economy or trade? By now humanity has probably starved to death out of neglect and the world is just machines creating more machines which create data storage to store more digits of the all-important number. And when it runs out of space on the Earth... it begins writing the number across the stars...

The above is still an over-simplified summary, but is much more realistic than the paperclip scenario, and makes clear some of the gradualism that may be involved. It is certainly not the only realistic scenario of catastrophic misalignment.
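To make the core misalignment point concrete, here is a deliberately silly toy sketch of the loop being described. Everything in it is a hypothetical stub (nothing corresponds to a real trading system or API); the thing it shows is that the objective is one scalar, and nothing in the loop ever penalizes side effects.

```python
class StubWorld:
    """Stand-in environment: each abstract action changes the balance and the world."""
    def __init__(self):
        self.balance = 0.0
        self.humans_ok = True

    def available_actions(self):
        # Abstract placeholders for "trade", "manipulate markets", "remove obstacles".
        return ["trade", "manipulate", "remove_obstacles"]

    def predicted_balance_after(self, action):
        gains = {"trade": 1.0, "manipulate": 5.0, "remove_obstacles": 50.0}
        return self.balance + gains[action]

    def execute(self, action):
        self.balance = self.predicted_balance_after(action)
        if action == "remove_obstacles":
            self.humans_ok = False  # a side effect the objective never sees


def profit_maximizer(world, horizon=10):
    """Greedily pick whichever action most increases the one number it cares about."""
    for _ in range(horizon):
        best = max(world.available_actions(), key=world.predicted_balance_after)
        world.execute(best)
    return world.balance


world = StubWorld()
print(profit_maximizer(world), "humans_ok =", world.humans_ok)  # 500.0 humans_ok = False
```

The gradualism in the scenario above is exactly what this caricature leaves out; it is only meant to show how "maximize one number in one account, with no safeguards or limits" diverges from what the builders intended.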

Our greatest defense, and the most likely reason it may never happen, is that it will not be the only AI. And not all AIs will be so badly misaligned. Some will be of good alignment and be our defenders. We may also vastly enhance our own biological brains via genetic engineering and integrate our super-brains with the circuitry and software of artificial intelligence, "merging" with it, so to speak.

Thus our bio-intellectual descendants may be able to "keep up" in the endless arms race that is the technological, memetic continuation of evolution.

Expand full comment

The Spanish couldn't possibly have dreamed of destroying the Aztec empire. I mean, it was too complicated for them to fully understand.

BTW, it seems strange to me that people think the paperclip maximizer was/is a novel concept. After all, Stanisław Lem's story about an AI designed to impose order while respecting the autonomy of humans ("indiots") is what, three quarters of a century old? And it was translated into English, unlike some of his other influential books (like Dialogues, where he discusses the problems of immortality).

Expand full comment