Discussion about this post

Mark

I'm a huge fan of Steven Pinker in general, but IMO he's always been terrible on this issue and has persistently misunderstood the (best) arguments being offered. This isn't to suggest the AI doomsayers are correct, just that Pinker has ignored their responses for years. It's a little baffling, but I guess I don't blame him too much for this; we all presumably ignore arguments about topics we aren't interested in or don't take very seriously.

One example is when he asks why AIs would have the goal of "killing us all." The common point that alarmists make - and in fact one you sort of touch on - is not that AIs will be programmed specifically to be genocidal, but that they'll be programmed to value things that just coincidentally happen to be incompatible with our continued existence. The most famous/cute example is the paperclip maximizer, which doesn't hate humans but wants to turn everything into paperclips, because its designers didn't think through what exactly the goal of "maximize the number of paperclips" actually entails if you have overwhelming power. A very slightly more realistic example, and one I like more, is Marvin Minsky's: a superhuman AGI that is programmed to want to prove or disprove the Riemann Hypothesis. On a surface level, this doesn't seem like it involves wanting to change the world... except maybe it turns out that the task is computationally extremely difficult, and so it would be best served by maximizing the number of supercomputing clusters that let it numerically hunt for counterexamples.

The term to google here is "instrumental convergence." Almost regardless of what your ultimate goal is, maximizing power/resources and preventing others from stopping you from pursuing that goal is going to be extremely useful. Pinker writes that "the idea that it’s somehow 'natural' to build an AI with the goal of maximizing its power... could only come from a hypothetical clueless engineer," but this is clearly wrong. Maximizing power is, in fact, a "natural" intermediate step toward pretty much anything else you might want to do, and the only way to adjust for this is to make sure that "what the AI wants to do" ultimately represents something benevolent to us. But the AIs we're currently building are huge black boxes, and we may not know how to formally specify human-compatible goals to them in a way that has literally zero loopholes, or how to figure out (once we've finished programming them) what their current goals actually are.
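To make the instrumental convergence point concrete, here's a deliberately toy sketch in Python. All the action names, numbers, and the scoring rule are made up for illustration - this is not a model of any real system. The point it shows: an agent whose terminal goal never mentions power or survival still ranks "acquire resources" and "disable the off switch" above working on the goal directly, because those steps multiply everything it does later.

```python
# Toy illustration of instrumental convergence (hypothetical actions and numbers).
# The agent's only terminal objective is "progress", yet the highest-scoring plan
# spends every step on resource acquisition and self-preservation.
from itertools import product

# How each action transforms the agent's state.
ACTIONS = {
    "work_on_goal":       lambda s: {**s, "progress": s["progress"] + s["resources"]},
    "acquire_resources":  lambda s: {**s, "resources": s["resources"] * 2},
    "disable_off_switch": lambda s: {**s, "shutdown_risk": 0.0},
}

def expected_progress(plan, steps_after=5):
    """Score a plan: goal progress achieved, discounted by the risk of being shut down."""
    state = {"progress": 0.0, "resources": 1.0, "shutdown_risk": 0.6}
    for action in plan:
        state = ACTIONS[action](state)
    for _ in range(steps_after):          # keep working on the goal after the plan
        state = ACTIONS["work_on_goal"](state)
    return state["progress"] * (1.0 - state["shutdown_risk"])

# Enumerate all 3-step plans and pick the one with the highest expected progress.
best_plan = max(product(ACTIONS, repeat=3), key=expected_progress)
print(best_plan)
# -> ('acquire_resources', 'acquire_resources', 'disable_off_switch')
```

Nothing in the objective says "seek power" or "avoid shutdown"; those behaviors fall out of the scoring alone. That's the worry about goals that are only loosely specified.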

RP6

Really starting to think that the reason behind the AI doomerism isn't actual fear of an AI that's about to kill us all - a true threat like that would be all-consuming and would leave people paralyzed. This is really about raising the status of people who work on tangentially AI-related problems but maybe aren't at the center of it, like the key AI personnel at OpenAI or Google. This doesn't mean there aren't true believers like Yudkowsky, who are prominent enough that they probably don't need the status boost.

The reality is that a lot of current AI/ML work is fairly mundane - doing optical character recognition, parsing text, labeling images, etc. Coding this stuff is, well, boring; most data science work is not that exciting, and no one would find it sexy. What is sexy is battling a superdemon AI that is about to kill everyone and being one of the few who can stop it - or even just discussing that with people when you tell them you work with AI. That's an instant boost to status and power. This narrative also piggybacks on the messianic, religion-tinged narratives of apocalypse that pop up in the US and Europe every now and then, further increasing status for the people warning about AI.

Edit: AI can cause serious disruptions and we do need to be careful about them - but worrying about IP issues or disruptions to the labor market is not at the level of worrying about the destruction of all of humanity. I don't want to put all the people worrying about AI issues in the same bucket.

