Discussion about this post

Leo Abstract

Does anyone else feel a strong desire to follow Yudkowsky around with a sign that says "Your optimism disgusts me!"?

Simone

Honestly, it feels to me like Robin heavily strawmanned Eliezer's arguments here. Eliezer's points are very general; you don't need to postulate a single research AI that sneakily improves itself with malicious intent right under the researchers' noses for those problems to arise. If at some point we decided "ok, now the AIs are smart enough, we'll stop," then we'd know that can't happen, but odds are we'll just keep pushing them up and up until they actually might be smart enough to get away with FOOMing (unless the physical laws of the universe for some reason make that impossible; and if they do, it's no thanks to us that we lucked out).
