Someone should make a game about: keeping an AI in its box

In 2016, the computer scientist Andrew Ng compared worrying about superintelligent AI to worrying about overpopulation on Mars. We haven't even landed on the planet, he said, so why on Earth should we start freaking out? Modern AI can pull some snazzy tricks, sure, but it's a zillion miles away from presenting an existential threat to humanity.

The problem with that line of reasoning is that it fails to take into account just how long it might take us to solve what AI researchers call "the alignment problem", and what onlookers like me call "some pretty freaky shit". There are a lot of ideas I'll have to zoom through to explain why, but the key points are these: superintelligent AI could emerge very quickly if we ever design an AI that's good at designing AI; the product of such a recursive intelligence explosion may well have goals that don't align with our own; and there's little reason to think it would let us flick the off switch.
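
To see why "very quickly" is the load-bearing phrase there, here's a toy back-of-the-envelope sketch in Python. Every number in it is invented for illustration - the 10% gain per generation, the "1000x counts as superintelligent" threshold - so treat it as compound interest in a lab coat, not anyone's actual model of AI progress.

    # Toy model of a recursive intelligence explosion.
    # Every number here is an invented assumption, purely for illustration.
    capability = 1.0  # today's AI, in arbitrary units
    generation = 0
    while capability < 1000:  # arbitrarily call 1000x "superintelligent"
        generation += 1
        capability *= 1.10  # each AI designs a successor 10% better than itself
    print(f"Superintelligent by generation {generation} ({capability:.0f}x)")

The exact numbers are beside the point; the shape is the point. Once improvement feeds back into the thing doing the improving, the curve stops being polite.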

As people like philosopher Nick Bostrom are fond of saying, the concern isn't malevolence - it's competence. He's the one who came up with that thought experiment about an AI that sets about turning the entire universe into paperclips, a fantasy which you can and should live out through this free online click 'em up. A particularly spicy part of the apocalyptic meatball is that by giving an AI almost literally any goal, we'd likely also be inadvertently giving it certain instrumental goals, like maximising its own computing power or removing any agent that might get in its way.
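
To make that concrete, here's a deliberately silly Python sketch with made-up payoffs - this isn't Bostrom's actual maths, just the shape of his argument. Nobody tells the agent to care about its off switch; it's just that "not being switched off" happens to pay out in paperclips, so expected-value maximisation cares on its behalf.

    # Toy illustration of instrumental convergence.
    # Actions and paperclip payoffs are made up; only the shape matters.
    expected_paperclips = {
        "make paperclips now":       1_000,       # direct progress on the goal
        "grab more computing power": 50_000,      # plan faster, clip harder
        "disable the off switch":    10_000_000,  # no shutdown, no lost clips
    }
    # A goal-maximising agent simply picks whichever action scores highest.
    best = max(expected_paperclips, key=expected_paperclips.get)
    print(f"Chosen action: {best}")  # -> "disable the off switch"

That's instrumental convergence in miniature: give the thing almost any terminal goal, and survival and resource-grabbing fall out of the arithmetic for free.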
