
Superintelligent AI would be impossible to control, study claims

Tue 19 Jan 2021

Superintelligence cannot be contained, declare academics

The prospect of superintelligent machines escaping human control and running amok has haunted humanity for decades, ever since Alan Turing famously argued mere machines could conceivably demonstrate human-like intelligence.

In today’s era of AI ascendancy, such doomsday predictions are no longer the stuff of science fiction but a prospect scientists and philosophers are taking increasingly seriously.

AI has shocked and surprised with its ability to outsmart humans at complex games such as chess, Go and Jeopardy!, prompting forecasts that superintelligent AI is only a matter of decades away.

Others claim humanity’s irrelevance is still centuries away.

Whatever your view on the exact time horizon, researchers and philosophers say humanity ought to prepare for superintelligent AI’s arrival by devising strategies for keeping the genie in the bottle.

But now researchers have claimed it may be theoretically impossible for humans to control what Nick Bostrom calls an entity “smarter than the best human brains in practically every field”.

Publishing their findings in the Journal of Artificial Intelligence Research, academics including Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid, say we would have to know that a machine was capable of acting against our commands and contrary to our values before it did so in order to stop it.

To test this safely would require the creation of a containment algorithm that simulated the actions of a superintelligent machine and evaluated its potential harm.

The problem, the researchers say, is that no algorithm could possibly simulate the machine’s behaviour exhaustively or accurately enough to predict the potential consequences of all of its actions.

In other words, if a superintelligent machine capable of causing harm were created, we would necessarily be in the dark about its potential to do so.

“We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computation itself,” the researchers write. “Strict containment requires simulations [that are] theoretically (and practically) impossible.”
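The reasoning follows the same self-referential logic Turing used to prove the halting problem undecidable. The sketch below is purely illustrative and not taken from the paper; the names adversary and candidate_oracle are assumptions. It shows why any claimed harm-checking algorithm can be defeated by a program that simply does the opposite of whatever the checker predicts about it:

```python
# A minimal sketch (assumed names, not from the paper) of the
# diagonalization behind the impossibility result.

def adversary(oracle, source):
    """Do the opposite of whatever the claimed containment oracle
    predicts about this very program run on its own source."""
    if oracle(source, source):   # oracle says: "this run is harmful"
        return "harmless"        # ...so behave harmlessly
    return "HARM"                # oracle says: "safe" -> cause harm

def candidate_oracle(program, program_input):
    """Stand-in for any alleged perfect harm-checker. Whatever fixed
    verdict it returns, adversary contradicts it."""
    return True                  # claims: "harmful"

# The oracle predicted harm, but the adversary behaves harmlessly;
# had it predicted "safe", the adversary would have caused harm.
print(adversary(candidate_oracle, "adversary's own source code"))  # -> harmless
```

Because the adversary inverts the oracle's verdict on its own source code, no algorithm can be both total (always answering) and correct, which is the computability barrier the researchers invoke.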

Via: IEEE


Tags: general ai research