05/02/23

đŸ˜± Hinton on AI: Society Might Not Be Prepared for What Is Coming

Will Douglas Heaven, reporting in “Geoffrey Hinton tells us why he’s now scared of the tech he helped build”:

Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?

“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” he says. “He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”

In his summary of an interview with Geoffrey Hinton, the AI researcher who won the 2018 Turing Award and recently left his position at Google to speak more freely about the dangers of AI, Heaven provides a balanced view of the topic, typical of MIT Technology Review.

The point above, about bad actors deliberately misusing the technology, is one we ought to take seriously, though: we are already seeing AI used to generate whole spam sites.