Will Douglas Heaven reporting "Geoffrey Hinton tells us why he's now scared of the tech he helped build":
Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?
"Don't think for a moment that Putin wouldn't make hyper-intelligent robots with the goal of killing Ukrainians," he says. "He wouldn't hesitate. And if you want them to be good at it, you don't want to micromanage them; you want them to figure out how to do it."
In his summary of an interview with Geoffrey Hinton, the AI researcher who won the Turing Award in 2018 and recently left his position at Google to speak more freely about the dangers of AI, Heaven provides a balanced view of the topic, typical of MIT Technology Review.
The point above, about bad actors willfully misusing technology, is one we ought to take seriously, though: we are already seeing AI being used to generate whole spam sites.