The man widely regarded as the godfather of artificial intelligence has left his position at Google to warn the world about how AI development could impact society.
Dr Geoffrey Hinton, who built a neural network in 2012 along with two of his students at the University of Toronto, announced his resignation from the tech giant this week, telling the New York Times that he now regretted his work.
The neural net pioneer warned that the dangers of AI chatbots are “quite scary”, adding that the technology could become more intelligent than humans and be exploited for nefarious purposes.
“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.”
He said that until last year he believed Google had been a “proper steward” of the technology, but that changed once Microsoft started incorporating a chatbot into its Bing search engine and Google grew concerned about the risk to its search business.
Since the launch of OpenAI’s ChatGPT last November, Google and Microsoft have been locked in an AI arms race that has gripped Silicon Valley.
Microsoft has so far invested $11 billion in OpenAI, with plans to integrate its technology into the Bing search engine and Office apps.
Google, meanwhile, was forced to issue a “code red” to protect its control of the search market upon the chatbot’s launch, pouring huge amounts of resources into AI development and unveiling its own chatbot, Bard.
Dr Hinton says this struggle for the top is accelerating AI development and eroding the ethical safeguards Google had in place to limit the technology’s impact on society.
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning," he said.
"And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."
“Bad Actors”
Dr Hinton told the New York Times that the dangers of AI will come when “bad actors” gain access to the technology and exploit it for “bad things.”
When the BBC asked him to elaborate, he said nation-states could employ the technology in geopolitical conflicts and wars, a prospect he described as a “nightmare scenario”.
"You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals."
It should speak volumes that "The Godfather of AI" chose to quit his job to protest the misuse of AI by "bad actors"... and his job was at @Google
— D.C. Pennington (@DCPenningtonArt) May 1, 2023
The scientist warned that this might eventually lead AI to create sub-goals such as “I need to get more power”, which could drive robots controlled by the technology to turn to violence to achieve their aims.
"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.
"And all these copies can learn separately but share their knowledge instantly. So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."
Current generative AI systems like OpenAI’s ChatGPT ship with restrictions designed to prevent them from being exploited for malicious activity.
But bad actors have already found workarounds to bypass these restrictions, allowing them to weaponise the technology.
True. Scammers are already using AI generated voice calling to defraud family members with 'help, send money' calls.
“It is hard to see how you can prevent the bad actors from using it for bad things.” — Geoffrey Hinton, “godfather of A.I.” #ArtificialIntelligence
— Suhail Ahmad (@SuhailAhmad) May 2, 2023
An investigation by BlackBerry recently revealed that hackers may already be using the chatbot to launch a range of attacks, including phishing campaigns and nation-state cyberattacks.
Researchers discovered that hackers were bypassing OpenAI’s restrictions through Telegram bots that use OpenAI’s API to create malicious Python scripts for malware attacks and to craft convincing phishing emails in seconds.
Responsible Research
Dr Hinton stressed that his resignation was not intended as criticism of Google’s handling of AI, but as a statement against the current pace of the technology’s development.
"I actually want to say some good things about Google. And they're more credible if I don't work for Google,” the scientist said, adding that he believed Google had been “very responsible”.
Google’s chief scientist, Jeff Dean, said in a statement that Google appreciated Hinton’s contributions to the company over the past decade.
“I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well,” Dean said.
“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
Dr Hinton’s announcement comes after IBM CEO Arvind Krishna told Bloomberg that up to 30 per cent of the company’s back-office roles could be replaced by AI and automation within five years.
The CEO said hiring in areas such as human resources would be slowed or suspended, which could result in around 7,800 roles being replaced. IBM has a total global workforce of 260,000.