Geoffrey Hinton, the 75-year-old computer scientist and cognitive psychologist, has announced his resignation from Google, saying that he now regrets his work on artificial intelligence (AI). In an interview with the BBC, Hinton spoke about the dangers of AI chatbots and warned that they could soon hold more information than a human brain.
Hinton’s pioneering research on neural networks and deep learning paved the way for current AI systems such as ChatGPT. Neural networks are computing systems loosely modelled on the human brain in the way they learn and process information. They allow AI to learn from experience, much as a person does, a process known as deep learning.
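To give a rough sense of what "learning from experience" means in practice, here is a minimal sketch, not Hinton's actual work or any production system, just an illustrative toy that assumes Python and NumPy: a tiny neural network repeatedly adjusts its internal weights to reduce its error on a handful of example inputs (the classic XOR problem).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR problem (inputs -> expected outputs).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A small network: one hidden layer of 4 units, one output unit.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the prediction error.
    err = pred - y
    grad_out = err * pred * (1 - pred)
    grad_hidden = (grad_out @ W2.T) * h * (1 - h)
    W2 -= learning_rate * (h.T @ grad_out)
    b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * (X.T @ grad_hidden)
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

# After training, the predictions typically approach [[0], [1], [1], [0]].
print(np.round(pred, 2))
```

Large systems such as ChatGPT rest on the same basic idea of adjusting weights from example data, but with billions of parameters and vastly more training material.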
Hinton warned that chatbots could soon become more intelligent than humans, noting that GPT-4 already eclipses a person in the amount of general knowledge it holds. He also warned of the danger of “bad actors” using AI for “bad things”, such as giving robots the ability to create their own sub-goals. This could lead to sub-goals like “I need to get more power”, which could be catastrophic.
Matt Clifford, chairman of the UK’s Advanced Research and Invention Agency, said Hinton’s announcement underlines the rate at which AI capabilities are accelerating: while there is an enormous upside to this technology, it is essential that the world invests heavily and urgently in AI safety and control.
Hinton’s concerns about AI are shared by a growing number of experts. According to one report, AI could affect 300 million jobs, and it is unclear whether the world is prepared for the coming AI storm. At the same time, AI could bring numerous benefits, such as making medical diagnoses more accurate and helping to solve complex problems in fields such as engineering and physics.
In conclusion, Hinton’s resignation and the concerns he has raised underline the need for caution and regulation in the development of AI. While the technology has enormous potential, it is essential that we invest in AI safety and control to prevent it from being used for malicious purposes.