Concerns about the risks of artificial intelligence (AI) have been mounting recently. Computer scientist Geoffrey Hinton, often called the “godfather of AI”, left his role at Google to raise awareness about what he sees as the “existential threat” posed by AI. The Center for AI Safety also issued an open statement, signed by Hinton and many others, warning that advanced AI could potentially destroy humanity and calling for global efforts to mitigate this risk.
This surge in concern seems driven by the rapid advancement of AI-powered chatbots like ChatGPT, as well as the race to build ever more powerful AI systems. The fear is that the tech industry is pushing the boundaries of AI capabilities without weighing the consequences. That can certainly sound alarming.
However, the warnings about AI’s potential to wipe out humans are also rather vague. When we examine the specific scenarios presented, it becomes clear that these fears may not be well-founded. Instead, many experts argue that we should focus on the immediate risks posed by existing AI systems, rather than worrying about distant doomsday scenarios.
Those who discuss existential risks typically envision a future in which artificial general intelligence (AGI) surpasses human intelligence. In this scenario, advanced AI systems are granted increasing autonomy and access to critical systems, such as the power grid, financial markets, or weapons used in warfare. At that point, these AI systems could go rogue or resist human control.
However, it remains uncertain whether AI systems will ever reach the level of super-intelligence needed to out-think humans and pose a genuine existential threat.