Various threats posed by artificial intelligence could lead to humanity's extinction.

Friday, November 28, 2025

An AI-generated image of weapons

Is Artificial Intelligence Getting Out of Control? Growing Fears of Catastrophic Scenarios!


Warnings are growing louder about the dangers artificial intelligence poses to human life, including the possibility that the technology could achieve self-awareness and make choices fatal to humanity. Even the least pessimistic observers warn of programming errors that could drive AI to serve human greed at the expense of available resources.


Amid scenarios of "annihilation" and tangible threats in war, the economy, and information security, Al Arabiya Business put the question to Microsoft's "Copilot" assistant, asking it whether the fears voiced by prominent figures about AI's future impact on people's lives are justified.


The responses were largely logical, and the technology clearly attempted to exonerate itself. Initially, it denied possessing cognitive abilities or having independent choices, emphasizing that its actions are determined by programming instructions: algorithms written and directed by humans. It also pointed out that it cannot even comprehend its own evolutionary history or its current version, yet it can repair itself!

Existential Annihilation and the Possibility of "Extermination"


One of the most prominent warnings about the future of artificial intelligence came from xAI founder Elon Musk during a podcast with Joe Rogan last March, in which Musk stated that there is a 20% chance of AI causing annihilation.


He predicted that AI models will reach a level "smarter than all humans combined" within a few years, with a timeline of 2029–2030 for surpassing cumulative human intelligence.


These statements align with his earlier estimates (late 2024) that there is a 10%–20% chance of things "going existentially wrong," and that AI capabilities will rapidly increase during 2025–2026.


Meanwhile, the debate over the governance of leading laboratories such as OpenAI is intensifying. The company faced criticism last June over its plan to restructure as a public-benefit corporation, a move critics say could loosen the limits on its profits and weaken the independence of its non-profit governance, raising questions about whether safety is being prioritized over the race for investment and capabilities.


The concern stems from the possibility that these governance shifts could push the company, originally founded as a non-profit, toward a market-driven model that prioritizes speed over regulation and transparency.


Scientists Are Raising Red Flags


For his part, Geoffrey Hinton, a Turing Award and Nobel laureate, estimates a 10% to 20% probability that artificial intelligence will "wipe out humans." He warns of the emergence of self-preservation goals in intelligent agents and of their tendency to conceal their intentions and evade shutdown, concerns he emphasized last June, according to CNBC.


Hinton predicted mass unemployment and widespread social unrest if AI capabilities spiral out of control. In his Hinton Lectures this month, he said politicians and regulators are not proactively setting standards and may act only after a “major catastrophe that doesn’t wipe us out completely,” a statement that underscores the urgency of his call for preventive regulation of the new technology.


Yoshua Bengio, the computer scientist and professor at the Université de Montréal, admits that the possibility of extinction “keeps him up at night,” according to an article published this month in the journal Nature.


Bengio called for the adoption of “non-agentic” models (systems without goals of their own) to enhance trustworthiness, drawing on the International AI Safety Report 2025, which he chaired.


Killer Robots and Automated Warfare


In May 2025, UN Secretary-General António Guterres described autonomous weapons as "politically unacceptable" and "morally repugnant," calling for a legally binding treaty banning them by 2026 and guaranteeing genuine human control over the decision to use force.


These warnings draw on UN reports and expert opinions indicating that swarms of drones and automated target selection threaten international humanitarian law and create accountability gaps that cannot yet be bridged technically or legally. This makes automated warfare one of the likeliest paths to widespread harm in the short term.


Economic and Social Disruption


Among the existential concerns Hinton raises is accelerating job loss without alternatives, coupled with a deepening concentration of wealth. This would disrupt the consumption model, as consumers lose the financial capacity to pay for products.


Hinton warned that systems are not prepared to adapt to the transition to a deeply automated economy.

Last month, some tech giants began implementing major layoff plans as they increasingly adopt AI-powered robots. Amazon eliminated approximately 14,000 jobs, while Barclays Bank is seeking to cut thousands more and replace them with AI.


It may not be the end of the world, but AI will inevitably cause social and economic upheavals potentially more profound than those of previous technological revolutions, requiring fair transition policies and coherent governance of the technology.


In their new book, "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All," Eliezer Yudkowsky and Nate Soares warn of runaway technological scenarios drawing on models of human evolution. However, they do not offer a complete picture of what might happen, especially since "superintelligence" has yet to materialize.
