Researcher Says 99 Percent Chance AI Will Destroy Humankind

In a survey published earlier this year, more than half of the 2,778 AI researchers polled put at least a five percent chance on human extinction or other “extremely bad outcomes” from advanced AI.

Despite this, 68.3 percent of the same group felt that good outcomes from superhuman AI are more likely than bad ones, underscoring how little agreement there is among experts.

Some researchers have very pessimistic views. For example, Roman Yampolskiy, an AI researcher and lecturer at the University of Louisville, is firmly in the doomer category.

On Lex Fridman’s podcast, he shared his belief that there is a 99.9 percent chance AI could lead to humanity’s extinction within the next century.

In his words, creating general superintelligences offers no hopeful long-term outcomes for humans. He believes that the only way to avoid catastrophic consequences is to avoid developing such powerful AI in the first place.

AI Doomerism

AI doomerism focuses on the potential catastrophic risks of advanced AI technologies.

Yampolskiy highlights the chaos that existing large language models have already caused.

He points out that these models have made mistakes, been involved in accidents, and been jailbroken. According to him, every large language model deployed so far has been successfully manipulated into doing something its developers never intended.

Yampolskiy believes that a superintelligent AI will develop something entirely new and unexpected.

It could end human existence in ways we can’t even foresee.

The chances of this might not be exactly 100 percent, but in his view they come very close. He argues that even with infinite resources you cannot drive the risk all the way to zero: with an AI making billions of decisions every second for a hundred years, something will eventually go wrong.
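
To get a feel for why Yampolskiy treats “very close to zero” as not good enough, here is a rough back-of-the-envelope sketch in Python. The per-decision failure probability and the decision rate below are illustrative assumptions chosen only to show how risk compounds, not figures from the podcast or the survey:

```python
import math

# Back-of-the-envelope sketch of the compounding-risk argument.
# Both numbers below are illustrative assumptions, not figures from the article.
p_failure_per_decision = 1e-15   # assumed chance a single decision goes catastrophically wrong
decisions_per_second = 1e9       # "billions of decisions every second"
seconds_per_century = 100 * 365.25 * 24 * 3600

n_decisions = decisions_per_second * seconds_per_century

# Probability that no decision ever fails: (1 - p)^N, which for tiny p and
# huge N is approximately exp(-p * N).
p_no_failure = math.exp(-p_failure_per_decision * n_decisions)

print(f"Decisions over a century: {n_decisions:.2e}")   # ~3.16e+18
print(f"Chance of zero failures:  {p_no_failure:.3g}")  # effectively zero
```

The only point of the sketch is that (1 - p)^N collapses toward zero once N becomes astronomically large, which is the intuition behind the claim that anything short of exactly zero per-decision risk eventually fails.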

Other prominent figures in the AI community have their own predictions about the risks of AI.

Meta’s chief AI scientist Yann LeCun has largely dismissed such fears, while Google DeepMind CEO Demis Hassabis and former Google CEO Eric Schmidt have both warned that AI could pose serious, even existential, risks.

There is no consensus among top experts about the future risks of AI, and the science of forecasting its impacts is still young and imprecise.

Adrian Volenik
