As technology continues to advance, growing ever more complex and developing increasingly human-like intelligence, questions about the fate of humanity inevitably arise. Here, experts discuss our technological future.


At the ICML Deep Learning Workshop 2015, six scientists from different institutions briefly discuss their views on the possibilities and perils of the technological singularity—the moment when humans create artificial intelligence so far advanced that it surpasses us (and perhaps decides to eradicate us).

Over the years, the singularity has been one of the most popular bets on what will cause the apocalypse (assuming an apocalypse happens at all).

Jürgen Schmidhuber (Swiss AI Lab IDSIA), Neil Lawrence (University of Sheffield), Kevin Murphy (Google), Yoshua Bengio (University of Montreal), Yann LeCun (Facebook, New York University), and Demis Hassabis (Google Deepmind) discuss their views on the possibility of singularity.

Watch the video below.


With the speed of advancements in robotics and AI, science fiction is quickly becoming science fact. Recent discussions include worries about how AI is already starting to take over some of our jobs, and how, in the future, there may be no roles left for humans.

Prominent figures such as Stephen Hawking, Bill Gates, and Elon Musk have publicly voiced their concerns regarding advancements in the artificial intelligence industry. Hawking warns that we could become obsolete: “It [AI] would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

On the other hand, some people argue that if AI ever exceeds our capabilities, it may treat us like gods, insisting that these robots will be our allies rather than our enemies. Some have even started to form a religion around these ideas, believing that god is technology—an ideology some refer to as the “rapture of the nerds.”

For now, we can only guess how things will go. But one thing is for sure: like any technology, AI is as susceptible to misuse as it is beneficial to mankind.


Here, Dario Amodei and Seth Baum discuss the concerns that actual researchers have regarding artificially intelligent systems – separating the hype from the honest conversations.

From Elon Musk to Stephen Hawking, individuals around the globe are calling for caution and warning about the potentially dangerous pitfalls of artificially intelligent systems. And no wonder. It seems that each day brings forth a new autonomous robot or a new use for an intelligent system. Add to that all the movies and books in which AI is the evil “bad guy,” and it’s easy to feel like undercurrents of fear are slowly permeating our society.

But how much of our fear is hype, and how much of it is justified?

We have previously covered the sensationalism and hype that often surround conversations about AI, but here, the Future of Life Institute brings experts together to take a deeper dive into these issues. Namely, they tackle the concerns and fears that actual researchers have about policy and the future of AI.

Dario Amodei has an extensive history with AI. Currently, he is working with OpenAI, a non-profit focused on AI research, and he is also the lead author of Concrete Problems in AI Safety. Seth Baum is the Executive Director of the Global Catastrophic Risk Institute.

Podcasts not your preference? You can read the transcript here.