Elon Musk: Superintelligent AI is an Existential Risk to Humanity

Elon Musk considers the advent of digital superintelligence a far more dangerous threat to humanity than nuclear weapons, and he argues that the field of AI research must be subject to government regulation. The dangers of advanced artificial intelligence were popularized in the late 2010s by Stephen Hawking, Bill Gates, and Elon Musk, with Musk probably the most famous public figure to express concern about artificial superintelligence. Existential risk from advanced AI is the hypothesis that substantial progress in artificial general intelligence could someday result in human extinction or some other unrecoverable global catastrophe.