A new statement warning about the risks of AI was signed by the likes of OpenAI’s Sam Altman and Turing Award-winner Geoffrey Hinton. But critics question whether such public figures are really operating in good faith.

What is the real point of all these letters warning about AI?

[Photos: Miguel Á. Padriñán/Pexels; Tara Winstead/Pexels]

BY Mark Sullivan · 3 minute read

Hundreds of AI researchers, computer scientists, and executives signed a short statement on Tuesday arguing that artificial intelligence could be a threat to the very existence of the world as we know it.

The statement, which was created by the Center for AI Safety and clocks in at all of 22 words, reads: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

The list of signatories includes Sam Altman, CEO of Microsoft-backed OpenAI; Google DeepMind CEO Demis Hassabis; Anthropic CEO Dario Amodei; Turing Award-winners Geoffrey Hinton (who recently left Google to speak more freely about AI’s risks) and Yoshua Bengio; and the philosopher and researcher Eliezer Yudkowsky, who has been one of AI’s most outspoken critics. (Notably, Yann LeCun, who shared the Turing Award with Hinton and Bengio, did not sign the warning.)

There are, in simple terms, two tiers of risk when it comes to AI: short-term harms, like whether AI might unfairly disqualify job applicants because of some bias in its training data; and longer-term (often more existential) dangers—namely, that super-intelligent artificial general intelligence (AGI) systems might one day decide that humans are getting in the way of progress and must thus be eliminated.

“The statement I signed is one that sounds true whether you think the problem exists but is sorta under control but not certainly so, or whether you think AGI is going to kill everyone by default and it’ll require a vast desperate effort to have anything else happen instead,” Yudkowsky writes in a message to Fast Company. “That’s a tent large enough to include me.”

Whether letters like the Center for AI Safety’s will cause any material change in the pace at which researchers at companies like OpenAI and Google develop AI systems remains to be seen.

Some even doubt the signatories’ intentions, questioning whether they’re acting in good faith at all.

ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
