Some of the risk categories Preparedness is charged with studying seem more . . . far-fetched than others. For example, in a blog post, OpenAI lists “chemical, biological, radiological and nuclear” threats as areas of top concern as they pertain to AI models.
OpenAI CEO Sam Altman is a noted AI doomsayer, often airing fears — whether for optics or out of personal conviction — that AI “may lead to human extinction.” But telegraphing that OpenAI might actually devote resources to studying scenarios straight out of sci-fi dystopian novels is a step further than this writer expected, frankly.
Look, I'm going to be blunt here: Sam Altman isn't worried about Skynet taking over; he's worried about public backlash against OpenAI. This is nothing but a preemptive PR campaign aimed at controlling the narrative by playing up existential threats and claiming to be prepared for them.
Does Altman really lose sleep over rogue AI ending civilization? I doubt it. This reads like fear-mongering designed to make OpenAI look like a responsible steward of AI research. Crying wolf about Terminator scenarios lets the company frame its rapid progress as "safe" under its own watch. It's pure optics.
Make no mistake: Altman isn't genuinely bracing for these nightmare scenarios. He cares about shaping the narrative on AI risk to cast OpenAI in a positive light, and all the rhetoric about studying doomsday threats is cynical posturing meant to get ahead of public concern.