OpenAI is recruiting for a “Head of Preparedness.” I applied earlier today.
The company describes the role as being “directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.”
Of course, that’s mostly nonsense, or at best happy talk about unenforceable ethics: those evaluations and threat models rely on mathematical probabilities and on weighing risks against rewards in ways that may or may not conform to outcomes the rest of us would deem acceptable.
Turbocharge the engineers making those analyses with the idiocy of Effective Altruism, which gives its proponents a false belief that they can (and should) decide what’s best for the world, and the disconnect between OpenAI’s development work and the safety and health of the world just gets worse.
So, I didn’t really apply for that job. I want a different one: Head of Preparedness, only for the world.
You see, AI is going to destroy it. That’s not a possibility, but rather a stated intention. AI will destroy the world as we know it and give us a different one, maybe better in some ways, maybe worse in others. Gazillions of dollars are being spent to realize this purpose.
The risk isn’t that AI malfunctions along the way. The risk is that it does exactly what it’s being designed to do.
And the world isn’t prepared for it.
This is in large part a result of the propaganda that we got early on from OpenAI and the other movers and shakers in the AI field. Many of them proclaimed that AI might annihilate humanity or save it, while asking that governments rein in developers’ worse angels while keeping their hands off the better ones.
This paralyzed us from reaching any meaningful conclusions about it — the differences were just so immense and uncertain — though it emboldened a cottage industry of academics and think tanks dedicated to making those differences even harder to reconcile.
Lots of “conversations” popped up that had no purpose other than encouraging more conversations. This served to impede governments from imposing any meaningful regulation, beyond the occasional doffed hat to the hope that AI would visit the same changes on all of us equally.
So, none of us are prepared for what’s coming.
Jobs aren’t just going away; rather, the very nature of work — how it’s done, where, and by whom/what — will be changed forever. Businesses will have to operate differently and demand new ways to value and insure them. Economies will be upended, as humanity faces a future that looks nothing like the past.
This will change how we see ourselves, and it’ll be augmented by our ever-closer relationships with AI. Our relationships will be rewritten and our philosophies and religions challenged. Laws will need to be reimagined and rewritten.
None of these changes have anything to do with deciding whether or not AI is “good” or “bad.”
AI just is, and will be.
I think it would be amazing if OpenAI took the lead in defining the next phase of the conversation about AI preparedness by elevating it from an internal technical practice to a public project of helping the world manage what’s coming.
If it were that leader, it would not pretend that it can engineer away AI’s capacity for destruction, but rather admit that destroying the world is a feature, not a bug, and that the world needs to start working on what that’ll mean for us.
Let the developers do their best to make sure that their machines don’t malfunction. But help the rest of us prepare for when they work just like they’re supposed to work.
I’d take the job in an instant, though I’m not holding my breath for a callback.