Mozilla has announced that it’s forming a “rebel alliance” of AI startups to combat Anthropic, OpenAI, and the other backers of Death Star AI.
It’s a cool idea that’s misguided and will ultimately fail.
What’s cool about it is that Mozilla is standing up for, well, everyone who doesn’t stand to make a killing automating everyone else’s lives (along with those of us who don’t want to live like we’re machines). With a fledgling venture fund and reserves of about $1.4 billion, which is a lot of money but not even a drop in the bucket compared to the financial resources of its opponents, it’s funding companies that will help keep AI “trustworthy & open.”
Mozilla earned its cred standing up to Microsoft, Google, and Apple during the Browser Wars. It is destined to lose this next fight.
The thing is, there’s no such thing as “trustworthy & open AI” and never will be. We are already being trained to be obedient to AI, oftentimes because it’s opaque and unavoidable. No AI developer worth their stock options is going to make their LLM open to others, and the more advanced models don’t even make perfect sense when they’re analyzed anyway (so there’s no good way to predict what the AIs will do next).
AI is being developed to make fundamental and irrevocable changes to how we live, work, and interact. The idea that Mozilla can offer a cozier alternative to this future is quaint.
What it should do instead is focus on slowing down the process. Less grand battle between the dark and light sides of The Force and more guerrilla resistance to an invading horde.
Mozilla should identify the key weaknesses of AI as it’s implemented now, whether on security or reliability (for instance), and fund entities that exploit them. Make them worse. Reveal them for the experiments on living subjects that they are.
Server farms sucking a gazillion gigawatts of power to run AI? Where are the gizmos that disrupt their operations? Where’s the software code that makes AIs act more insanely? How about some roving apps that corrupt AI collection of online data (or degrade the synthetic data it creates to feed itself)?
There’s no way to stop AI and little chance that Mozilla (or any other institution) could offer a competitive solution that was somehow “better” for the world. But any slowing of the mad rush to AI domination of our world could only yield benefits. Awareness of weaknesses and flaws could prompt more attention and time to be spent on repairing them. Risks revealed are risks that are impossible to ignore, and that may get folks addressing them.
I’m all for Mozilla’s rebel alliance, but it shouldn’t focus its time and money on destroying the Death Star.
Just impede its romp through the cosmos.