Why Did We Get HAL 9000, Not C-3PO?

Nicholas Carr wrote a great essay in the New York Times, in which he points out that we didn’t get the robots we were promised. It’s worth chewing on for a bit.

He argues that the humanoid robots we fantasized about in the 20th century reflected our desire for physical order, and that the talking coffee cans we got instead are better suited to lives of self-absorbed, internally focused experience.

It didn’t help that human beings are horrendously complicated machines, so the technical challenge went far beyond enabling robots to think. Maybe that’s why today’s mobile robots are anthropomorphic mutants, like a Roomba that vacuums without arms or legs, or bomb-sniffing robots that consist of a single arm mounted on a tractor.

But I’d suggest the bigger issue is that we realized there was no reason to try to build them, as illustrated by the evolution of VR.

Back in the waning days of the last century, most of us assumed that our online presence would need to take some physical form. Even early online games, like EMPIRE on the University of Illinois’ PLATO network, gave players rudimentary images of the spaceships they were piloting. When Second Life came around in the early 2000s, it let people create online avatars that moved through 3D space.

Despite the glowing headlines about virtuality (including companies declaring that they were headquartered there), it was already outdated. SMS and online chat had enabled people to communicate, so there was no need for clunky cartoon representations. The whole point was to trade information; we could do people things without looking like people, and a text box was a far superior interface.

The same thing is happening with robotics in geophysical space.

We assumed initially that bringing artificial intelligence into our lives meant giving it familiar, human-like form. That also meant imagining that robots would do tasks the same way we would, which would require arms and legs and, ideally, a friendly smile.

Instead, robots are embedded in devices that are already in our lives, made “smart” by giving them access to information. Rosie, the maid from The Jetsons, is distributed among intelligent washers, dryers, vacuums, and other machines. The irritating Johnny Cab taxi driver in Total Recall is a set of sensors and circuits spread through hundreds of parts of a car, with its brain in the cloud.

The ways these robots work are more subtle and their authority less obvious, but their effects are still overt. Cory Doctorow sees this development as potentially sinister, since intelligence working beyond users’ sight or understanding can do things like cheat us (or worse). He likens it to the days when medieval alchemists saw agency in nature as the work of demons.

It would be somewhat comforting to watch robots drive our cars and clean our homes, inasmuch as appearing before us in friendly and familiar shapes would give us some sense of control.

But that control would be ephemeral. The motivations and potential of an artificial intelligence, even one of limited capabilities, are opaque to users, irrespective of form. Discovering the mind of the Amazon Echo on your countertop is no easier than divining the machinations of Sonny in I, Robot. All of the decisions that robots working in Amazon’s warehouses make automatically by algorithm (affectionately called “machine learning”) happen in an instant, exercising agency without human oversight.

Further, physical form has always been a poor guide for judging motivations or bestowing trust. Good-looking people can be ugly inside, and vice versa. Doctorow’s demons hold sway over human actions, only we call them souls.

So we did get the robots we were promised; they just don’t look the way we expected. They were never going to do things the way we expected either, inasmuch as there’s no way we could ever know their minds…just like we don’t know ourselves or one another.

Siri is HAL 9000, only with a sweeter voice. Let’s hope her intentions are nicer, too.

[This essay was originally published at Recapitalism]