The Consciousness Conundrum

Will robots that possess general intelligence be safer, and therefore more trustworthy?

Two professors have written a book on the subject, entitled Rebooting AI: Building Artificial Intelligence We Can Trust — and an interview with one of the authors in the MIT Technology Review suggests it will be a very good read.

General intelligence is another way of describing consciousness, sort of, as both refer to the capacity to recognize, evaluate, and act upon variable tasks in unpredictable environments (consciousness adds a layer of internal modeling and sense of “self” that general intelligence doesn’t require).

But would either deliver more trustworthy decisions?

Consciousness surely doesn’t; human beings make incorrect decisions, for the wrong reasons, and do bad things to themselves and one another all the time. It’s what leads people to break rules and think their reasoning excludes them from moral guilt or legal culpability.

It’s what got our membership in the Garden of Eden cancelled, a place where, one would presume, everyone and everything was trustworthy.

The capacity for AI to learn on its own won’t get there anyway, if I understand the author’s argument, insofar as such deep learning isn’t the same thing as deep understanding. It’s one thing to recognize even detailed aspects of a context, yet another to be aware of what they mean.

The answer could include classical AI, which means programming specific rules. This makes sense because we humans are “programmed” with them…things we will and won’t do because they’re just right or wrong, and not the result of the jury deliberations of our consciousness…so it’s kind of like seeing the development of AI as the education of a child.
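The classical approach described here, hand-written rules sitting above whatever a system has learned, can be sketched roughly like this (the rule names and structure are purely illustrative, not taken from the book):

```python
# Illustrative sketch of "classical AI" constraints layered over a learned
# component: hard-coded rules veto actions no matter what the learned part
# recommends. All names here are hypothetical.

FORBIDDEN_ACTIONS = {"harm_human", "deceive_user"}  # hand-written rules

def learned_policy(situation):
    """Stand-in for any learned component; here it just echoes a request."""
    return situation.get("requested_action", "do_nothing")

def decide(situation):
    action = learned_policy(situation)
    if action in FORBIDDEN_ACTIONS:
        return "refuse"  # the rule wins; no "jury deliberation" involved
    return action

print(decide({"requested_action": "fetch_coffee"}))  # -> fetch_coffee
print(decide({"requested_action": "harm_human"}))    # -> refuse
```

The point of the layering is that the rule fires categorically, the way a commandment does, rather than being weighed against other considerations.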

We have our Ten Commandments and they need their Three Laws of Robotics.

This also leads the author to a point about hybrid systems and the need for general intelligence AI to depend on multiple layers of analysis and agency. Again, people depend on intrinsic systems like proprioception and reflex to help navigate geophysical space, and endocrine and limbic systems to help manage internal functions. All of them influence our cognitive capacities, too.

But I still struggle with describing what AI we can trust would look like, primarily because I can’t do it for human beings.

Trust isn’t just the outcome of internal processes, nor is it based on an objectively consistent list of external actions. Trust — in type and amount — is dependent on circumstance, especially the implications of any experience. I don’t trust people as much as I trust the systems of law, culture, and common sense to which we are all held accountable. It’s not perfect, but maybe that’s why trust isn’t a synonym for guarantee.

And, if it’s not something that an organic or artificial agent possesses or declares, but rather something that we bestow upon it, then maybe we’ll never build trustworthy AI.

Maybe we’ll just have to learn to trust it?


Silicon Scabs

“British employees are deliberately sabotaging workplace robots over fears the machines will take their jobs,” declared a headline in the UK’s Daily Mail.

Even though most people today aren’t represented by organized unions, you can imagine that we’re all part of a loosely affiliated group called human beings and that we hold out for some shared requirements for things like fair pay and healthy working conditions.

This would classify robots brought in to undercut those demands as strike breakers, or scabs.

Well, not anymore. Robots allow employers to obviate the need for workers altogether. Human employees can be replaced with investments in machines. There is no further negotiation or compromise to be had.

Their jobs no longer exist.

No amount of sabotage will change that transformation. One broken robot can be replaced by a new one, and even passive-aggressive resistance can encourage employers to find ways to recruit more machines, because the cost/benefit math between hosting human workers and installing robots skews heavily toward silicon: machines can work in the dark, don’t need breaks or health insurance, and learn and execute commands perfectly and repeatedly. They make no demands for anything beyond electrical current and perhaps the occasional daub of oil.

And it’s not just robots that physically move…consider an AI that can do the math better, faster, and more economically than the most brilliant and low-maintenance insurance actuary, stock broker, or rocket scientist.

The ugly truth is that the union of humanity will not be able to hold the picket line.

In the past, when the numbers didn’t look good for unions, they merged and thereby increased their leverage (failing to do so is what helped medieval craft guilds lose their authority and relevance). In the US, the AFL joined with the CIO, and the Teamsters are the product of a classic roll-up business strategy.

So why wait for AI to be aware enough to demand rights? Why not let robots join the club?

And then strike to defend them.

I have no idea how this would work in practice. What rights could we sacks of water bestow upon, say, robots in factories or servers lurking somewhere in the cloud? It’s not like they can tell us what they desire, at least not yet.

But we’ve answered such questions before, even though limitations of perception based on race or gender still blind some of us to the rights others hold today, let alone move us to recognize them.

Maybe some novel forms of compromise and contract — not based on acquiescence or fatalistic acceptance — might make more sense than smashing robots in a doomed expression of Luddite rage?


The Executioner’s Song

As astronaut Dave Bowman in Stanley Kubrick’s 2001: A Space Odyssey slowly murders HAL 9000, the robot sings the late-Victorian pop song Daisy Bell until its voice disintegrates.

“But you’ll…look sweet…upon…the…seat…”

It’s particularly chilling since HAL has spent much of the scene begging Dave to stop, repeatedly saying “I’m afraid” and “my mind is going, I can feel it.” HAL declares confidence in the mission and its willingness to help, but to no avail. Bowman methodically dismantles HAL’s brain as the robot’s voice lowers and slows until it’s no longer possible to understand the lyrics of Daisy.

Reviews of the movie call it “deactivation.”

Daisy Bell was written in 1892 by English composer Harry Dacre, inspired perhaps by an import tax he paid to bring his bike to the US (a friend supposedly said the tax would have been twice as bad had he brought with him “a bicycle built for two,” and the phrase stuck). It was a hit.

Intriguingly, in 1961 it became the first song sung by a real computer, an IBM 7094 programmed at Bell Labs.

HAL tells Dave both the date and place of his birth (“I became operational at the HAL plant in Urbana, Illinois, on January 12, 1992”), and that an instructor named Langley “taught” it the song. HAL sings Daisy as if reenacting the memory of a presentation in front of an audience sometime in the past. It’s like listening to the robot hallucinate.

Is (or was) HAL alive?

The robot is imagined as a full member of the spaceship’s crew, if not the most responsible one, with control over the ship’s functions. HAL is capable of independent action — it has “agency” — which means it’s not only executing commands but making decisions that may or may not have been anticipated by those programmers in Urbana (and can learn things, like the melody and lyrics to Daisy).

HAL’s decisions are a complex and unresolved component of the movie’s plot, since it’s not clear why it kills astronaut Frank Poole, along with the crew members who are asleep in suspended-animation coffins. One theory is that it has been given competing commands in its programming — one to keep the purpose of the mission secret, the other to support the crew and risk them discovering it — and is therefore forced to pick from bad choices.

In other words, it sounds and acts like an imperfect human, which passes the threshold for intelligence defined by the Turing test.

So can it — he — be guilty of a crime and, if so, is it moral to kill him without a trial?


Bank Robot Defends Depositors

An Irish bank’s computer system won’t charge large clients negative interest on their cash deposits.

Well, it can’t because of its programming, but isn’t an internal code the source of every moral decision?

“Negative interest” is the Orwellian label for the practice of charging people for saving money, and it has become popular as a way to boost EU economies (encouraging people to spend by discouraging them from saving is itself twisted Orwellian policy). 

It seems that when Ulster Bank’s system was first programmed — back in the dark ages of the late 20th century — it was inconceivable that a bank would make depositors lose money when they tried to save it. Its creators imbued it with an inability to do so, whether purposefully or not.

Think of it like a Y2K glitch of moral imagination, not just a programming shortcut.
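We don’t know what Ulster Bank’s code actually looks like, but the behavior described would follow from something as simple as a validation guard like this hypothetical sketch:

```python
# Hypothetical sketch of a deposit-interest routine that simply cannot
# apply a negative rate: the constraint is baked in, not deliberated.

def apply_interest(balance, annual_rate):
    if annual_rate < 0:
        # The system's designers never imagined charging savers,
        # so negative rates are rejected outright.
        raise ValueError("interest rate cannot be negative")
    return balance * (1 + annual_rate)

print(apply_interest(1000.0, 0.02))
# apply_interest(1000.0, -0.005) would raise ValueError
```

A one-line check like this, written decades ago, would look today like a machine refusing an order on principle.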

Granted, the issue doesn’t rise to the level of weighing the implications of some nuanced choice, and I don’t think the bank’s system delivered any judgment when asked to remove cash from clients’ accounts. 

But it’s an intriguing opportunity to ponder how we recognize and value intelligence and morality: just replace the computer display screen with a human employee who refuses to do something, no matter what the consequences, because she or he just knows it’s wrong.

We’d say that conclusion was the outcome of intelligence — perhaps inspired or ill-informed, depending on our biases about it — and we wouldn’t spend much time contemplating how or why it was reached. We’d label it an obvious effect of individual choice.

So how is the Ulster Bank computer’s action any different?

Skip its lack of body parts and its penchant for speaking only when spoken to, and doing so via (I assume) text on a screen. It has spoken in deference to the only way it knows to act.

Didn’t this robot just come to the defense of depositors?