An Irish bank’s computer system won’t charge large clients negative interest on their cash deposits.
Well, it can’t because of its programming, but isn’t an internal code the source of every moral decision?
“Negative interest” is the Orwellian label for the practice of charging people for saving money, and it has become popular as a way to boost EU economies (encouraging people to spend by discouraging them from saving is itself twisted Orwellian policy).
It seems that when Ulster Bank’s system was first programmed — back in the dark ages of the late 20th century — it was inconceivable that a bank would make depositors lose money when they tried to save it. Its creators, whether purposefully or not, imbued it with an inability to do so.
Think of it like a Y2K glitch of moral imagination, not just a programming shortcut.
Granted, the issue doesn’t rise to the level of weighing the implications of some nuanced choice, and I don’t think the bank’s system delivered any judgment when asked to remove cash from clients’ accounts.
But it’s an intriguing opportunity to ponder how we recognize and value intelligence and morality: just replace the computer display screen with a human employee who refuses to do something, no matter what the consequences, because she or he just knows it’s wrong.
We’d say that conclusion was the outcome of intelligence — perhaps inspired or ill-informed, depending on our biases about it — and we wouldn’t spend much time contemplating how or why it was reached. We’d label it an obvious effect of individual choice.
So how is the Ulster Bank computer’s action any different?
Skip its lack of body parts and its penchant for speaking only when spoken to, and doing so via (I assume) text on a screen. It has spoken in deference to the only way it knows how to act.
Didn’t this robot just come to the defense of depositors?