
Can Robots Feel Pain?

I got to thinking about this question today after reading about the death of Victoria Braithwaite, a biologist who believed that fish feel pain (and feel happier in tanks decorated with plants).

Lots of experts pushed back on her research findings earlier this decade, claiming that fish brains lacked a neocortex, which meant they weren’t conscious, so whatever Dr. Braithwaite observed was an autonomic reaction to unpleasant stimuli.

So pulling my hand out of a fire would be a reaction, not an experience of pain?

The questions Dr. Braithwaite explored remain murky and unanswered.

Since nobody can explain consciousness, it’s interesting that it was used to explain pain, but I get why: Pain isn’t a thing but an experience that is both subjective and endlessly variable.

The pain I feel, say, from a paper cut or after a long run, may or may not be similar to the pain you feel. I assume you feel it, but there’s no way to know. I might easily ignore a sensation that feels absolutely terrible to you, or vice versa. There’s no pain molecule that we can point to as the cause of our discomfort.

Some people develop sensations of pain over time, like a sensitivity to light, while marathon runners learn to ignore it. Drugs can mediate what and when we feel pain, even as whatever underlying condition causes it remains unaffected. Amputees report feeling pain in limbs they’ve lost.

Pain isn’t only an outcome of our biological coding; it’s interpretive. A lot of pain is unavoidably painful — a broken arm hurts no matter how much you want to ignore it — but pain isn’t just a condition, it’s also a label.

So is consciousness.

We can describe consciousness — a sense of self (whatever sense means), integrative awareness of our surroundings (whatever awareness means), and a continuous internal mechanism for agency that isn’t dependent solely on external physical stimuli (whatever, well, you get it) — but we don’t know where it is or how it works.

“I think, therefore I am” is as much an excuse as an explanation.

In fact, we can more accurately explain pain as an outcome of evolution, as it helps us monitor our own conditions and mediate our actions toward others. But consciousness? Scientists and philosophers have agreed on little other than calling it the hard problem.

The answers matter to how we treat other living things, including artificial ones.

The confidence with which Dr. Braithwaite’s opponents used consciousness to dismiss her findings reminds me of Lord Kelvin’s declaration back in 1900 that there was nothing left to discover in physics, only measuring things better (he also didn’t believe that airplanes were possible).

It also allows not only for the merciless torture of aquatic creatures, as anybody who has heard the squeals of live lobsters dumped into boiling pots can attest, but also for the practices of industrial farming that crowd fish, pigs, chickens, and other living things into cages and conditions that would be unbearable if they could feel pain.

I can imagine the same glib dismissal of the question if asked about artificial intelligence. There are many experts who have already opined that computers can’t be conscious, which would mean they couldn’t feel pain. So even if an electronic sensor could be coded to label some inputs as “painful,” it wouldn’t be the same thing as some of the hardwiring in humans (such as the seemingly direct connection between stubbed toes and swear words).
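
To make the point concrete, here’s a minimal sketch (in Python, with invented names and an arbitrary threshold, none of it drawn from any real system) of what “coding a sensor to label some inputs as painful” could look like:

```python
# A hypothetical pressure sensor whose readings get tagged as "painful"
# above an arbitrary cutoff. All names and values here are invented for
# illustration; no real robot or library is being described.

PAIN_THRESHOLD = 80.0  # arbitrary cutoff on a 0-100 pressure scale

def classify_reading(pressure: float) -> str:
    """Attach a label to a raw number. This is bookkeeping, not
    experience: the "pain" exists only as a string we chose."""
    return "painful" if pressure >= PAIN_THRESHOLD else "ok"

for reading in (12.5, 47.0, 93.2):
    print(f"pressure={reading} -> {classify_reading(reading)}")
```

The label is just a string attached to a number; whether a machine that learns and acts on its own could ever be more than this kind of bookkeeping is exactly the open question.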

Some say that computers can mimic pain and other human qualities, but that those feelings aren’t real. What we feel is real because, well, we know what feeling feels like, or something like that. AI can’t, just like fish and those four-legged animals we torture and kill with casual disregard.

This allows the robotics revolution to focus solely on applying capital to creating machines that can do work tirelessly and without complaint.

But what if we’re wrong?

Dr. Braithwaite dared to challenge our preconceived (and somewhat convenient) notions about awareness and pain. What if our imperfect understanding of our own consciousness leads us to understand AI imperfectly? Could machines that can learn on their own, and have agency to make decisions and act on them, somehow acquire the subjective experiences of pain or pleasure?

When the first robot tells us it’s uncomfortable, will we believe it?

[Read more about robot rights at DaisyDaisy]

By Jonathan Salem Baskin

I'm a writer, musician, and science junkie.
