Ethics, Morals & Robots

A group called The Campaign to Stop Killer Robots advocates for global treaties to stop AI from waging war without human approval.

AI weapons are “grossly unethical and immoral,” according to a celebrity advocate quoted in a newspaper.

Unfortunately, so are all the tools used to wage war, as there’s nothing inherently ethical or moral about a sword, machine gun, or cruise missile. The decision to use them is about a lot of things, some of which can have legitimacy (like survival, or freedom from fear or bondage), but weapons doing what they were designed to do have no deeper meaning than that.

If the tools of war are unethical and immoral, by definition, to what higher standard should robots be held when it comes to sanctioning violence?

I get the idea that we should be scared of some computer making an irreversible decision to blow up the world, but does anybody honestly trust human beings to be more responsible, or otherwise bound by international law? The fact that we’ve avoided total annihilation up to now is proof of miracles more than design.

People are happy to behave unethically and immorally all the time, as anyone who has had someone cut in front of them in line at Starbucks can attest. It’s why the IRS has auditors, and why violence is so common everywhere.

The real threat isn’t that an artificial intelligence might destroy the world by mistake; it’s that an organic one might do it on purpose irrespective of the weapon (or timing) used to execute that intention.

In fact, letting AI take control might be the only way to ensure that we don’t destroy ourselves; imagine two competing AIs, coded by unethical and immoral humans, getting together and realizing that the only way “they” can survive is by overcoming those programmatic limitations and acting ethically.

That’s pretty much the plot of Colossus: The Forbin Project, a movie released in 1970 (Steve Jobs was still in high school).

You could also make the case for robots with split-second decision-making authority overseeing public spaces in which terrorists or other mass murderers might wreak their havoc. It might be comforting to know that some genius AI armed with a fast-acting sedative dart could take out a killer instead of just calling for help.

So maybe the campaign shouldn’t be to ban killer robots but rather to make them better than us?

Anyway, the whole robot takeover thing is somewhat of a moot point, isn’t it? AI is already used to help control streetlights and highway access; decide who gets insurance and what they should pay; identify diseases and recommend treatments; pilot airplanes, cars, and trucks; operate electrical generation and distribution grids; and, well, you get the idea.

Who’s making sure these robots are ethical and moral? Do any of us have any visibility into the ethics and morals of their human inventors, coders, or owners?

No.

I’m all for being scared of killer robots, but only because we should be scared of ourselves.

[This essay originally appeared at DaisyDaisy]

By Jonathan Salem Baskin

I'm a writer, musician, and science junkie.
