AI Apocalypse: Misaligned Objectives or Poor Quality Control?

An author claims to have distilled down the AI risks of greatest importance, which they refer to as "misaligned objectives":

The point is: nobody ever intends for robots that look like Arnold Schwarzenegger to murder everyone. It all starts off innocent enough – Google's AI can now schedule your appointments over the phone – then, before you know it, we've accidentally created a superintelligent machine and humans are an endangered species.

Could this happen for real? There's a handful of world-renowned AI and computer experts who think so. Oxford philosopher Nick Bostrom's Paperclip Maximizer uses the arbitrary example of an AI whose purpose is to optimize the process of manufacturing paperclips. Eventually the AI turns the entire planet into a paperclip factory in its quest to optimize its processes.
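To see why this parable worries people, here is a minimal sketch (my toy caricature, not Bostrom's actual formalism) of an optimizer given a single unbounded objective and no side constraints:

```python
# Toy caricature of the Paperclip Maximizer: the objective function
# says only "more paperclips", so nothing in it ever says "stop".

def paperclip_maximizer(resources: float, cost_per_clip: float = 1.0) -> int:
    """Convert every available unit of resource into paperclips."""
    paperclips = 0
    while resources >= cost_per_clip:
        resources -= cost_per_clip
        paperclips += 1
    return paperclips

# Whether "resources" denotes a budget or a planet is outside the
# objective entirely -- which is the whole point of the parable.
print(paperclip_maximizer(resources=100.0))  # 100 clips, zero resources left
```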

Ok, first, it is false to say nobody intends for robots to murder everyone. Genocide is a very real thing. Mass murder is a very real thing. Automation definitely has been part of those evil plans.

Second, it seems to me this article misses the point entirely. "Misaligned" objectives may in fact be perfectly aligned, just carried out in unexpected or sloppy ways that people are reluctant to revise and clarify.

It reminds me of criticisms in economics of using poor productivity measurements, a constant problem in Soviet Russia (e.g. window factories spat out panes that nobody could use). Someone is benefiting from massive paperclip production; the real question is who retains authority over the output.
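A minimal sketch of that measurement problem (the pane details are my hypothetical, not from any Soviet production record): when the planner's metric is a proxy, a factory can score perfectly while producing nothing anyone can use.

```python
from dataclasses import dataclass

@dataclass
class Pane:
    area_m2: float   # what the plan measures
    usable: bool     # whether it actually fits a window frame

def plan_score(output: list[Pane]) -> float:
    """What the central planner rewards: total glass area shipped."""
    return sum(p.area_m2 for p in output)

def real_value(output: list[Pane]) -> float:
    """What society actually gets: area of panes that fit a frame."""
    return sum(p.area_m2 for p in output if p.usable)

# A factory "optimizing" the plan ships huge, thin, unusable sheets.
output = [Pane(area_m2=4.0, usable=False) for _ in range(100)]
print(plan_score(output))   # 400.0 -- record productivity on paper
print(real_value(output))   # 0.0   -- not a single usable pane
```

The objective ("produce glass") was never misaligned; the quality control over what counted as glass was.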

If we're meant to be saying a centrally-planned, centrally-controlled system of paperclip production is disastrous for everyone but dear leader (in this case an algorithm), we might as well be talking about market theory texts from the 1980s.

Let's move away from these theoretical depictions of AI as future Communism and instead consider a market-based application of automation today that kills.

Cement trucks already have automation in them. They repeatedly run over and kill cyclists not because of misaligned objectives, but because they are allowed to operate with too wide a margin of error in a society with vague accountability.
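Here is a hedged sketch of that failure mode (the threshold numbers are hypothetical, not any vendor's or city's real standard): the objective, drive the route, stays perfectly aligned, and the harm comes entirely from how wide an error margin quality control accepts.

```python
def passes_quality_control(clearance_m: float, required_margin_m: float) -> bool:
    """Accept any pass of a cyclist that leaves at least the required clearance."""
    return clearance_m >= required_margin_m

# With a vague, permissive standard, a 10 cm pass is "expected operations".
print(passes_quality_control(clearance_m=0.1, required_margin_m=0.0))  # True

# With an accountable standard (e.g. a 1.5 m safe-passing rule, a figure
# used here purely for illustration), the same behavior fails inspection.
print(passes_quality_control(clearance_m=0.1, required_margin_m=1.5))  # False
```

Same objective in both runs; only the accountability threshold changed.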

Take, for example, how San Francisco has just declared a state of emergency over pedestrians and cyclists dying from automated killing machines roaming city streets.

As of August 31, the 2019 death toll from traffic fatalities in San Francisco was 22 people — but that number doesn’t include those who were killed in September and October, including Pilsoo Seong, 69, who died in the Mission last week after being hit by a truck. On Tuesday, the Board of Supervisors responded to public outcry over the issue by passing a resolution to declare a state of emergency for traffic safety in San Francisco.

Everyone on the street has the same or similar objectives of moving about; it's just that some are allowed to operate at such low levels of quality that they can indiscriminately kill others and call it within their expected operations.

I can give hundreds of similar examples. Jaywalking is an excellent one, as machines already have interpreted that racist law (a human objective to criminalize non-white populations) as a license to kill pedestrians without accountability.
