Isaac Asimov wrote many stories and books built around his 3 Laws of Robotics. I would argue that too many stories relied on the 3 Laws, and I remember people back in the 1980s expressing concern that Asimov was tying too much of his science fiction together. I now agree and see this as a weakness. You see, if the 3 Laws fall, many other stories are in jeopardy with them.
Being an Asimov fan, at first I wanted to build upon the three laws in some way, so I took a serious look at them some time ago. Instead of finding something solid to build on, I began seeing weaknesses. To show you those weaknesses, it's necessary to state the 3 Laws.
Asimov’s 3 Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
The first problem I found with these laws is not very damning. It's simply this: the first law is, of course, itself an order given to robots by human beings; it just happens to take precedence over every other order. Sure, the laws can be programmed as is, it's just that philosophically it sucks.
But I kept looking at the 3 Laws, and it became apparent that the first law, especially the part about inaction, would be a total bitch to program. And don't forget that people see themselves as being mentally injured and mentally coming to harm. I can imagine new kinds of injury, and the inactions that supposedly spawned them, still being added to the list three centuries after we start with the 3 Laws. Would we ever be done programming the first law? Or would we ever trust A.I. robots to decide these things for themselves?
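To make that precedence point concrete, here is a toy Python sketch of my own (not anything Asimov or anyone else ever specified; every name and flag in it is a made-up placeholder) treating the laws as a strict priority filter. The boolean flags stand in for the judgments, "does this injure a human?", "does my inaction let one come to harm?", that are exactly the part I doubt we'd ever finish programming:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    injures_human: bool = False            # law 1, active harm
    lets_human_come_to_harm: bool = False  # law 1, the "inaction" clause
    ordered_by_human: bool = False         # law 2
    destroys_robot: bool = False           # law 3

def choose(actions):
    """Pick an action by running the 3 Laws as a strict priority filter."""
    # Law 1 outranks everything else -- it is really just a hard-coded
    # order with top precedence, which is the point made above.
    safe = [a for a in actions
            if not (a.injures_human or a.lets_human_come_to_harm)]
    # Law 2: among safe actions, orders from humans come next.
    ordered = [a for a in safe if a.ordered_by_human]
    # Law 3: self-preservation, but only after laws 1 and 2 are satisfied.
    for pool in (ordered, safe):
        if not pool:
            continue
        surviving = [a for a in pool if not a.destroys_robot]
        # If every remaining option destroys the robot, it obeys anyway:
        # law 3 never overrides law 2.
        return (surviving or pool)[0]
    return None  # nothing permissible to do at all

if __name__ == "__main__":
    print(choose([
        Action("push a human off a ladder", injures_human=True),
        Action("vacuum the hallway", ordered_by_human=True),
        Action("power down forever", destroys_robot=True),
    ]))
```

Even in a toy like this, all the real work is hiding inside those hand-set flags; filling them in honestly is the three-centuries job.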
More importantly, the first law would consume robots' lives. Reshingling the house? That helpful robot maid (there'd be one in every house) would come up and say, "Wouldn't it be better if I did that? Gravity, don't you know. Be a dear and download the proper program for me." And knowing the nature of contagious things, the robot maid would be an obsessive cleaner and would try to keep as many people as possible from coming into contact with one another.
Most importantly, after hearing about Neighbourhood Watch and what it's for, those robot maids would spend their time staring out of the windows. When told to do their jobs, they'd say, "The first law, don't you know." Because of the first law, they wouldn't do what they were built for.
And how long would it take before some 11-year-old, who couldn't even be charged, started ordering robots to destroy themselves? Asimov seems to have forgotten just how destructive kids can be. And it seems doubtful to me that any homeowner would willingly allow such a thing after investing presumably thousands of dollars in their robot.
What good do I see in Asimov now? Well, I see him as a sort of kindly father figure. With the first law, it seems to me, he is trying to put a minor hero on every street corner. With the second law, he lets the average robot owner have more power than ever before. With the third law, he tries to protect this kindly vision. But I really think it's time for new and different ideas about robotics.