The Development and Advancement of AI in I, Robot by Isaac Asimov
The science-fiction collection I, Robot by Isaac Asimov is a must-read for anyone beginning to develop an interest in robotics. Its nine short stories imagine the development of the positronic brain, which gives robots intelligence equal or superior to that of humans, and explore the moral implications of the technology. Among them, I find the first story, about Robbie, a robot nursemaid for a girl named Gloria, particularly interesting and the easiest to relate to. Gloria’s mother loathes robots and conspires to get rid of Robbie. Gloria is heartbroken, and her parents try to convince her that robots are not human. But when Gloria’s life is in danger, Robbie saves her, and everyone comes to appreciate robots.
We can easily imagine ourselves in a similar situation forty or fifty years from now, when our grandchildren might be looked after by such robots. A child is bound to get attached to its toys, especially human-like ones. Every coin has two sides, and I would like to discuss both of them here.

On the one hand, a robot nanny is desirable, but not to the extent that children start disliking actual humans. A computer can store and process far more information than a child can absorb, so leaving a child to develop under a robot’s care could be dangerous: the robot might push far more knowledge onto the child than is appropriate for its age. Imagine a seven-year-old who knows the theory of relativity and the concepts of quantum physics while other children are still learning basic arithmetic. Such a child could develop a sense of superiority and become arrogant, not only towards its friends but also towards its own parents. This would also disturb the child’s social life, just as Gloria never wanted to go out with children her own age but stuck to Robbie. A mother, in particular, is responsible for instilling moral values in her children; but left under the supervision of a robot, what values will the child learn, and will it be able to tell right from wrong? As we discussed in our previous book, building a perfect moral machine that can distinguish right from wrong still seems difficult. Moreover, the chapter “Liar!” shows how a mind-reading robot gave people the answers they wanted to hear, in order to satisfy them and thus respect the First Law of Robotics: not to hurt humans, physically or mentally. If a child does something unethical, the parents take a strong stand against it to teach a lesson, but such a robot would rather endorse the wrong deed. And the Second Law of Robotics says that a robot must follow human orders.
So when the child becomes a teenager, and the robot is of the opposite sex and equally attractive, one can imagine what orders the teenager might give to satisfy lustful desires. This too would affect the teenager’s social behavior.
At the other end of the scale, we have robots designed to provide social care to humans. More sophisticated robots can act as companions that move along with their users, fetch and carry, issue reminders about appointments and medication, and send out alarms in certain kinds of emergencies. They expect neither respect nor a salary. Today we already have robots that can detect our state of mind and emotions and respond with possible solutions; these could be a good alternative for people who are lonely or depressed. Thus, I believe there are both pros and cons to having a robot nursemaid.
I believe that even though Asimov’s laws are organized around the moral value of preventing harm to humans, they are not easy to interpret, and we need to stop viewing them as an adequate ethical basis for robotic interactions with people. Part of the reason the laws seem plausible is the fear that robots might harm humans; most of us have read about malfunctioning autonomous cars causing fatalities in the US.
Also, AI mostly deals with training robots to adjust their behavior to new situations, and this behavior can sometimes be unpredictable, so Asimov was right to worry about unexpected robot behavior. But when we look more closely at how robots work and the tasks they are designed for, we find that Asimov’s laws fail to apply clearly. Take military drones: robots directed by humans to kill other humans. The very idea seems to violate Asimov’s First Law, which prohibits a robot from injuring a human. Yet if a robot is directed by a human controller to save the lives of its fellow citizens by killing attacking humans, it is both obeying and disobeying the First Law; the equilibrium would shift back and forth between the First and Second Laws, producing the kind of deadlock described in the chapter “Runaround”. Nor is it clear whether the drone is responsible when someone is killed in these circumstances. Perhaps the human controller is responsible, but a human cannot break Asimov’s laws, which are directed exclusively at robots. Meanwhile, armies equipped with drones may vastly reduce the total loss of human life: not only is it better to use robots rather than humans as cannon fodder, but there is arguably nothing wrong with destroying robots in war, since they have no lives to lose and no personality or personal plans to sacrifice.
The First Law would also be a problem during robot-assisted surgery, since the skin must be cut in order to treat the patient, so the law would need to be modified. Robots working in industries that handle hazardous chemicals would experience frequent conflicts between the Second and Third Laws. I read in an article that the US is exploring the use of robot judges in courts. How ethically could a robot judge a person? Here too the First Law is simultaneously followed and violated, since both the citizens to be protected and the criminals to be punished are human.
In the story of the Mercury expedition, Mike did not put any emphasis on his order to bring selenium from the pool, and this casual command was the cause of the whole predicament. In daily life, we would not always remember to dictate every order emphatically; we would order casually. So a robot must have priority settings as well. Moreover, Powell and Mike were present to resolve the conflict between the laws, but what should be done when no human is physically present?
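The idea of priority settings above can be sketched in code. The following is a purely illustrative toy model, assuming simple numeric weights (the names and numbers are my own inventions, not anything from Asimov): each law contributes a weighted potential for or against an action, and a casually given order can be outweighed by danger to the robot, leaving it stuck, much like Speedy circling the selenium pool.

```python
# Toy sketch of weighted law-conflict resolution (illustrative assumptions only).
# First Law outranks Second, which outranks Third.
LAW_WEIGHTS = {1: 100, 2: 10, 3: 1}

def action_potential(harm_to_human, order_emphasis, danger_to_robot):
    """Score an action: a positive total drives the robot to act,
    a negative total drives it away; near zero, it dithers."""
    return (-LAW_WEIGHTS[1] * harm_to_human      # First Law: avoid harming humans
            + LAW_WEIGHTS[2] * order_emphasis    # Second Law: obey human orders
            - LAW_WEIGHTS[3] * danger_to_robot)  # Third Law: protect itself

# A casual order (low emphasis) is outweighed by the danger, so the robot
# refuses or dithers; an emphatic order tips the balance toward obeying.
casual = action_potential(harm_to_human=0, order_emphasis=0.5, danger_to_robot=6)
emphatic = action_potential(harm_to_human=0, order_emphasis=2.0, danger_to_robot=6)
print(casual, emphatic)  # casual is negative, emphatic is positive
```

Even this crude sketch shows why hard-coded priorities are fragile: the outcome hinges entirely on arbitrary weights that no one is present to tune when the robot is alone.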
The chapter “Reason” is thought-provoking because it shows how a robot’s faith regarding its Master changes through its own peculiar logical reasoning. We discuss robots on the assumption that they will serve humans, but here the robot Cutie, programmed according to Asimov’s laws, demands a clear logical reason why humans should be its masters. Such faulty robots might corrupt other robots and build an army of their own, threatening human civilization.
We live in a world where cybercrime is rampant, so anyone of high intellect could hack these powerful machines, change the laws, and create havoc. Ultimately, everything comes down to the idea of developing moral machines through top-down or bottom-up approaches, in which each task is divided into programmable subtasks; but even that may not be feasible. There is also the question of what counts as harming a human being. Harm can be emotional or psychological and inflicted by other humans, in which case it is not caused by any direct action of the robot. How will robots deal with such situations?
The other big issue with the laws is that robots would need a significant advance in AI to be able to follow them. We want to develop robots that can think and act both rationally and like a human, but this field has not yet been researched well, so a robot could operate only within a very limited sphere. Implementing such laws would also require enormous computational power. Overall, in my opinion, Asimov’s laws are necessary but not sufficient, and their efficient implementation remains a million-dollar question!