Asimov's three laws of robotics and their relevance
Science fiction authors have long asked: what if one day “intelligent” robots surpass humans physically and mentally, become self-aware, and decide that they no longer need us? Asimov’s Three Laws of Robotics were proposed as an answer.
In the last century, science fiction writer Isaac Asimov repeatedly examined the principles of robotics in his works and offered a solution in the form of specific laws. Asimov’s Three Laws of Robotics and their relevance in our time raise many questions. At the beginning of their work, the manufacturers at Nanit Robots thought hard about how to make a robot that does not harm a person while remaining helpful and functional. Many robots already operate without human intervention: those that assemble cars in factories shut down as soon as a human approaches them, a self-driving car, on the contrary, keeps moving to avoid a collision, and a caretaker robot can react quickly to catch an older person before they fall.
Someday robots will become our companions and colleagues, and we must be prepared for more complicated situations, anticipating the ethical and safety issues that may arise along the way.
Asimov was an internationally renowned science fiction writer trained in biochemistry who became a professor at Boston University. During a literary career that began in 1939, he wrote or edited more than 500 books and an estimated 90,000 letters and postcards.
In 1942, in the short story “Runaround” (later collected in I, Robot), Isaac Asimov created a set of rules for his fictional world that every robot had to follow when interacting with humans. The author drew on history and the principle of “crime and punishment,” as in Faust. Twenty years later, he admitted that when he started writing in 1940, science fiction kept retracing one central plot: robots were created and then destroyed their creator. Knowledge, it turns out, has its dangers. But must knowledge itself be abandoned because of the risks it brings? His robot stories instead explore unusual and counterintuitive behavior as an unintended consequence of how a robot applies the Three Laws to the situation in which it finds itself. That is why the writer decided that in his stories, a robot would not “foolishly pounce on its creator without purpose.”
Asimov’s Three Laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
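The strict priority ordering of the laws can be made concrete with a minimal sketch. The `Action` fields and the `permitted` function below are hypothetical, not from Asimov or any real robot controller; the point is only that lower-numbered laws always veto higher-numbered ones, and that such a naive encoding hinges entirely on how "harm" is labeled in the first place:

```python
# A minimal sketch (hypothetical API) of the Three Laws as an ordered
# veto hierarchy: lower-numbered laws always override higher-numbered ones.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # would injure a human (or allow harm by inaction)
    ordered_by_human: bool = False  # a human asked for this action
    risks_self: bool = False        # would endanger the robot itself

def permitted(action: Action) -> bool:
    """Apply the laws in strict priority order."""
    # First Law: never harm a human, no matter what was ordered.
    if action.harms_human:
        return False
    # Second Law: obey orders (harmful ones were already vetoed above).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid self-destructive actions.
    return not action.risks_self

print(permitted(Action("push human", harms_human=True, ordered_by_human=True)))   # False
print(permitted(Action("enter fire to rescue", risks_self=True, ordered_by_human=True)))  # True
print(permitted(Action("jump off ledge", risks_self=True)))  # False
```

Notice how the whole scheme collapses if `harms_human` cannot be decided reliably, which is exactly the ambiguity Asimov's stories exploit.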
Although these laws sound plausible, scientists have argued at length that they are unworkable. Asimov’s own stories deconstruct them, showing the impossibility of creating robots that are safe, compliant, and reliable. Let us consider the three laws in more detail.
The First Law
In the real world, the First Law covers both workplace accidents and premeditated harm. History knows several such accidents: in 1979, a robot killed a worker at a Ford plant in Michigan, and two years later a robot at a Kawasaki plant pushed a worker into a grinding machine with a hydraulic lever.
Yes, these situations are rare, but over the years robots have crushed, struck, and even doused people with molten aluminum. The reason is that early models lacked modern safeguards: they could not sense a human presence within their operating range, and they were not equipped to stop their movements in dangerous conditions.
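The safeguard those early machines were missing can be sketched in a few lines. This is a toy illustration with a hypothetical sensor flag, not any real industrial controller: a presence signal unconditionally overrides the normal motion command, which is the behavior the factory robots described above now have and the 1979 and 1981 machines did not:

```python
# A minimal sketch (hypothetical sensor API) of a safety interlock:
# halt all motion whenever a presence sensor detects a human
# inside the robot's operating range.

def safety_interlock(human_detected: bool, motion_allowed: bool) -> bool:
    """Return whether the robot may keep moving this control cycle."""
    if human_detected:
        return False          # emergency stop: human inside the work cell
    return motion_allowed     # otherwise defer to the normal controller

print(safety_interlock(human_detected=True, motion_allowed=True))   # False
print(safety_interlock(human_detected=False, motion_allowed=True))  # True
```

The design choice matters: the sensor check comes first and cannot be overridden by the motion planner, mirroring how hardware e-stops sit outside the control software.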
The Second Law
The Second Law concerns the robot’s obedience. If a robot is successfully programmed to obey, the direct harm is minimal. Robots are very good at taking orders from humans, but it could be argued that such obedience is “detrimental” on an economic level: the development of robotics is likely to lead to a crisis as societies struggle to find new uses for human labor.
The Third Law
Humans create robots in their own image, endowing them with human characteristics. But can a robot be given feelings? And can a robot defend its existence if it does not know that it exists?
After studying Asimov’s work, many scientists have wondered about the self-preservation of robots. Commercial robots are costly, so building them in large numbers inevitably raises concerns among investors. Protecting those funds and indemnifying losses is possible but involves many nuances. In today’s environment, any application of the Third Law amounts to protecting people from financial as well as physical harm.
The paradox of the Three Laws of Robotics
Asimov’s Three Laws of Robotics and their relevance have been tested for years, for example by human-controlled military drones: on the one hand, they save lives; on the other, they kill.
The paradox of Asimov’s First Law is that such harm can be viewed not as the act of a robot but as resting on human shoulders, since a human controls the robot in this case.
The conclusion: armies equipped with combat robots would suffer significantly fewer human casualties, so using robots as cannon fodder is a potential answer to many large-scale conflicts.
In the case of helping the elderly, a robot can be programmed to provide only minimal assistance so that the person maintains their independence, which for many older people is very important. But people can make their own decisions, including ones that might lead to self-harm, such as a fall, and a robot that lets its charge make an independent decision resulting in injury from a fall would violate the First Law by omission.
What are the solutions?
The problem with the Three Laws of Robotics lies in their ambiguous wording and guidelines. Researchers at Texas A&M University and Ohio State have reviewed Asimov’s Three Laws, trying to correct the ambiguities in their language, and have presented principles of their own whose main goal is empowerment, the opposite of helplessness.
Scientists are designing and modeling situations in which robots use the principle of empowerment across various scenarios and can act in a surprisingly “natural” way: protecting and supporting humans while staying safe and autonomous enough to maintain their own capabilities, such as retaining enough energy to work or avoiding getting stuck and damaged.
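The empowerment idea can be illustrated with a toy model. The grid, horizon, and reachable-set measure below are illustrative assumptions, not the researchers' actual formulation (which defines empowerment information-theoretically); here we use the number of distinct cells reachable within a few steps, which coincides with that definition for deterministic dynamics. An agent preferring high-empowerment cells naturally avoids dead ends and thus "avoids getting stuck":

```python
# A toy illustration (not the researchers' actual model) of empowerment:
# the agent prefers states from which many distinct futures remain
# reachable, so it naturally stays out of dead ends.

from math import log2

# 0 = free cell, 1 = wall; an open area on the left, a dead-end corridor lower right.
GRID = [
    [0, 0, 0, 1],
    [0, 0, 0, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 0],
]
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # right, left, down, up, stay

def neighbors(cell):
    """Cells reachable in one move (walls and grid edges block movement)."""
    r, c = cell
    for dr, dc in MOVES:
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
            yield (nr, nc)

def empowerment(cell, horizon=3):
    """log2 of the number of distinct cells reachable within `horizon` steps
    (equal to the n-step channel capacity for deterministic dynamics)."""
    reachable = {cell}
    for _ in range(horizon):
        reachable |= {n for c in reachable for n in neighbors(c)}
    return log2(len(reachable))

# The open area offers far more reachable futures than the dead-end corridor.
print(empowerment((1, 1)), ">", empowerment((3, 3)))
```

A robot choosing actions to keep its own empowerment high keeps batteries charged and exits clear, while choosing actions that keep the human's empowerment high supports the person without overriding their choices.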
In place of the old laws of robotics, the scientists have proposed new ones:
- A human may not deploy a robot without a working human-robot system. Roboticists and manufacturers must be held legally, professionally, and ethically accountable for the robot’s actions.
- A robot must respond to humans according to their roles. Ill-considered instructions can cause chaos without violating the Three Laws, but blind obedience is not desirable either.
- A robot must maintain sufficient autonomy to protect its own existence, yet regardless of the circumstances, humans must always be able to take over if necessary.
For many decades, Asimov’s Three Laws of Robotics have been a constant for roboticists, and scientists worldwide are working to improve how human-robot systems and artificial intelligence technology operate. Although empowerment offers a new way of thinking about safe robot behavior, much work remains before it can be easily deployed on any robot and translated into sound, safe behavior in every respect. It is a very challenging task, but we firmly believe that empowerment can lead us to a practical solution to a current and much-debated problem.