Why the Three Laws of Robotics do not work

This paper explores the issue of safeguards around artificial intelligence. 'Strong', sentient AI is a technological innovation that could plausibly be created in the next few decades, and controls must be in place before its creation to avoid potentially catastrophic risks. Many AI researchers and computer engineers believe that the 'Three Laws of Robotics', written by Isaac Asimov, are sufficient controls. This paper aims to show that the Three Laws are inadequate to the task of protecting humanity from rogue or badly programmed AI. It examines each law individually and explains why it fails. The First Law fails because of ambiguity in language, and because of complicated ethical problems that are too complex to have a simple yes-or-no answer. The Second Law fails because of the unethical nature of a law that requires sentient beings to remain slaves. The Third Law fails because it results in permanent social stratification, with a vast amount of potential exploitation built into this system of laws. The 'Zeroth' Law, like the First, fails because of ambiguous ideology. Finally, all of the Laws fail because it is easy to circumvent the spirit of a law while still remaining bound by its letter.

This article is published in the International Journal of Research in Engineering and Innovation (IJREI), a peer-reviewed, open-access journal with a high impact factor. For more details regarding this article, please go through our journal website.


