Isaac Asimov devoted a good deal of his writing career to the subject of robots, so it's safe to say he'd done quite a bit of thinking about how they would fit into the worlds he invented. In doing so, Asimov had to solve the problem of how robots would interact with humans once they had some degree of free will.

I wonder about this conceptual gap, convenient as it may be in narrative terms, given that Isaac Asimov, one of the forefathers of robot fiction, invented just such a moral code. In the video above, he outlines it (with his odd pronunciation of "robot"). The code consists of three laws; in his fiction, these are hardwired into each robot's "positronic brain," a fictional computer that gives robots something of a human-like consciousness.

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But are his three laws sufficient? Many of Asimov's stories, I, Robot, for example, turn on some failure or confusion between them.