Teaching robots right from wrong. What about morality?

American scientists are exploring the possibility of imbuing robots with an understanding of moral imperatives.
Image courtesy of Morguefile
Date: 15 May 2014

Remember sci-fi writer Isaac Asimov’s oft-quoted Three Laws of Robotics? If not, here’s a refresher:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
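Framed computationally, the laws are simply an ordered set of constraints, with a higher law always overriding a lower one. Here is a minimal, purely illustrative Python sketch of that ordering; the names and structure are invented for this article and have nothing to do with the Tufts or RPI research described below.

```python
# Illustrative only: Asimov's Three Laws treated as ordered constraints.
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    harms_human: bool = False              # would the action injure a human?
    allows_harm_by_inaction: bool = False  # would *not* acting let a human come to harm?
    ordered_by_human: bool = False         # was the action commanded by a human?
    endangers_robot: bool = False          # would the action damage the robot itself?


def permitted(action: Action) -> bool:
    """Check the laws in priority order; a higher law always overrides a lower one."""
    # First Law: never injure a human, and never allow harm through inaction.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Second Law: obey human orders (the First Law has already had its veto above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation counts only when the higher laws are silent.
    return not action.endangers_robot


# Example: an order that would hurt someone is refused outright.
print(permitted(Action("push bystander aside", harms_human=True, ordered_by_human=True)))  # False
```

Tidy as that looks, the whole point of the research below is that real moral situations rarely reduce to a clean rule hierarchy.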

So far, so good. But if you’ve seen the movie I, Robot, starring Will Smith, you will recall an apparently sentient machine agonising over a moral conflict, and here’s the thing: you actually get it. Now, researchers from Tufts University, Brown University and Rensselaer Polytechnic Institute are teaming with the US Navy to explore technology that would pave the way for developing robots capable of making moral decisions.

Tufts has announced a project funded by the US Office of Naval Research in which scientists will explore the challenges of infusing autonomous robots with a sense of right and wrong, and the consequences of both.

“Moral competence can be roughly thought about as the ability to learn, reason with, act upon and talk about the laws and societal conventions on which humans tend to agree,” says principal investigator Matthias Scheutz, professor of computer science at Tufts School of Engineering and director of the university’s Human-Robot Interaction Laboratory. “The question is whether machines – or any other artificial system, for that matter – can emulate and exercise these abilities.”

One scenario, says Scheutz, imagines a battlefield. A robot medic responsible for helping wounded soldiers is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured leg. Should the robot abort the mission to assist the injured? Will it? As the Tufts researchers tell it, if the machine stops, a new set of questions arises: “The robot assesses the soldier’s physical state and determines that unless it applies traction, internal bleeding in the soldier’s thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier pain, even if it’s for the soldier’s wellbeing?”
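The scenario boils down to weighing conflicting obligations. As a toy illustration only, the options and hand-picked weights below are invented for this article (the Tufts and Navy researchers are not proposing a crude utilitarian tally), but it shows why the trade-off is genuinely hard to encode:

```python
# Hypothetical toy model of the medic-robot dilemma; weights are arbitrary illustrations.
options = {
    "continue mission": {
        "medication delivered on time": +10,
        "marine left untreated (possible fatal bleed)": -15,
    },
    "stop and apply traction": {
        "medication delayed": -6,
        "marine's bleeding controlled": +12,
        "marine suffers intense pain": -3,
    },
}


def score(outcomes: dict) -> int:
    """Sum the hand-assigned moral weights of an option's expected outcomes."""
    return sum(outcomes.values())


for name, outcomes in options.items():
    print(f"{name}: {score(outcomes):+d}")
print("chosen:", max(options, key=lambda name: score(options[name])))
```

Of course, a number line is not moral competence: as Scheutz's definition above suggests, the robot would also need to reason about, justify and talk about its choice, not merely tally it.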

Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a post-doctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings. Since the scientific community has yet to establish what constitutes morality in humans, the challenge for Bringsjord and his team is quite daunting. The overall goal of the project, says Scheutz, “is to examine human moral competence and its components. If we can computationally model aspects of moral cognition in machines, we may be able to equip robots with the tools for better navigating real-world dilemmas”.

Of course, robotics is not all about pursuing the moral compass; it's also about building machines that do clever, difficult things that we humans cannot or don't want to do – stuff like fighting wars or searching for victims after earthquakes and man-made disasters; think Fukushima and other hot spots. Then there's a robot ape called iStruct, and a "social companion" robot called Mobiserv. Whatever it takes, guys…

Source: Tufts University