We need robots to have morals. Could Shakespeare and Austen help?

John Mullan, professor of English literature at University College London, wrote an article in the Guardian titled “We need robots to have morals. Could Shakespeare and Austen help?”.

Using great literature to teach ethics to machines is a dangerous game. The classics are a moral minefield.

When he wrote the stories collected in I, Robot in the 1940s, Isaac Asimov imagined a world in which robots do all humanity’s tedious or unpleasant jobs, but where their powers have to be restrained. They are programmed to obey three laws: a robot may not injure a human being, even through inaction; a robot must obey human beings (except where obedience would conflict with the first law); and a robot must protect itself (unless doing so conflicts with either of the first two laws).
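One way to picture the three laws is as a lexicographic preference over candidate actions: harm to a human is an absolute bar, while obedience and self-preservation act as tie-breakers, in that order. The sketch below is a toy illustration in Python, not Asimov’s formulation or anything from real robotics; the predicates harms_human, obeys_order and preserves_self are hypothetical stand-ins for judgments no program can actually make this cleanly.

```python
# A toy reading of Asimov's three laws as a lexicographic filter:
# the First Law is a hard constraint, and the Second and Third are
# successive tie-breakers among the actions that remain.

def choose_action(candidates, harms_human, obeys_order, preserves_self):
    # First Law: discard any action that injures a human, even inaction.
    safe = [a for a in candidates if not harms_human(a)]
    if not safe:
        return None  # no action survives the First Law
    # Second Law, then Third: among safe actions, prefer obedience,
    # then self-preservation (True sorts above False inside a tuple).
    return max(safe, key=lambda a: (obeys_order(a), preserves_self(a)))
```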

Unfortunately, scientists soon create a mind-reading robot (Herbie) that, as a result, understands the concept of “mental injury”. Like a character in a Thomas Hardy novel or an Ibsen play, the robot finds itself in a situation where truthfully answering a question put to it by the humans it serves will cause hurt – but so will not answering it. A logical impasse. The robot screams piercingly and collapses into “a huddled heap of motionless metal”.
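Herbie’s dilemma drops straight out of that toy model: if every option, inaction included, counts as injuring a human, the filter returns nothing at all. A hypothetical run, reusing choose_action from the sketch above:

```python
# Both of Herbie's options inflict "mental injury", so the First Law
# empties the candidate set and no lawful action exists.
options = ["answer truthfully", "stay silent"]
result = choose_action(
    options,
    harms_human=lambda a: True,                      # truth hurts; so does withholding it
    obeys_order=lambda a: a == "answer truthfully",  # the humans asked a direct question
    preserves_self=lambda a: True,
)
print(result)  # None: the logic has nowhere to go
```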

As we enter what many predict will be a new age of robotics, artificial intelligence researchers have started thinking about how to build a better version of Herbie. How might robots receive an education in ethical complexity – how might they acquire what we could call a conscience? Experts are trying to teach artificial intelligences to think and act morally. What examples can be fed to robots to teach them the right kind of behaviour?

READ MORE: https://www.theguardian.com/commentisfree/2017/jul/24/robots-ethics-shakespeare-austen-literature-classics
