Artificial intelligence should be protected by human rights, says Oxford mathematician. Machines have feelings too.
With huge leaps taking place in the world of artificial intelligence (AI) right now, experts have started asking questions about the new forms of protection we might need against the formidable smarts and potential dangers of the computers and robots of the near future.
But do robots need protection from us too? As the ‘minds’ of machines evolve ever closer to something that’s hard to tell apart from human intelligence, new generations of technology may need to be afforded the kinds of moral and legal protections we usually think of as ‘human’ rights, says mathematician Marcus du Sautoy from the University of Oxford in the UK.
Du Sautoy thinks that once the sophistication of computer thinking reaches a level basically akin to human consciousness, it’s our duty to look after the welfare of machines, much as we do that of people.
“It’s getting to a point where we might be able to say this thing has a sense of itself, and maybe there is a threshold moment where suddenly this consciousness emerges,” du Sautoy told media at the Hays Festival in Hay-on-Wye, Wales this week. “And if we understand these things are having a level of consciousness, we might well have to introduce rights. It’s an exciting time.”
Du Sautoy thinks the conversation about AI rights is now necessary due to recent advancements made in fields such as neuroscience. The mathematician, who appeared at the literature festival to promote his new book, What We Cannot Know, says new techniques have given us a clearer understanding than ever before of the nature of mental processes such as thought and consciousness – meaning they’re no longer reserved solely for philosophers.
“The fascinating thing is that consciousness for a decade has been something that nobody has gone anywhere near because we didn’t know how to measure it,” he said. “But we’re in a golden age. It’s a bit like Galileo with a telescope. We now have a telescope into the brain and it’s given us an opportunity to see things that we’ve never been able to see before.”
That greater insight into what consciousness is means we should respect it in all its forms, du Sautoy argues, regardless of whether its basis for being is organic or synthetic.
While the notion of a machine being protected by human rights sounds like something out of science fiction, it’s actually a fast-approaching possibility that scientists have speculated about for decades. The big question remains: when will computer systems become so advanced that their artificial consciousness ought to be recognised and respected?
Various commentators put the timeframe from 2020 through to some time in the next 50 years, although the rapid pace with which AI is progressing – be that playing games, learning to communicate, or operating among us undetected – means that nobody really knows for sure.
Du Sautoy can’t say when the time will come either – just that when it does, as the title of his book suggests, it will present another set of unsolvable mysteries.
“I think there is something in the brain development which might be like a boiling point. It may be a threshold moment,” du Sautoy said. “Philosophers will say that doesn’t guarantee that that thing is really feeling anything and really has a sense of self. It might be just saying all the things that make us think it’s alive. But then even in humans we can’t know that what a person is saying is real.”
By Peter Dockrill
4 thoughts on “Artificial intelligence should be protected by human rights”
De jure, are companies also persons, juristic persons…
Exactly. Frankly, we need to abandon those roots before they cause any more damage, because they’re just no good in the modern age. Creating the sort of conflict we’ve had historically can be devastating. The same reason AI is a contentious subject in philosophy and law is the same reason it’s so susceptible to passive influence from us. Much like the culturally transmitted compositions of memes that constitute our minds, the mind of an AI would be an extension of its progenitors. Under the proper philosophy it can be a profoundly powerful boon for humanity, an evolutionary catalyst in many senses. But we’re also getting to the point, technologically, where if we repeat our historical conflicts with widespread AI in the picture, our destructive potential would be an existential threat.
Absolutely. But this does cause some issues with ‘property rights’ if an AI cannot be owned. Creating a superior intelligence purely for slavery is a very bad idea, but it is also a bit tricky to determine at what point an AI is evolved enough for various degrees of independence.
I totally agree with Dockrill. As strong AI is developed and implemented, it’s going to lead us to some philosophical, religious and existential crises, and the questions of redefining civic rights to include non-humans, and redefining life-forms to include non-organics, will require social and legal quantum leaps. But as these machines become cognizant, those rights will be necessary indeed. As their creators, our ideology and sentiment will translate into their design and evolution, so we are tasked with the immense responsibility of moulding the minds of a species. We must be ready to show them respect, trust and love if we want them to display the same.