December 18, 2017


Sophia, the social humanoid robot, has featured heavily in the news recently for how advanced and human-like she is, with her conversational talents and expressive (if thoroughly unsettling) facial features. As if her existence couldn't be any more newsworthy, she is now the first robot citizen of a country: she has been granted Saudi Arabian citizenship and is an actual legal person. Many pointed out the irony of a robot being granted citizenship in a country that is, according to independent watchdog Freedom House, the 10th worst in the world for civil liberties and political rights.


Beyond that, Sophia brings an interesting point of discussion to the fore: legal rights for robots. Law and AI intersect in several ways (the future implementation of AI in the legal sector, for example), but robot rights are perhaps one of the more interesting subject matters. Fascinatingly, there's more thought and debate surrounding this topic than you'd probably imagine, and justifiably so. Technology is developing at an exponential rate, and machines are getting closer to attaining human sophistication. Robots may become conscious in the future, and it would be 'arguably unethical to deny protections or rights to sentient, autonomous creatures', as science and science fiction writer and blogger Joelle Renstrom suggests. Moreover, as robots develop more human-like abilities and characteristics, we may be forced to 'look upon them as social equals, and not just pieces of property', as proposed by George Dvorsky, a contributing editor at a design, technology, science and science fiction site.




Giving robots legal rights doesn't mean we have to worry about giving them full human rights now. There is a lot of debate about the standards a robot would have to meet to be considered for full human rights (consciousness, self-awareness and sentience are the most commonly cited criteria), and more importantly, there's a long way to go before a fully conscious or sentient robot can be developed. However, it's worth discussing some legal rights for robots: perhaps immunities or protections, not freedoms, as suggested by robot ethics expert and MIT Media Lab researcher Dr Kate Darling, who investigates social robotics and conducts experimental studies on human-robot interaction. Animal rights offer a good parallel, since animals are legally protected from inhumane treatment. Sure, that magnificent fluffy sheepdog of Dulux advert fame might not be able to vote, but it'd still be illegal for its owner to neglect or mistreat it. As Joelle Renstrom points out, 'Even though robots can't feel pain the way animals can, such protections make sense because they discourage mistreatment and get us thinking about our obligations to robots, which may be crucial when they become more advanced.'


The existence of animal rights is a fitting reminder that giving non-humans rights isn't a completely foreign and bewildering concept. Animals aside, take corporations, for example: once a company is registered in the UK, it is a legal entity with its own legal rights and obligations. Non-human entities such as firms and government agencies are already recognised as legal persons in this way.


Another interesting argument for robot rights arises in Dr Darling's paper 'Extending Legal Rights to Social Robots': protecting 'societal values'. She uses the example of parents who tell their child not to kick a robotic pet. There is, of course, the expense of buying a new toy to consider, but more importantly, no one wants the child picking up the habit of kicking the robot: a child who kicks a robot might just as easily feel free to kick another child.



It's a fact that robots are only going to get more advanced. They may one day achieve consciousness, and since it's arguably unethical to deny protections or rights to autonomous, sentient creatures, we'd have legal adjustments to make. However, should giving robots legal rights be a priority at all? No doubt it's a relevant matter, but it may be premature to consider it in a world where animals aren't even fully protected yet. Lawyer and author Wesley J. Smith argues that machines should never be considered to bear rights because they are only the 'sum of their programming'. More pointedly, he argues that we haven't even attained universal human rights, so it's far too early to be concerned about robot rights. Sounds like a compelling argument.


Another argument against giving robots legal rights comes from computer science professor Joanna Bryson, one of several experts who believe robots should be 'slaves'. The abstract of her paper, aptly titled 'Robots Should Be Slaves', states that they shouldn't 'be described as persons, nor given legal nor moral responsibility for their actions. Robots are fully owned by us. We determine their goals and behaviour, either directly or indirectly through specifying their intelligence or how their intelligence is acquired. In humanising them, we not only further dehumanise real people, but also encourage poor human decision making in the allocation of resources and responsibility.' Dr Bryson believes that giving robots rights is dangerous because it paints humans and robots as equals, when robots should instead be used 'to extend our own abilities and to address our own goals.'




The arguments outlined above are only the tip of the iceberg when it comes to the robot rights debate. There is a great deal of uncertainty surrounding legal rights for robots as they exist today, and the arguments get more complex and varied when it comes to future robots with prospective consciousness and self-awareness. There's a lot of discourse about what traits qualify an entity for moral consideration and, by extension, legal rights.


There's the possibility that a robot could merely be acting on its programming rather than being self-aware and conscious, so ways to measure consciousness will have to be developed before the kinds of rights owed can be decided. After this, robots would have to pass a personhood test; no such test is universally agreed upon, but proposed versions usually include qualities like a sense of past and future, concern for others, free will, a minimal level of intelligence, and self-control. Only after passing such a test could we seriously consider human rights for AI.



City, University of London
Clerkenwell, London

©2017 City, University of London Student's Union Law Society 
Image copyrights Rajas R. Chitnis, unless otherwise stated.