The Trial of Brobot

This paper takes a theoretical look at the legal personhood of AI. It was written in collaboration with Ritika Gopal, Associate Counsel at SXSW. You can find her at ritikagopal.com.

Introduction

Technology is growing more sophisticated at an incredible rate. There are people alive today who have lived through the invention of flight, the moon landing, the iPhone, and DeepMind.[i] Technology has moved beyond tools of human advancement to full-on integration with the human body.[ii] The potential for body modifications to surpass the abilities of the natural human state is already here.[iii] Likewise, artificial intelligence is advancing at an incredible rate[iv] and is getting closer to modeling the human learning process.[v] Little thought has been given to whether or how a conglomeration of these advancing technologies would accumulate rights, or whether rights can already be conferred under existing legal theories (largely because of the unanticipated rate of development). This memo explores legal theories establishing possible rights of robots.

I. An Overview of the Robotic Technologies

  A. The “Parts”

  1. Nervous System Integrated Technologies 

The spectrum of available biotechnology is widespread and growing. Cochlear implants replace the functionality of the hair cells of the inner ear, which normally translate vibrations from the eardrum into nerve signals. The implant sends a signal directly to the auditory nerve, and the brain interprets the signals as decipherable sound.[vi]

Like cochlear implants, the visual prosthesis works by sending signals from a camera feed directly to the optic nerve. The current technology is limited to sixty electrodes, which limits the resolution of the visual data available for the brain to process.[vii] Currently, people using visual prostheses can see general shapes as well as distinctions in light. The full extent of the device's capability is still being studied, since the brain takes time to adjust to the information and patients have different outcomes when using the devices.[viii] It seems likely that the technology will only improve with time, and its limits are unknown.

Similar technology allows people without limbs to be fitted with prostheses that include a sense of touch.[ix] It, too, works by fitting directly to the functional nerves in the patient's affected limb; the brain learns to interpret signals sent by a processor that manages the prosthesis's movements. Interestingly, there have even been advances that have allowed paralyzed monkeys to regain motor function.[x] There is also technology, currently prototyped and tested, that improves human motor functions beyond normal limits.[xi]

    2. External Technology with Functional Replacement 

People with nonverbal autism rely on tools, including technology-based ones, to communicate with others.[xii] Unlike tools for the deaf or mute that facilitate the expression of language already mentally formed, these tools mediate communication between the person with autism and a verbal person. In other words, they provide the functionality of language expression itself, where the individual would otherwise have to resort to gesturing.[xiii]

There are several technologies already on the market that can supplement standard biological functionality.[xiv] They serve as aids both to fully functional people and to people with a variety of disorders or injuries. Many of these technologies aim to replicate tasks that we perform on a regular basis, and they have led researchers to attempt to mimic the entirety of human functionality.

    3.  Artificial Intelligence 

Artificial intelligence is an attempt to synthetically recreate the human mind and its thought processes.[xv] The technology is still a long way from matching human intelligence, but that is not to say it cannot match up on specific tasks. AI need not be a full recreation of a human mind; it can include simple technology such as calculators or GPS-enabled maps.[xvi] Researchers are also attempting to mimic the sensation of pain in AI as a beneficial operating function.[xvii]

  B. The “Whole” 

The examples above reflect a growing acceptance of tasking tech to perform traditionally human functions. Now, certain biotechnologies can account for things like human perception, sensory feeling, function, and more. As use of such technologies grows, the logical conclusion of replacing human body parts with tech, then, is a “human” that is entirely comprised of robot parts. These robots–popularly known as androids–have no element of life to them, yet bear striking resemblance to humans. Extending beyond physical similarity to humans, they are increasingly able to “mimic lifelike behavior, react to social gestures and use sounds, movement, and facial expressions to signal emotions in a way that we immediately recognize, [possibly targeting] our involuntary biological responses.”[xviii] As legal scholar Kate Darling puts it:

In general, as more robots enter our lives and our homes, we are experiencing an increase in robots designed to engage us socially. This trend is not likely to slow. Social abilities will continue to improve, and robotic companions will become more common as technology advances. Makers of toys, for example, have been working for decades to increase interactivity and engage children by creating the illusion of intentional behavior in robotic playthings. This type of interactivity design is not restricted to children’s markets, and has multiple potential uses.[xix]

As technology is further advanced and integrated with human life, it is easy to imagine a reality in which these human-like, “social” robots are remarkably sentient and autonomous. In fact, one could argue that robot sentience already exists in a sense today, albeit in a cruder form. For example, deep learning, the narrow subset of machine learning driving the current AI boom, solves problems by tapping into neural networks that simulate human decision-making.[xx] There is a demand for robots that we can relate to, and the numbers speak for themselves: in 2015, the world market for service robots was estimated to be worth nearly $3.7 billion, and revenues are forecast to rise to around $15 billion by 2020.[xxi]

Popular examples of “social” robots that are currently on the market or in development today include: the robotic seal Paro, a therapeutic device used in nursing homes that reacts to touches, words, and people’s actions and learns individual voices[xxii]; Rex, the world’s first bionic man complete with artificial organs, synthetic blood and robotic limbs[xxiii]; IBM’s Multi-Purpose Eldercare Robot Assistant, an intentionally cute, anthropomorphic robot that functions to help elderly people in their homes[xxiv]; Honda’s ASIMO robot, a humanoid robot capable of human-like running[xxv]; Toyota’s Kirobo Mini, a miniature talking robot meant to keep drivers company by taking into account and responding to its companion’s likes, dislikes, and facial expressions.[xxvi] The benefits of such technologies must not be understated: These robots interact with humans on a social level, mimicking intentional human-like behavior and decision-making as a means towards bettering human life and society.

People are already eager to project emotion onto superficially humanized robots, and the day is not far off when we will integrate these robots with actually functional biological replacements–i.e., giving Paro a feeling arm, seeing eyes, and a sense of pain.

   C. Is the Sum of “Parts” Greater than the “Whole”?  

Here is where the law meets psychology. In the words of Gestalt psychologist Kurt Koffka, “The whole is other than the sum of the parts.” That is, the “whole” is imbued with a reality of its own, independent of the “parts” that compose it. An individual is more than just a collection of body parts and organs. Applied to the wide range of robot technologies described above, several questions arise: At what point along the spectrum does a robot attain its own so-called reality, similar to that of a human? Looked at through the lens of law and policy, at what stage of intelligence design should a robot be considered as possessing legal personhood and agency?

Consider the following thought experiment: If a person loses an arm due to an accident, he is still considered a “person” in the eyes of the law. That certainly does not change if he proceeds to lose an eye, another arm, both legs. Let us assume that he has had a very unfortunate life and has lost functionality in almost every body part and organ. Let us also assume that biotechnology is incredibly advanced. If he were to replace everything except his brain with biotechnologies, he would likely still be considered a “person,” as he can still think and feel like a human–much like RoboCop.

The human brain processes and reacts to external stimuli and makes decisions based on past experiences. Many advanced machine processors do just the same, and it is harder than most people think to conceptually differentiate between the two. One could argue that the brain is distinguished from a processor by emotion, fear, or self-preservation, the qualities generally thought to contribute to personhood. However, Ashlyn Blocker and Olivia Farnsworth, girls with rare genetic disorders that prevent them from feeling pain, are direct counterexamples to that notion. Olivia Farnsworth was run over and dragged down the street by a car, yet got up without complaint and walked back to her mother, confused as to why she was screaming.[xxvii] Her brain does not have the same conception of self-preservation and fear of danger as a typical human brain, though it is safe to say that she is still a “person” in the eyes of the law. The primary issue, therefore, is whether there are legally cognizable, substantive differences that distinguish a human brain from a humanoid processor.

II. Legal Ramifications: Personhood and Agency

  A.  Upgraded Persons 

Current technology already allows for the replacement of body parts at near-normal performance levels, and replacement parts with superior performance are on the horizon. As these technologies are implemented, questions arise about their legal ramifications.

    1. Product Liability for Defective Design 

One area of law affected by such technologies is product liability under tort law. Products liability law applies a pair of tests to allegedly defective designs.

Products liability may be premised upon a theory of design defect, manufacturing defect, or failure to warn. … Defective design may be established under two theories: (1) the consumer expectations test, which asks whether the product performed as safely as an ordinary consumer would expect when used in an intended and reasonably foreseeable manner; or (2) the risk/benefit test, which asks whether the benefits of the challenged design outweigh the risk of danger inherent in the design.[xxviii]

These tests can become complicated quickly–especially when applied to technologies whose sole function is to simulate the human sensory experience, including pain. When a person loses a limb, it goes without saying that they have suffered a great loss, and restoration of that loss is something anyone would long for. If a person could simply regrow their limb, they would, and the regrown limb would include the sensation of pain. Yet whether including the sensation of pain in a prosthesis is a valuable and reasonable design aim is a question anyone would struggle to answer; there are reasonable arguments on both sides.

    2. Privacy Considerations 

The Supreme Court has ruled on a case in which thermal imaging was used to establish probable cause that a person was growing marijuana inside his home.

We have said that the Fourth Amendment draws “a firm line at the entrance to the house.” That line, we think, must be not only firm but also bright—which requires clear specification of those methods of surveillance that require a warrant. While it is certainly possible to conclude from the videotape of the thermal imaging that occurred in this case that no “significant” compromise of the homeowner’s privacy has occurred, we must take the long view, from the original meaning of the Fourth Amendment forward. … Where, as here, the Government uses a device that is not in general public use, to explore details of the home that would previously have been unknowable without physical intrusion, the surveillance is a “search” and is presumptively unreasonable without a warrant.[xxix]

One can imagine a scenario in which a bionic eye has infrared capabilities, such as the ability to see at night. If that sort of technology becomes common among people experiencing blindness, the government may find itself no longer using “a device that is not in general public use.” Additionally, imagine a police officer who has lost an eye and is offered a bionic replacement by insurance. It may be wrong to limit the officer’s options to a lesser technology for the sake of Fourth Amendment concerns against unreasonable searches. Or, if that technology seems too outlandish, imagine instead cochlear implants reaching a point where they can hear things in greater detail than any biologically standard person, and an officer overhears a defendant confessing to his attorney. These considerations will only broaden as time goes on.

    3. The Americans with Disabilities Act  

What will happen as these bionics progress from crutch to advantage? Under the Act, “[a]n individual with disabilities is defined as: 1. A person who has a physical or mental impairment that substantially limits one or more major life activities; 2. A person who has a record of such an impairment; or 3. A person who is regarded as having such an impairment.”[xxx] At a certain point, hiring a police officer with a cochlear implant will be more enticing than hiring an officer with regular hearing. Airlines may begin hiring pilots with better-than-standard vision. It may become the case that people without bionics will be “regarded as having such an impairment,” but the impairment will be a result of economics rather than a consequence of biology. Biologically standard people may well become a protected class.

These concerns may even strengthen an argument for limiting the development of robots to strictly human capacities, in which case, we have some other questions to answer.

  B. New Persons 

This year, the legal affairs committee of the European Parliament voted, 17-2, to put together “a set of regulations to govern the use and creation of robots and artificial intelligence, including a form of ‘electronic personhood’ to ensure rights and responsibilities for the most capable A.I.”[xxxi] Such legislative thinking would have been unthinkable in the 1980s or 1990s. The 1987 case Comptroller of the Treasury v. Family Entertainment Centers illustrates how far perceptions about robotics have come.

In that case, a Maryland special appeals court had to consider whether the life-sized, animatronic puppets that dance at Chuck E. Cheese children’s restaurants trigger a state tax on food “where there is furnished a performance.”[xxxii] Reasoning that a performance “has connotations of inherent human input that leaves room for spontaneous imperfections during the exhibition of skills or talent,” the court determined that the Chuck E. Cheese robots were outside of the scope of the statute, despite acknowledging that they “are designed to give the impression that they are performing.”[xxxiii] The court stated, in pertinent part:

[A] pre-programmed robot can perform a menial task but, because a pre-programmed robot has no “skill” and therefore leaves no room for spontaneous human flaw in an exhibition, it cannot “perform” a piece of music . . . Just as a wind-up toy does not perform for purposes of [the statute,] neither does a pre-programmed mechanical robot.[xxxiv]

Today’s robots have clearly surpassed Chuck E. Cheese-era animatronics. At this juncture, the overall question threading this analysis is whether and how legal conceptions of personhood change when “parts” are ratcheted up to a new, much more sentient “whole.” In other words, when is personhood gained or lost? And relatedly, do both occur at the same point? Felons, for example, can lose key aspects of legal personality including the right to vote, the right to serve on a jury, and the right to own and purchase firearms. In some cases, they can be forced to join a registry (e.g., sex offenders). People who are placed under guardianship can also lose many rights. These examples illustrate that there is a gradation to legal personhood. It is somewhere along that gradation that robots lie.

Minor children are a prime example of quasi-persons, as they do not enjoy the full rights of personhood that adults do. They cannot become parties to contracts or enter into various other legal arrangements; in this sense, they are not legal persons. Yet the killing of a child is considered murder in the same way that the killing of an adult is, and in this sense a child is a legal person. In Garratt v. Dailey, the Washington Supreme Court had to consider when a child’s decision can be considered legally binding.[xxxv] There, a child had pulled a chair out from under a woman, causing her to sustain serious injuries.

The court had to determine whether the child knew, with substantial certainty, at the time he removed the chair, that the woman would attempt to sit down where the chair had been. In the court’s words:

[T]he case should be remanded for clarification of the findings to specifically cover the question of Brian’s knowledge, because intent could be inferred therefrom. If the court finds that he had such knowledge the necessary intent will be established and the plaintiff will be entitled to recover, even though there was no purpose to injure or embarrass the plaintiff. Vosburg v. Putney, supra.[xxxvi]

Crucially, the court inferred intent from knowledge, and from a minor’s knowledge at that. Applied to robots, it seems reasonable to conclude that some robots will eventually become a sort of quasi-person or quasi-agent in the eyes of the law before reaching full personhood. Corporations, too, are considered legal persons.[xxxvii] The larger point is that the sophistication of such robots warrants placing them on the spectrum of legal personality.

Even absent the full intelligence of a person, the law extends protections to creatures of limited intelligence and functionality through animal cruelty laws.[xxxviii] The inclination to protect living beings from pain or cruel mistreatment indicates a conceptual desire to shield things that feel pain from situations under our immediate control. Case law contains no discussion of the moral basis for affording this protection to animals despite their lack of personhood,[xxxix] perhaps because it is so intuitive that inflicting wanton pain and suffering on an animal is wrong. While that silence does not elucidate the legal motivations behind animal cruelty protections, the fact that a line is drawn against causing pain to a creature implies a generally agreed-upon notion against inflicting pain unnecessarily. If developing pain processors for robots can be a beneficial function, as discussed above, or has some higher moral relevance for arguments of inclusion, then it stands to reason that the protection against pain should probably extend to robots.

III. Conclusion

No one would argue that a person who has lost a limb, or a person with nonverbal autism, is any less of a person, but the fact that we strive to replace these functions speaks to how each function contributes to personhood. At what point does the accumulation of functions contributory to personhood equate to personhood itself? And once it does, does the loss of a solitary function revert the new person back to non-personhood? Does granting personhood to a nontraditional person somehow diminish the value of personhood itself? No one needs to answer these questions just yet, but they should be considered while there is still time to do so without ethical pressures.

IV. Sources

[i] John M. Adams, “GRG World Supercentenarian Rankings List” Gerontology Research Group (Feb 23, 2017), http://supercentenarian-research-foundation.org/TableE.aspx.

[ii] “Girl Born Deaf Given The Gift Of Hearing With Cochlear Implants”, CBS New York (February 20, 2017),  http://newyork.cbslocal.com/2017/02/20/girl-born-deaf-given-the-gift-of-hearing-with-cochlear-implants/.

[iii] Dan Kedmey, “This Bionic Lens Could Give Everyone Perfect Vision”, Time (May 22, 2015), http://time.com/3894170/bionic-eye/.

[iv] Jack Clark, “Why 2015 Was a Breakthrough Year in Artificial Intelligence”, Bloomberg (December 8, 2015), https://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence.

[v] Sam Shead, “Google DeepMind: What is it, how does it work and should you be scared?”, Techworld (Mar 15, 2016), http://www.techworld.com/personal-tech/google-deepmind-what-is-it-how-it-works-should-you-be-scared-3615354/.

[vi] “Understanding Cochlear Implants”, WebMD, http://www.webmd.com/healthy-aging/understanding-cochlear-implants.

[vii] Jim Stingl, “Blind the past 25 years, Wauwatosa man fitted with bionic eye to stimulate sight”, Journal Sentinel (Feb. 25, 2017), http://www.jsonline.com/story/news/columnists/jim-stingl/2017/02/25/blind-past-25-years-wauwatosa-man-fitted-bionic-eye-simulate-sight/98363052.

[viii] Nicole Kobie, “Blind NHS patients to get bionic eyes”, ITPRO (22 Dec, 2016), http://www.itpro.co.uk/public-sector/healthcare/27828/blind-nhs-patients-to-get-bionic-eyes.

[ix] James Gallagher, “Bionic arm restores sense of feeling”, BBC Health (9 October 2014), http://www.bbc.com/news/health-29538385.

[x] Rae Ellen Bichell, “Monkeys Regain Control Of Paralyzed Legs With Help Of An Implant”, NPR (November 9, 2016), http://www.npr.org/sections/health-shots/2016/11/09/501029887/monkeys-regain-control-of-paralyzed-legs-with-help-of-an-implant.

[xi] Gregory Mone, “Building the Real Iron Man”, Popular Science (April 9, 2008), http://www.popsci.com/scitech/article/2008-04/building-real-iron-man.

[xii] “Proloquo2Go”, http://www.assistiveware.com/product/proloquo2go.

[xiii] Lisa Jo Rudy, “About a Third of People with Autism Use Little or No Spoken Language” Verywell (December 30, 2016), https://www.verywell.com/what-is-nonverbal-autism-260032.

[xiv] Jeannie Krull, “Brain Injury and Assistive Technology: 10 Devices for Memory Loss”, Assistive Technology (August 14, 2014), http://ndipat.org/blog/brain-injury-and-assistive-technology-10-devices-for-memory-loss/.

[xv] Cade Metz, “The Best AI Still Flunks 8th Grade Science” Wired (Feb. 16, 2016), https://www.wired.com/2016/02/the-best-ai-still-flunks-8th-grade-science/.

[xvi] Arend Hintze, “Understanding the four types of AI, from reactive robots to self-aware beings”, The Conversation (November 13, 2016), http://theconversation.com/understanding-the-four-types-of-ai-from-reactive-robots-to-self-aware-beings-67616.

[xvii] Evan Ackerman, “Researchers Teaching Robots to Feel and React to Pain” IEEE Spectrum (24 May 2016), http://spectrum.ieee.org/automaton/robotics/robotics-software/researchers-teaching-robots-to-feel-and-react-to-pain.

[xviii] Darling, Kate, Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects (April 23, 2012), 7. Robot Law, Calo, Froomkin, Kerr eds., Edward Elgar 2016; We Robot Conference 2012, University of Miami [hereinafter referred to as Extending Legal Protection to Social Robots].

[xix] Darling, Extending Legal Protection to Social Robots at 3 (internal citations omitted).

[xx] Hope Reese, “Understanding the differences between AI, machine learning, and deep learning”, TechRepublic (February 23, 2017), http://www.techrepublic.com/article/understanding-the-differences-between-ai-machine-learning-and-deep-learning/.

[xxi] Wilmer Zhou, “Robots Are Going Their Two Separate Ways”, Robotics Tomorrow (Feb. 23, 2017), http://www.roboticstomorrow.com/article/2017/02/robots-are-going-their-two-separate-ways-/9541.

[xxii] Anne Tergesen & Miho Inada, “It’s Not a Stuffed Animal, It’s a $6,000 Medical Device”, The Wall Street Journal (June 21, 2010), https://www.wsj.com/articles/SB10001424052748704463504575301051844937276.

[xxiii] Barbara Ortutay, “‘Bionic Man’ walks, breathes with artificial parts”, NBC News (Oct 11, 2013), http://www.nbcnews.com/technology/bionic-man-walks-breathes-artificial-parts-8c11377326.

[xxiv] Chris Weller, “IBM is working on a robot that takes care of elderly people who live alone”, Business Insider (Dec. 28, 2016), http://www.businessinsider.com/ibm-pepper-robot-elder-care-2016-12.

[xxv] “Frequently Asked Questions”, Asimo, https://asimo.honda.com/downloads/pdf/asimo-technical-faq.pdf.

[xxvi] Jethro Mullen, “Toyota wants this baby robot to be your friend”, CNN Tech (October 4, 2016), http://money.cnn.com/2016/10/03/technology/toyota-robot-kirobo-mini/.

[xxvii] “Girl With Rare Chromosome Condition Dubbed ‘Bionic’ As She Rarely Sleeps Or Eats And Doesn’t Feel Pain”, The Huffington Post (18 January 2016), http://www.huffingtonpost.co.uk/2016/01/18/bionic-girl-chromosome-6-deletion-_n_9007464.html.

[xxviii] Saller v. Crown Cork & Seal Co., Inc., 187 Cal. App. 4th 1220, 1231–32 (2010).

[xxix] Kyllo v. United States, 533 U.S. 27, 40 (2001) (internal citations omitted).

[xxx] Covered Persons—Disability Defined, Legal Almanac: The Americans With Disabilities Act § 2:5.

[xxxi] Alex Hern, “Give robots ‘personhood status’, EU committee argues”, The Guardian (12 January 2017), https://www.theguardian.com/technology/2017/jan/12/give-robots-personhood-status-eu-committee-argues.

[xxxii] Comptroller of the Treasury v. Family Entm’t Ctrs., 519 A.2d 1337, 1338 (Md. Ct. Spec. App. 1987).

[xxxiii] Family Entm’t Ctrs., 519 A.2d at 1339.

[xxxiv]  Id.

[xxxv] Garratt v. Dailey, 46 Wash. 2d 197 (1955).

[xxxvi] Dailey, 46 Wash. 2d at 200.

[xxxvii] See First Nat. Bank of Boston v. Bellotti, 435 U.S. 765, 777 (1978) (“The inherent worth of the speech in terms of its capacity for informing the public does not depend upon the identity of its source, whether corporation, association, union, or individual.”); see also Citizens United v. Federal Election Comm’n, 558 U.S. 310 (2010) (where the Court implicitly recognized a corporation’s free speech rights as a legal person).

[xxxviii] Nathan Heller, “If Animals Have Rights, Should Robots?” The New Yorker (November 28, 2016), http://www.newyorker.com/magazine/2016/11/28/if-animals-have-rights-should-robots.

[xxxix] See Gianna M. Ravenscroft, “Overview of Texas Animal Cruelty Laws”, Animal Legal and Historical Center (2002), https://www.animallaw.info/article/overview-texas-animal-cruelty-laws.