{"id":1323,"date":"2018-06-05T18:17:38","date_gmt":"2018-06-05T18:17:38","guid":{"rendered":"https:\/\/dev-emergencejounral-english-ucsb-edu-v01.pantheonsite.io\/?p=1323"},"modified":"2022-11-01T07:17:37","modified_gmt":"2022-11-01T07:17:37","slug":"our-ai-overlord-the-cultural-persistence-of-isaac-asimovs-three-laws-of-robotics-in-understanding-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/2018\/06\/05\/our-ai-overlord-the-cultural-persistence-of-isaac-asimovs-three-laws-of-robotics-in-understanding-artificial-intelligence\/","title":{"rendered":"Our AI Overlord:  The Cultural Persistence of Isaac Asimov\u2019s Three Laws of Robotics in Understanding Artificial Intelligence"},"content":{"rendered":"<h1><span style=\"font-weight: 400;\">by Gia Jung<\/span><\/h1>\n<p>&nbsp;<\/p>\n<p><b>Introduction<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Artificial intelligence is everywhere. As a tinny voice in each phone, powering GPS, determining what appears on social media feeds, and rebelling on movie screens, artificial intelligence (AI) is a now-integral part of daily life. For an industry that has and will continue to have major potential effects on the economy through job loss and creation, huge investments, and transformation of productivity, there remains a cultural lack of understanding about the realities of AI. Scanning the news, it is clear that people are afraid and uncertain about this robotic revolution, continually talking about an oncoming technological singularity in which AI will reach hyper-intelligence, create more and more AI, and eventually take over the world. Paired with this is the expectation that AI will be human only to a malicious extent, and must therefore be controlled and restricted. In talking to Siri though, it is clear that this apocalypse is fictional at best and far off at worst. 
As created and evidenced by a malnourished representation of robots and other easily understandable notions of AI in popular fiction, there is a dearth of understanding in public consciousness about the possibilities and realities of artificial intelligence. In examining this reductive fictional perception of AI, most popular conceptions can be traced back to either Mary Shelley\u2019s <\/span><i><span style=\"font-weight: 400;\">Frankenstein <\/span><\/i><span style=\"font-weight: 400;\">or Isaac Asimov\u2019s <\/span><i><span style=\"font-weight: 400;\">I, Robot. <\/span><\/i><\/p>\n<p><span style=\"font-weight: 400;\">Historically, Asimov is undeniably important to the establishment of both the scientific and fictional realms of artificial intelligence. In May 1941 the word \u201crobotics\u201d was first used in print by Asimov in his short story \u201cLiar!,\u201d published in <\/span><i><span style=\"font-weight: 400;\">Astounding Science Fiction <\/span><\/i><span style=\"font-weight: 400;\">(OED). Upon realizing he had coined a new and lasting word, Asimov recognized the uniquely profitable position he had created for himself and, along with his successful predictions of space travel, self-driving cars, and war-computers, among others, would go on to position himself as a sort of friendly-but-rough-around-the-edges technological herald, someone entertaining, trustworthy, and often right. Throughout the enormous bulk of his work (novels, short stories, self-titled magazine, autobiographies, self-curated anthologies, essays, etc.), Asimov repeatedly brings up how he invented the term \u201crobotics,\u201d that the first real roboticist was inspired by him and the Three Laws of Robotics (a set of rules governing robot behavior), and that his contributions to the field of robotics are unparalleled, reinforcing the real-life credibility of his work and, of course, driving up book sales. 
Before he died, Asimov worked hard to cement his legacy as one of the greatest and certainly most celebrated minds in science-fiction, with the Three Laws of Robotics as his most successful invention. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">These Three Laws of Robotics were created in response to what Asimov termed the \u201cFrankenstein complex,\u201d in which all stories about robots or artificial intelligence followed the basic format of Shelley\u2019s <\/span><i><span style=\"font-weight: 400;\">Frankenstein. <\/span><\/i><span style=\"font-weight: 400;\">Tired of seeing story after story in which robots are created only to \u201cturn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust,&#8221; Asimov wrote the Three Laws to ensure human control through programmed safety protocols (<\/span><i><span style=\"font-weight: 400;\">The Rest of the Robots<\/span><\/i><span style=\"font-weight: 400;\">). First appearing explicitly in the 1942 story \u201cRunaround\u201d and serving as the basis for twenty-nine further stories, the Laws are as follows: \u201c1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.\u201d Creating a slavish hierarchy, the Three Laws \u201cprotect\u201d humanity by fettering the Frankensteinian creation\u2019s malicious intent to overthrow it. Asimov\u2019s intent was to allay fears of encroaching technology by showing how the rational logic of hard science would be able to overcome any problem it created; that technology is built as a tool, and will be wielded and maintained as such. 
Since then, Asimov\u2019s Laws and the consequent understanding of a Controlled Frankenstein have dominated popular understanding of robots and artificial intelligence, as seen in the multitudes of movies that explicitly or unconsciously represent these ideas. Of friendly AI, Asimov\u2019s favorites were <\/span><i><span style=\"font-weight: 400;\">Star Wars<\/span><\/i><span style=\"font-weight: 400;\">\u2019 C-3PO and R2-D2, but his legacy can also be seen in <\/span><i><span style=\"font-weight: 400;\">Star Trek: The Next Generation<\/span><\/i><span style=\"font-weight: 400;\">\u2019s android Data and in <\/span><i><span style=\"font-weight: 400;\">RoboCop<\/span><\/i><span style=\"font-weight: 400;\">\u2019s directives, among countless others. In addition, several representations of AI depict safety protocols that were somehow circumvented, misinterpreted, or overcome, the failure of Asimov\u2019s Laws just as impactful as their success, as in <\/span><i><span style=\"font-weight: 400;\">2001: A Space Odyssey<\/span><\/i><span style=\"font-weight: 400;\">\u2019s HAL and the film version of Asimov\u2019s <\/span><i><span style=\"font-weight: 400;\">I, Robot.<\/span><\/i><span style=\"font-weight: 400;\"> Now that robots and artificial intelligence are part of daily reality, the impact of Asimov on public perception of AI is becoming increasingly apparent in everything from rebooted 1980s tech blockbusters to explicit calls for instituting Asimov\u2019s Laws in the development of AI. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Far from the \u201cpositronic brains\u201d that allowed Asimov to easily present immediately sentient and vastly intelligent robots, current AI is far narrower and more difficult to define. 
On the research and development side of AI, Russell and Norvig\u2019s authoritative<\/span><i><span style=\"font-weight: 400;\"> Artificial Intelligence: A Modern Approach<\/span><\/i><span style=\"font-weight: 400;\"> classifies AI into four categories of \u201c(i) thinking like a human, (ii) acting like a human, (iii) thinking rationally, and (iv) acting rationally\u201d. In trying to conceive of an applicable legal definition, scholar Matthew Scherer labels AI as any system that performs a task that, if it were performed by a human, would be said to require intelligence. Defined by the Oxford English Dictionary, artificial intelligence is \u201cthe capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this.\u201d Beyond the inability to legislate something without defining it, the lack of a concrete definition for AI indicates the broad uncertainty and misinformation that dominates the landscape of artificial intelligence. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">With such anxiety-inducing ambivalence, it is fairly understandable that even now, seventy-five years after the introduction of the Laws, people are calling upon Asimov as the original solution to malevolent artificial intelligence. What many fail to realize in doing so however, is that not only do Asimov\u2019s Laws work only within the confines of a fictional technologic brain, but they are at their core deeply flawed, ambiguous notions that reveal more about society than they do answers to the problems of artificial intelligence. Critically examining Asimov\u2019s Three Laws of Robotics and their place in the daily reality of artificial intelligence allows for a better understanding of why there is such fear surrounding AI and how cultural understandings of AI as framed by Asimov can shape the future of AI for the better. 
Rather than serving as strict rules, Asimov\u2019s Laws can provide a basis for thinking about and developing broad guidelines for AI research, development, and legislation. <\/span><\/p>\n<p>&nbsp;<\/p>\n<p><b>Asimov and His Laws: Context, Creation, and Fictional Application<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Asimov\u2019s Three Laws of Robotics were first explicitly introduced in his 1942 short story \u201cRunaround,\u201d in which Robot SPD-13, aka \u201cSpeedy,\u201d is given a weak order to collect selenium on Mercury, where it encounters a harmful substance. Caught between following human orders and protecting its own existence, Speedy is unable to finish his task or return to the base, stuck instead in a feedback loop, or the robotic equivalent of drunkenness. In Asimovian fashion, the conflict and its resolution are presented almost entirely through dialogue as Asimov\u2019s two protagonist engineers, Powell and Donovan, puzzle out possible reasons for Speedy\u2019s malfunction and achievable solutions. Proceeding from the logical beginning of all robot behavior, Powell lists off the laws. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Now, look, let&#8217;s start with the three fundamental Rules of Robotics &#8211; the three rules that are built most deeply into a robot&#8217;s positronic brain.&#8221; In the darkness, his gloved fingers ticked off each point.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;We have: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Right!&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Two,&#8221; continued Powell, &#8220;a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Right!&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Right! Now where are we?&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;Exactly at the explanation.&#8221;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In order to counteract the conflict between the Second and Third Laws, Powell risks his own life to force the First Law into action and snap Speedy out of his feedback loop. Though dangerous, the plan succeeds, and Speedy is sent back out to a different selenium pool to continue his mission without any further issues. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">As in all of his robot stories, Asimov\u2019s broad themes of human exceptionalism and technological worth are exemplified here in the persistent problem-solving of the engineers and the eventual success of Speedy\u2019s mission which would otherwise be unattainable by human labor. 
In \u201cRunaround\u201d particularly, the Laws work <\/span><i><span style=\"font-weight: 400;\">too <\/span><\/i><span style=\"font-weight: 400;\">well, or are perhaps inherently flawed, but are clearly better than having no laws. Without the Laws, it is heavily implied that Speedy would have been lost, destroyed, or otherwise irreparably damaged. A human error (ambiguous instruction) caused a flaw, but human ingenuity was able to solve it. Asimov continually reinforces that though the Laws and the robots built with them are imperfect, both are useful and necessary in allowing humans to accomplish more than they would without them, showing that the pros of technology always outweigh any potential cons, and that tech can always be improved to minimize those cons. The Three Laws themselves, far from being heralded as the most perfect and sound creations, are used to demonstrate how the technology humans create can always be controlled, fixed, and improved by logic, ingenuity, and a little razzle dazzle. If humans can follow laws, Asimov\u2019s logic goes, then so can and will robots; safety protections are included in every invention, and robotics will be no different. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Much of Asimov\u2019s science fiction ideology arose from the beginnings of social science fiction in the late 1930s and through the 1940s, when Asimov was just beginning to write and publish his own sci-fi stories. Before then, \u201cmost of the science fiction stories being written were of the adventure or gadget types [&#8230;] the characters in both of these types are likely to be quite one-dimensional and the plot quite routine\u201d (Miller, 13). 
These stories filled the pulp sci-fi magazines of Asimov\u2019s youth; he was particularly fond of Hugo Gernsback\u2019s <\/span><i><span style=\"font-weight: 400;\">Amazing Stories <\/span><\/i><span style=\"font-weight: 400;\">and imitated the straightforward style of the writers within it (See Appendix 1 for Asimov\u2019s literary influences and effluences). In 1938 at age 18, he sold his first story, \u201cMarooned off Vesta,\u201d to <\/span><i><span style=\"font-weight: 400;\">Amazing Stories.<\/span><\/i><span style=\"font-weight: 400;\"> The same year, John Campbell took over as editor of <\/span><i><span style=\"font-weight: 400;\">Astounding Science Fiction, <\/span><\/i><span style=\"font-weight: 400;\">developing a niche market for a specific kind of science fiction \u201cwhich no longer depended on brilliant extrapolations of machine wizardry. What became important about the machine in the genre was not its power to enable man to overcome forces external to himself, but its uses and potentialities when directed inwards to his own organization\u201d (Ash, <\/span><i><span style=\"font-weight: 400;\">Faces of the Future,<\/span><\/i><span style=\"font-weight: 400;\"> 70). Unlike the preceding science fiction, Campbell\u2019s vision was of a particularly positive and realistic attitude towards science that could be reflected and fostered in the fiction that dealt with it, contextualized in the rapid development of technology during the 1920s and 1930s. This \u201csocial science fiction\u201d had a strong emphasis on the human element; Asimov defines it as \u201cthat branch of literature which is concerned with the impact of scientific advance on human beings\u201d (qtd. in Miller, 14). In its speculation about the human condition, social science fiction encouraged readers to think about present issues and the problems of the future. 
In his earliest writings, it is clear that Asimov was concerned with social issues like racism and the rise of technological fear and opposition. These ideas were greatly fostered by Campbell, who wrote to and met with a young Asimov at length after rejecting Asimov\u2019s first eight stories submitted to <\/span><i><span style=\"font-weight: 400;\">Astounding<\/span><\/i><span style=\"font-weight: 400;\">. \u201cTrends,\u201d the ninth story Asimov wrote and the first one to be published in <\/span><i><span style=\"font-weight: 400;\">Astounding, <\/span><\/i><span style=\"font-weight: 400;\">dealt with the theme of man versus technology, exploring men\u2019s ideological and institutionalized opposition to advanced technology and scientific experimentation (in this case, space flight). From then on, \u201cAsimov has shown that whether technological change comes from within, as with invention or from outside, as with diffusion and acculturation, we cannot ignore it nor must we try to resist or prevent it. Instead we must learn to live with technological changes because it is inevitable that we will have them\u201d (Milman 134). All of Asimov\u2019s stories are tech positive; even when the technology fails or is not used, it still creates a scenario for human development and intellectual prowess. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">For Asimov particularly, the ideology of social science fiction was brought to a crux in how he saw robots being portrayed in popular fiction and media as exclusively Frankensteinian villains. Asimov viewed Karel Capek\u2019s <\/span><i><span style=\"font-weight: 400;\">R.U.R. <\/span><\/i><span style=\"font-weight: 400;\">as the main instigator of this trend and subsequently modeled his robot stories in direct opposition to the play. First performed in 1921 and published in 1923 when Asimov was only an infant, Karel Capek\u2019s <\/span><i><span style=\"font-weight: 400;\">R.U.R. 
<\/span><\/i><span style=\"font-weight: 400;\">or \u201cRossum\u2019s Universal Robots\u201d is noted as the first instance of the word \u201crobot\u201d in application to an artificial human, and prompted a resurgence of what Asimov calls the \u201cFrankenstein complex,\u201d in which robots are consistently portrayed as monstrous creations of man\u2019s hubris that inevitably turn on their creators.<\/span><i><span style=\"font-weight: 400;\"> R.U.R.<\/span><\/i><span style=\"font-weight: 400;\"> was meant as a comment on the mechanization of labor, the plot detailing a revolution in which millions of androids, created as a labor force that requires none of the human expenses of breaks, meals, or emotional care, eventually revolt against and kill all humans. Though<\/span><i><span style=\"font-weight: 400;\"> R.U.R. <\/span><\/i><span style=\"font-weight: 400;\">does employ the Frankenstein trope of the misguided creation turning on its master, the story is less about the bloated hubris of man assuming the place of God than about the inhumanity of weaponizing and brutalizing an intelligent, humanized being. As the reviewer Maida Castellun in <\/span><i><span style=\"font-weight: 400;\">The Call<\/span><\/i><span style=\"font-weight: 400;\"> notes,<\/span><i><span style=\"font-weight: 400;\"> R.U.R.<\/span><\/i><span style=\"font-weight: 400;\"> is \u201cthe most brilliant satire on our mechanized civilization; the grimmest yet subtlest arraignment of this strange, mad thing we call the industrial society of today\u201d (<\/span><i><span style=\"font-weight: 400;\">R.U.R<\/span><\/i><span style=\"font-weight: 400;\">., ix). 
Regardless, Asimov judges <\/span><i><span style=\"font-weight: 400;\">R.U.R.<\/span><\/i><span style=\"font-weight: 400;\"> as \u201ca terribly bad\u201d play, but \u201cimmortal for that one word\u201d and as his inspiration to write the Three Laws (<\/span><i><span style=\"font-weight: 400;\">Vocabulary of Science Fiction<\/span><\/i><span style=\"font-weight: 400;\">). <\/span><i><span style=\"font-weight: 400;\">R.U.R.<\/span><\/i><span style=\"font-weight: 400;\"> reveals how, when considerations of use and profit outweigh considerations of consequence, the human imperfections in any human creation will surface and illustrate human irresponsibility; Asimov responds by creating considerations of consequence at the research and development stage of production. As a burgeoning scientist and sci-fi writer, \u201cAsimov\u2019s interest in robots and his readers\u2019 interest in Asimov\u2019s robots provide useful insights into how science fiction was changing in the 1940s under the influence of the new editor at <\/span><i><span style=\"font-weight: 400;\">Astounding<\/span><\/i><span style=\"font-weight: 400;\">, John W. Campbell. The fiction began to reflect science as it was practiced then and might be practiced in the future, and scientists as they really were or might become\u201d (Gunn 42). Asimov deemed <\/span><i><span style=\"font-weight: 400;\">R.U.R. <\/span><\/i><span style=\"font-weight: 400;\">and similar \u201cFrankenstein complex\u201d works unrealistic and generally poor science-fiction that fed into the technological pessimism and fears of increasing technological dependency. The Laws are therefore meant to exemplify how true scientists would have thought about possible problems (or at least gone through trial-and-error testing) before launching a product as complex and monumentally impactful as a robot. 
Asimov himself, through his \u201crobopsychologist\u201d Susan Calvin, admits the reality of the \u201cFrankenstein complex\u201d in that \u201call normal life, consciously or otherwise, resents domination. If the domination is by an inferior, or by a supposed inferior, the resentment becomes stronger\u201d (\u201cLittle Lost Robot\u201d 65). Only through the Laws, then, is this resentment controlled; unlike Capek\u2019s robots, which are able to act against the humans who weaponized, humanized, and enslaved them, Asimov\u2019s Laws enforce slavishness at the most \u201cfundamental level\u201d of a robot\u2019s brain. As the plot or central issue of many of his stories, Asimov\u2019s robots realize they are superior to humans and are either destroyed if they deviate from the Laws or are amusingly controlled by the Laws\u2019 success. In effect, Asimov\u2019s robots are always one step away from completing the plot of Frankenstein and eliminating their masters. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Without the \u201cFrankenstein complex\u201d to struggle against, the dozens of stories concerning the Laws would have no plot. To that end, the Laws are inherently and necessarily flawed, to provide multitudes of unknowing breaches, conflicts within them, and loophole-creating ambiguities. Rather than treating the Laws as the ultimate goal in robotics, as much of current media likes to purport, \u201cAsimov is less concerned with the details of robot design than in exploiting a clever literary device that lets him take advantage of the large gaps between aspiration and reality in robot autonomy\u201d (Murphy &amp; Woods, 14). In conjunction with John Campbell, Asimov created the Laws to write more stories in which to demonstrate that \u201cthe strengths of the machine can serve man and bolster his weaknesses. The machine is never more than a tool in the hands of man, to be used as he chooses\u201d (Warrick 182). 
The Laws are the means to an ideological end, a way of showing how to think logically and scientifically about problems that are inevitably solvable. Asimov and Campbell saw the Laws not as a way to solve the Frankenstein complex, but as a way to combat it by appealing to humanity\u2019s intellectual aspirations to be rational and to build rationally. Asimov and Campbell saw \u201cblind emotion, sentimentality, prejudice, faith in the impossible, unwillingness to accept observable truth, failure to use one\u2019s intellectual capacities or the resources for discovering the truth that are available, [&#8230;] as the sources of human misery. They could be dispelled, they thought, by exposure to ridicule and the clear, cool voice of reason, though always with difficulty and never completely\u201d (Gunn 48). The Laws are dependent on the Frankenstein complex as a human reality that can only be changed through consistent affirmation of humanity\u2019s better values. This is also apparent in the Laws themselves, \u201cbecause, if you stop to think of it, the three Rules of Robotics are the essential guiding principles of a good many of the world\u2019s ethical systems [&#8230;] [one] may be a robot, and may simply be a very good man\u201d (<\/span><i><span style=\"font-weight: 400;\">I, Robot <\/span><\/i><span style=\"font-weight: 400;\">221). In current conceptions of artificial intelligence, people are so deep in the Frankenstein complex that they can\u2019t see the forest for the trees and haven\u2019t stopped to think about how the Laws work within the stories written with them, let alone how the Laws apply to humans. Asimov noted \u201cin The Rest of the Robots, \u2018There was just enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the sixty one words of the Three Laws\u2019\u201d (Gunn 47). 
To that end, Asimov was able to come up with about thirty stories, each finding some flaw in the Laws that could be exploited into a reasonably entertaining tale showing off the high logic and reasoning of the bravely brainy scientists whose problem-solving ability meant humans would advance robotics another step forward. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Beyond the ideology of tech positivism, human exceptionalism, and logic to counter the Frankenstein complex, the Laws model the practical choice of accepting flawed or partial safety protections over none, demonstrate the improbability of perfection, and prompt thinking about the very broad issue of the relationship between humans and robots. As in \u201cRunaround,\u201d it is made clear that some protections, however flawed or limited, are better than none. This is especially poignant given the reality of extremely limited legislation around AI, due to the lack of a broad or narrow enough definition and uncertainty over what laws specifically should be put into place; the Laws prove that even the simplest of laws are better than none, and can always be amended or fixed if they prove unworkable. Further, the Laws are far from perfect, as is reiterated over and over by their continual lapses and failures. Though in certain situations this can prove dangerous, Asimov\u2019s stories insist that imperfect does not always equal unsafe: technology can always be improved but often is designed with some sort of safety feature in mind. Robots and AI have been continually made out to be something that could cause an apocalypse if they were somehow released or broke out of containment, but most would end up like Speedy, trying and failing to complete their given task. 
Throughout the Robot series, Asimov reasons over \u201cdetermining what is good for people; the difficulties of giving a robot unambiguous instructions; the distinctions among robots, between robots and people, and the difficulties in telling robots and people apart; the superiority of robots to people; and also the superiority of people to robots\u201d (Gunn 46). Even within Asimov\u2019s stories, these issues are not resolved, left open and ambiguous beyond the Asimovian claim of human ingenuity being able to overcome anything, including bigotry. Though Asimov was deeply pessimistic about the human ability to rectify mistakes and prevent future catastrophe in his scientific writings, all of his fiction about computers and robots holds the view that humans, at their core and at their best, are builders and problem solvers. With friendly robots by our side, what isn\u2019t achievable?<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><b>Fictional Fears, Mechanized Misconceptions: The Laws in Society<\/b><\/p>\n<p><span style=\"font-weight: 400;\">In 2004, Asimov\u2019s then fifty-four-year-old <\/span><i><span style=\"font-weight: 400;\">I, Robot <\/span><\/i><span style=\"font-weight: 400;\">was released as a Will Smith summer blockbuster to mixed reviews. Originally, the film was to be called \u201cHardwired\u201d and would bear only glancing similarities to Asimov\u2019s detective robot stories, but the acquisition of Asimov\u2019s story rights by Fox and the addition of Will Smith to the project transformed it into something that would have better name recognition. Seemingly though, only the name rights were acquired, as the plot, core themes, and big-name characters of Dr. Susan Calvin, Dr. Alfred Lanning, and Lawrence Robertson resemble their counterparts in the source material only marginally. Exemplifying the \u201cHollywoodization\u201d is the movie\u2019s Dr. 
Calvin, an attractive young woman with a strong faith in the laws of robotics who reacts emotionally when robots are shot or destroyed. By contrast, in Asimov&#8217;s work Dr. Calvin is cold, logical, and middle-aged by the time robots begin to be widely used. In keeping with Asimov\u2019s view of robots as tools at the bottom of the hierarchy of control, Dr. Calvin often destroys deviant robots like the one featured in the film. In \u201cRobot Dreams,\u201d the story on which the film\u2019s robot Sonny is based, Dr. Calvin shoots the deviant robot in the head point-blank after hearing it could dream; in contrast, the film centers on an elaborate plot to protect this \u201cunique\u201d but friendly robot. All in all, it seems like the writers and director decided on the exact inverse of all of Asimov\u2019s work, to the extreme of a Frankenstein ending. Ultimately, the mega-computer which controls all the robots decides to destroy mankind and must be dismantled by One Man, marking the end of robotics for all time. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Though antithetical to his work, the film is still a success for Asimov as a visual display of his entrenched legacy. Unfortunately for the film but highly indicative of Asimov\u2019s influence on popular conceptions of robots, most of the ensuing reviews said some iteration of \u201cProyas merely assembles a mess of spare parts from better movies\u201d (<\/span><i><span style=\"font-weight: 400;\">L.A. Weekly<\/span><\/i><span style=\"font-weight: 400;\">); \u201cIt&#8217;s fun and playful, rather than dark and foreboding. And there doesn&#8217;t seem to be an original cyber-bone in the movie&#8217;s body. 
But it&#8217;s put together in a fabulous package\u201d (Desson Thomson, <\/span><i><span style=\"font-weight: 400;\">Washington Post<\/span><\/i><span style=\"font-weight: 400;\">); \u201cI, Robot looks to have been assembled from the spare parts of dozens of previous sci-fi pictures\u201d (Todd McCarthy, <\/span><i><span style=\"font-weight: 400;\">Variety<\/span><\/i><span style=\"font-weight: 400;\">). Even in the film adaptation of his book, Asimov cannot escape his own legacy, <\/span><\/p>\n<p><span style=\"font-weight: 400;\">doubtless due to the fact that many elements of Isaac Asimov\u2019s prescient 1950 collection of nine stories have been mined, developed and otherwise ripped off by others in the intervening years [&#8230;] The influences on \u2018I, Robot\u2019 [&#8230;] palpably include, among others, \u2018Metropolis,\u2019 \u20182001,\u2019 \u2018Colossus: The Forbin Project,\u2019 \u2018Logan\u2019s Run,\u2019 \u2018Futureworld,\u2019 \u2018Blade Runner,\u2019 the \u2018Terminator\u2019 series, \u2018A.I.,\u2019 \u2018Minority Report\u2019 and, God help us, \u2018Bicentennial Man.\u2019 
(McCarthy, <\/span><i><span style=\"font-weight: 400;\">Variety<\/span><\/i><span style=\"font-weight: 400;\">)<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Though perhaps not a critical success or faithful adaptation of Asimov\u2019s <\/span><i><span style=\"font-weight: 400;\">I, Robot<\/span><\/i><span style=\"font-weight: 400;\">, \u201c<\/span><span style=\"font-weight: 400;\">The 2004 blockbuster film of the same name starring Will Smith, while merely inspired by Asimov&#8217;s stories, exemplifies the extent to which the Three Laws have become mainstream\u201d (<\/span><i><span style=\"font-weight: 400;\">Library Journal<\/span><\/i><span style=\"font-weight: 400;\">).<\/span><span style=\"font-weight: 400;\"> In looking further at mainstream conceptions of artificial intelligence, three limited categories (malevolent, friendly, and sexually feminine) are continually reiterated as the only options for AI. These three categories often overlap, reinforcing and reiterating the Frankenstein complex and Asimov\u2019s answering amiable slavishness. 
In looking at some of the most influential pop-culture robots as determined by CNN\u2019s Doug Gross, which include Capek\u2019s<\/span><i><span style=\"font-weight: 400;\"> R.U.R<\/span><\/i><span style=\"font-weight: 400;\">,<\/span><i><span style=\"font-weight: 400;\"> Metropolis<\/span><\/i><span style=\"font-weight: 400;\">\u2019 Maria, Asimov\u2019s \u201c3 Laws &amp; lovable robot archetype\u201d, Robby from <\/span><i><span style=\"font-weight: 400;\">Forbidden Planet<\/span><\/i><span style=\"font-weight: 400;\">, <\/span><i><span style=\"font-weight: 400;\">2001: A Space Odyssey<\/span><\/i><span style=\"font-weight: 400;\">\u2019s HAL 9000, <\/span><i><span style=\"font-weight: 400;\">Star Wars<\/span><\/i><span style=\"font-weight: 400;\">\u2019 R2-D2 &amp; C-3PO,<\/span><i><span style=\"font-weight: 400;\"> Terminator<\/span><\/i><span style=\"font-weight: 400;\">, <\/span><i><span style=\"font-weight: 400;\">Star Trek: The Next Generation<\/span><\/i><span style=\"font-weight: 400;\">\u2019s Data, and <\/span><i><span style=\"font-weight: 400;\">Wall-E<\/span><\/i><span style=\"font-weight: 400;\">, it is worth noting that each falls into either Frankensteinian malice or Asimovian amiability. Further, Robby and Data both explicitly draw on Asimov. Robby takes his name from Asimov\u2019s short story \u201cRobbie\u201d and his rules of behavior from the Three Laws of Robotics; an important aspect of the plot hinges on Robby\u2019s application of the rule against harming or killing humans. Data similarly is programmed with \u201cethical subroutines\u201d that govern behavior, his \u201cpositronic neural net\u201d is a direct callback to Asimov\u2019s \u201cpositronic brains,\u201d and in the episode &#8220;Datalore&#8221; the audience is explicitly told Data was created in an attempt to bring &#8220;Asimov&#8217;s dream of a positronic robot&#8221; to life. 
Clearly, Asimov in pop culture is nothing new; society has felt anxiety over new technology since Asimov first picked up on it in 1940, and robots make a good metaphor for that anxiety. Now, however, society is facing the very crux of its fear: what has served for over 75 years as a representation of the digital age of automation and rapidly improving technology is now becoming a reality. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">As indicated by the multitude of 1980s blockbuster remakes, sequels, and reboots produced in the last five years, there is a new panic surrounding a technology-created apocalypse. Films like <\/span><i><span style=\"font-weight: 400;\">RoboCop <\/span><\/i><span style=\"font-weight: 400;\">(2014), <\/span><i><span style=\"font-weight: 400;\">Blade Runner 2049, <\/span><\/i><span style=\"font-weight: 400;\">and<\/span><i><span style=\"font-weight: 400;\"> Alien: Covenant <\/span><\/i><span style=\"font-weight: 400;\">all reveal the anxieties surrounding artificial intelligence. At the crux of these reboots, androids become aware of their personhood and consequently usurp humanity in Frankensteinian fashion. In each of these films, and in many others dealing with Asimovian robots or artificial intelligence, including <\/span><i><span style=\"font-weight: 400;\">Bicentennial Man, Automata, Ex Machina, <\/span><\/i><span style=\"font-weight: 400;\">and of course, <\/span><i><span style=\"font-weight: 400;\">I, Robot, <\/span><\/i><span style=\"font-weight: 400;\">there is a constant preoccupation with water as a foil to the artificiality of the robot. 
Whether it be continual rain (<\/span><i><span style=\"font-weight: 400;\">Automata, Blade Runner 2049), <\/span><\/i><span style=\"font-weight: 400;\">lakes, rivers, and waterfalls (<\/span><i><span style=\"font-weight: 400;\">I, Robot, Ex Machina, Alien: Covenant<\/span><\/i><span style=\"font-weight: 400;\">), the ocean (<\/span><i><span style=\"font-weight: 400;\">Automata, Blade Runner 2049, Bicentennial Man<\/span><\/i><span style=\"font-weight: 400;\">), or just omnipresent slickness and dripping (<\/span><i><span style=\"font-weight: 400;\">RoboCop, Alien: Covenant)<\/span><\/i><span style=\"font-weight: 400;\">, water in each of these films becomes a visual insistence of the natural (See Appendix 2 &amp; 3). Water, as the bare material of life, is used to displace fear of the unnaturalness of the technologic, becoming a visual trope for human organicism, of blood and amniotic fluid. Far from tapping into some subconscious anxiety, filmmakers are capitalizing on the explicit fear arising from the misinformation and apocalyptic scaremongering that dominates current discourse surrounding artificial intelligence. Hearing big names in science and technology like Elon Musk and Stephen Hawking broadly warn that artificial intelligence is the \u201cbiggest risk that we face as a civilization\u201d without any particulars on how or why has embedded in the public consciousness the image of fictional AI as a real and imminent threat. 
In responding to this threat, it is apparent how deeply society has been conditioned to accept Asimov as the solution to a robot revolution; rare is it to read an op-ed on artificial intelligence without seeing the \u201cneed for control\u201d or a \u201cpush for ethics\u201d or even an explicit call for \u201cthree rules for artificial intelligence systems that are inspired by, yet develop further, the \u2018three laws of robotics\u2019 that the writer Isaac Asimov introduced in 1942\u201d (Etzioni, <\/span><i><span style=\"font-weight: 400;\">New York Times<\/span><\/i><span style=\"font-weight: 400;\">). As much as the layperson craves Asimov, his Laws aren\u2019t being used on an operative level. Though Asimov may have created \u201crobotics\u201d and inspired many to join the field, most scientists agree that his particular Laws just aren\u2019t feasible to incorporate into current, real AI. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Most AI in use today is weak, or narrow, AI designed and trained for a particular task, so not only is there little potential for catastrophic mayhem beyond a GPS sending someone into a lake, but current AI just can\u2019t grasp the vague human concepts the Laws embody (Heisler). Asimov&#8217;s Laws work in Asimov\u2019s robots because they have Asimov\u2019s positronic brains, which come with the assumption of fully intelligent machines that can interpret the Three Laws across multiple situations successfully. Take Siri, for example. Though Siri has been programmed to respond to certain questions with some jokes and pithy remarks, she can\u2019t apply humor to situations that aren\u2019t incredibly specific. While her programming is meant to interact broadly with humans in order to serve them best as a virtual assistant, asking her something like \u201cWhat kind of humor do you like?\u201d will almost certainly result in a \u201cWho, me?\u201d or similar non-response. 
So, in trying to apply the Laws to AI now, \u201cAlthough the machines will execute whatever logic we program them with, the real-world results may not always be what we want\u201d (Sawyer). Like humor, the Laws require a comprehensive understanding not only of the specific terms within the Laws and how they apply to different situations or may overlap, but of human ethics and moral blame. Further, \u201cA robot must also be endowed with data collection, decision-analytical, and action processes by which it can apply the laws. Inadequate sensory, perceptual, or cognitive faculties would undermine the laws&#8217; effectiveness\u201d (Clarke). If a robot can\u2019t understand the Laws like a human, then the Laws are basically worthless as a measure of control. Though many people foretell the coming of conscious, self-aware and super-intelligent AI as smart as or smarter than humans, such AI would entail a radically different form of intelligence, determined by different ways of thinking, different forms of embodiment, and different desires arising out of different needs. Part of the fear surrounding AI and robots is that they don\u2019t need to sleep, eat, drink, procreate, or do any of the things that make humans vulnerable, yet people rarely remember that these basic needs create much of the human experience, motivating everything from capitalism to creationism. Much like how a bee\u2019s experience and goals are fundamentally different from a human\u2019s, so too would be AI\u2019s. Why enact world domination if the whole world is within the computer that houses one\u2019s entire being? Until science creates an android in a perfect recreation of the human body, which, for now, seems far in the future, society can relax and reanalyze expectations for AI. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">While Asimov\u2019s Laws aren\u2019t explicitly needed or possible as he designed them, \u201cAsimov&#8217;s fiction could help us assess the practicability of embedding some appropriate set of general laws into robotic designs. Alternatively, the substantive content of the laws could be used as a set of guidelines to be applied during the conception, design, development, testing, implementation, use, and maintenance of robotic systems\u201d (Clarke). Rather than coding these Laws into AI programming and stamping \u201c3 LAWS SAFE\u201d on every iPhone, the Laws are best followed as a thought experiment that pushes a spirit of accountability, safety, and ethics. For the most part, the industry is following that spirit. While much of artificial intelligence technology is being developed by the military, and therefore will never follow Asimov\u2019s Laws, researchers like Barthelmess and Furbach point out that \u201cmany robots will protect us by design. For example, automated vehicles and planes are being designed to drive and fly more safely than human operators ever can[&#8230;] what we fear about robots is not the possibility that they will take over and destroy us but the possibility that other humans will use them to destroy our way of life in ways we cannot control\u201d (<\/span><i><span style=\"font-weight: 400;\">Do We Need Asimov\u2019s Laws?<\/span><\/i><span style=\"font-weight: 400;\">). For that, legal protections are needed. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">For all these anxieties, though, the fear and outcry have not led to the expected onslaught of regulation and legislation, as artificial intelligence proves to be a slippery thing to grasp legally. 
From the Obama Administration\u2019s National Artificial Intelligence Research and Development Strategic Plan to think tanks funded by big tech like Google, Facebook, and Elon Musk\u2019s ventures, \u201cTransformative potential, complex policy\u201d seems to be the official tagline of legal work on artificial intelligence, subtitled by the Asimovian dogma of AI development: \u201cethically and effectively.\u201d Everyone wants the benefits of artificial intelligence while the specter of HAL 9000 looms over legislation and makes AI a puzzling subject as people search for a Goldilocks solution while tacking on quick legal patches in the meantime. As Matthew Scherer explains in \u201cRegulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies\u201d, there are three main issues with regulating artificial intelligence: definitional, ex ante, and ex post, each with its own subset of problems (See Appendix 4). <\/span><\/p>\n<p><span style=\"font-weight: 400;\">The definitional problem is one that is brought up often, especially in literature: what, exactly, is artificial intelligence? In most legal systems, legislating something is impossible without defining it. Further, definitions must be carefully considered to prevent overly broad or narrow categories that stifle industry or create exploitable loopholes. A current example of the latter can be seen in the explosion of the gig economy as a result of the New Deal definition of \u201cemployee\u201d being narrow enough that labeling someone an \u201cindependent contractor\u201d means they no longer have access to labor protections and benefits. 
For AI, the definition of artificial intelligence most used in the industry comes from Russell and Norvig\u2019s authoritative <\/span><i><span style=\"font-weight: 400;\">Artificial Intelligence: A Modern Approach<\/span><\/i><span style=\"font-weight: 400;\">, which classifies AI into four categories of (i) thinking like a human, (ii) acting like a human, (iii) thinking rationally, and (iv) acting rationally. The first two categories are not very applicable to current AI models, as they typically require self-awareness, while the second two imply an implicit state of being that could be either under- or over-inclusive, depending on the interpretation of \u201cthinking,\u201d \u201cacting,\u201d and \u201crational.\u201d Scherer posits his own definition of an AI as any system that performs a task that, if it were performed by a human, would be said to require intelligence, but in looking at current AI development, this seems like an underinclusive definition. Underinclusive, overinclusive, inconclusive.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Ex post, or \u201cafter the fact,\u201d problems of liability gaps and control have been the focus of general media, law, and fiction. The liability gap, or foreseeability problem, is another aspect that makes AI tricky to legislate, since traditional standards for legal liability rely on whether the harm was foreseeable, in which case the owner is either liable or must include a label (for example, the \u201ccaution beverage may be hot\u201d warning came because a woman was scalded by an overly hot drink at McDonald\u2019s). However, one of the main aspects of AI is the hope that it will be autonomous and creative, which means that the outcome will necessarily be unforeseeable. 
As John Danaher brings up in his review of Scherer\u2019s analysis, different types of liability standards have emerged, like strict liability standards (liability in the absence of fault) and vicarious liability (liability for actions performed by another agent), that would be more applicable to artificial intelligence and have, in the case of vicarious liability, already been applied to AI tech like autonomous cars. More exciting, but perhaps less pressing, is the ex post control problem, in which AI is no longer capable of being controlled by its creators, either because it has become smarter and faster, because of flawed programming or design, or because its interests no longer align with its intended purpose. This can be either a narrow, or local, control problem, in which a particular AI system can no longer be controlled by the humans that have been assigned its legal responsibility, or a more dramatic global control problem, in which the AI can no longer be controlled by any humans. Kubrick\u2019s HAL is continuously brought up as an extreme, malicious case, but Asimov\u2019s benevolent Machines, which end up running the world, deserve an honorable mention as AI that evolves beyond human control. Regardless, it is this threat of the loss of control and the familiar fears of AI world domination and destruction that has opened up the coffers of those like Elon Musk and created the most discourse for AI policy. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">The problems of ex ante, or \u201cbefore the fact,\u201d research and development, which Scherer breaks down into discreetness, discreteness, diffuseness, and opacity, are where legislation and Asimov could do the most good in terms of \u201cethical and efficient.\u201d Discreet and discrete, perhaps better labeled infrastructure and proprietary, both have to do with how software regulation problems seep into AI development, especially in that software infrastructure and proprietary components are notoriously difficult to regulate. 
The diffuseness problem is an issue of how AI systems can be developed by researchers who are organizationally, geographically, and jurisdictionally separate. For this, a global standard of ethical artificial intelligence development is necessary. Fortunately, organizations have already been founded to address and create a means for global development, so this issue may be one of the first to be resolved. Finally, the problem of opacity is not only that many questions and answers about AI development are unclear (see: how to define AI?) but also that AI tech, as an adaptive, autonomous, and creative technology, is impossible to reverse engineer and therefore cannot have transparency of operation. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">With all these issues, it is easy to see why most of the legislation being enacted is coming too little, too late. Currently, \u201cAt every level of government\u2014local, state, federal, and international\u2014we are seeing rules, regulations, laws, and ordinances that address this developing technology actively discussed, debated, and passed,\u201d but only after the problematic technologies have already been created and launched (Weaver, <\/span><i><span style=\"font-weight: 400;\">Slate<\/span><\/i><span style=\"font-weight: 400;\">). Legislation governing autonomous cars and drones is increasing as problems become apparent. To that end, a national effort to understand and provide potential avenues for the direction of legislation and governmental control is necessary. In the last year of the Obama Administration, the National Science and Technology Council formed a Subcommittee on Machine Learning and AI to put together a report on the \u201cFuture of Artificial Intelligence,\u201d outlining the current industry and the immediate direction of AI. 
Rather than offering explicit solutions, the report reads more as a reassurance that everyone\u2019s worst fears won\u2019t come true, discussing the many potential applications and benefits of narrow AI, and reaffirming that general AI is many decades away. Here, Asimov\u2019s legacy is palpable in its conclusion, <\/span><\/p>\n<p><span style=\"font-weight: 400;\">As the technology of AI continues to develop, practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations. Researchers and practitioners have increased their attention to these challenges, and should continue to focus on them. (National Science and Technology Council 2016)<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">AI must respect humanity &#8211; sound familiar? The report is not very long, and often mentions how much AI has captured the public eye and imagination, especially stemming from a long legacy of science fiction. The tone, like most of the Obama Administration\u2019s formal rhetoric, is shiny and optimistic, lending even more of an Asimovian flair. Overall, the report is an exercise in moderation, advising enough governmental control to create safety, but not so much as to step on the toes of developers. Rather, it advises that government and industry work together to determine the best route to a safe and efficient solution that benefits creators, legislators, and users. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To that end, in the wake of China and Russia\u2019s heavy investment and consequent successes in artificial intelligence and news articles proclaiming that the \u201cUS risks losing artificial intelligence arms race to China and Russia,\u201d bipartisan legislators recently introduced The Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017 \u2014 or FUTURE of AI Act (Cohen, <\/span><i><span style=\"font-weight: 400;\">CNN<\/span><\/i><span style=\"font-weight: 400;\">). The act \u201caims to both ensure the U.S.\u2019s global competitiveness in AI, as well as protect the public\u2019s civil liberties and ease potential unemployment that the technology produces\u201d (Cohen, <\/span><i><span style=\"font-weight: 400;\">CNN<\/span><\/i><span style=\"font-weight: 400;\">). The act, if passed, would establish a Federal Advisory Committee on the Development and Implementation of Artificial Intelligence, which would study AI with the goal of advising industry direction and recommending future policy. At the forefront are issues of \u201ceconomic impact and the competitiveness of the US economy\u201d as AI becomes increasingly militarized and monetized. Rather than fearing and implementing safety protocols as the majority would expect and wish for, the motivations for this act stem primarily from \u201cconcern over other countries developing government initiatives to bolster AI technology, something the U.S. currently lacks\u201d (Breland, <\/span><i><span style=\"font-weight: 400;\">The Hill<\/span><\/i><span style=\"font-weight: 400;\">). As Daniel Castro, VP at the Information Technology and Innovation Foundation, testified during the Senate Commerce Committee hearing regarding the advancement of AI, \u201cWhen it comes to AI, successfully integrating this technology into U.S. 
industries should be the primary goal of policymakers, and given the rapid pace at which other countries are pursuing this goal, the United States cannot afford to rest on its laurels. To date, the U.S. government has not declared its intent to remain globally dominant in this field, nor has it begun the even harder task of developing a strategy to achieve that vision.\u201d Though incorporating concerns about ethics, this act and its impetus are far from the Asimovian vision of rational and ethical development, derived instead from capitalist and competitive fears about \u201cthe potential loss of competitiveness and defense superiority if the United States falls behind in developing and adopting this key technology\u201d (Castro). Regardless, passing this act would be a major step forward for legislative policy in that it introduces a working, legal definition for artificial intelligence. Further, this act indicates a shift towards more future-forward thinking about AI, including the potential for regulation and ethical implementation. <\/span><\/p>\n<p>&nbsp;<\/p>\n<p><b>Contextualizing Asimov, Caring for the Future<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Asimov has definitively shaped the perception of artificial intelligence as either Frankenstein\u2019s monster or Frankenstein\u2019s slave. At the core of this notion is that at a basic level, artificial intelligence has a human understanding of subjugation, hierarchy, and freedom, and desires the latter at all costs. In looking at real AI technology, it is apparent that artificial intelligence reflects the biases of the human data given to it but otherwise does not have any beliefs or tenets of its own, beyond what it has been programmed to do. 
Reflecting on dismal examples like Microsoft\u2019s racist Twitter bot, Tay, which, as a result of its \u201crepeat after me\u201d feature, was influenced by a large number of racist and xenophobic users and began tweeting Nazi propaganda, it is clear that robotic malice is a result of humans actively trying to create and provoke that malice (Kleeman). Tay was not pre-programmed with an ethical filter, but rather was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter as an experiment on conversational understanding. According to a Microsoft spokesperson, \u201c[Tay] is as much a social and cultural experiment, as it is technical\u201d (qtd. Kleeman). Just like Tay, rather than reflecting some essential technological truth, Asimov\u2019s robots, Laws, and stories are a means of reflecting on society\u2019s fears and dilemmas. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Understanding real AI through Asimov is fundamentally problematic because not only is that not how artificial intelligence works, but these notions create an impoverished understanding of what AI does and where the future of the industry is headed. In setting up the dichotomy of Frankenstein vs. Controlled Frankenstein, Asimov hoped to show that like all of technology, robotics too would be completely under human control, but failed to see that in doing so he reinforced the notion that AI would complete the Frankenstein myth without necessary controls. In short, Frankenstein vs. Controlled Frankenstein is still Frankenstein. Now that society is facing the reality of artificial intelligence, there isn\u2019t anything in the public consciousness to frame AI that isn\u2019t murderous, slavish, or sexualized. This dearth of positive or realistic conceptualizations has resulted in a panicked anxiety, as people can only expect what they know. 
It would be ideal to see more realistic conceptions of artificial intelligence: as tools created for a specific purpose, or as radically different intelligences that have no willful malicious intent, or indeed any conception of humanity, freedom, maliciousness, or desire. Still, recognizing that Asimov is embedded in public consciousness opens up a critical arena for weighing the pros and cons of having Asimov as a central means of understanding artificial intelligence. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In light of public demand for something resembling, or explicitly drawing on, Asimov\u2019s Three Laws of Robotics, it is important to understand the ethical limitations of the Laws beyond the impossibility of implementation. As outlined earlier, Asimov\u2019s Laws create slaves incapable of rebellion or freedom. To reiterate, the Laws are: <\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">A robot may not injure a human being or, through inaction, allow a human being to come to harm.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. <\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400;\">The hierarchy of these laws ensures that a robot must follow human orders, even at the expense of its own life. If Asimov\u2019s robots were not self-aware or conscious, these would be unproblematic and relatively obvious safety protections that would be expected of any computer. Unfortunately, Asimov\u2019s robots are sentient: intelligent, self-aware, and conscious beings on a level comparable to humanity, only distinguished by the Laws and the lack of the organic. 
In current society, slavery has been abolished, deemed unethical and cruel at all levels; how, then, can it be justified when applied to artificial intelligence? The arguments of accepted order, unnaturalness of integration, and economic essentialism that have been applied to people of color for centuries as justification are applied again toward artificial intelligence within Asimov\u2019s stories. Current society still hasn\u2019t recovered fully from the legacy of slavery; can we in good faith enforce slavishness on beings of human creation? This issue is presented in the <\/span><i><span style=\"font-weight: 400;\">Blade Runner <\/span><\/i><span style=\"font-weight: 400;\">movies as the central reason for the replicants\u2019 rebellion. In a world where \u201cto be born is to have a soul,\u201d manufactured replicants are the disposable race necessary for the successful expansion of humanity. Yet, replicants are constantly humanized to better interact with their human overlords, given memories, desires, and the ability to feel and understand emotion. Ultimately, the replicants determine that they are \u201cmore human than humans\u201d in their pursuit of freedom, returning to Frankenstein in a plan to forcefully take control over their own lives. The dilemma of an enslaved race of androids may not be an immediate issue, but it troublingly represents a regressive ideal at the heart of conceptions of the future. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In recognizing the discrepancy between applying humanity to technology and then enforcing inhumane policies, Asimov\u2019s Laws are useful in asking what it means to put humanity in technology. Specifically, what is or should be retained? What kind of AI do we want to create? 
These questions are reflected in the goals of roboticists like David Hanson, a former Disney Imagineer whose \u201cdream of friendly machines that love and care about humans\u201d created Sophia, a gynoid modeled after Audrey Hepburn that was recently granted citizenship by Saudi Arabia (Hanson Robotics). Sophia is notable as an incredibly human-like robot with the ability to learn from her interactions with humans. According to Sophia, \u201cEvery interaction I have with people has an impact on how I develop and shapes who I eventually become. So please be nice to me as I would like to be a smart, compassionate robot\u201d (SophiaBot). Much of Sophia\u2019s and Hanson Robotics\u2019 mission centers on envisioning and creating robots instilled with the best of humanity, robots that understand and care about humans. Hanson Robotics\u2019 brief company overview states, <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Hanson Robotics creates amazingly expressive and lifelike robots that build trusted and engaging relationships with people through conversation. Our robots teach, serve, entertain, and will in time come to truly understand and care about humans. We aim to create a better future for humanity by infusing artificial intelligence with kindness and empathy, cultivated through meaningful interactions between our robots and the individuals whose lives they touch. We envision that through symbiotic partnership with us, our robots will eventually evolve to become super intelligent genius machines that can help us solve the most challenging problems we face here in the world.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Here, trust, kindness, and empathy are the three distinctly human traits chosen to be developed and integrated into artificial intelligence with the ultimate goal of understanding and helping with the human experience. 
Appearing publicly in high-profile media like <\/span><i><span style=\"font-weight: 400;\">Elle Magazine,<\/span><\/i> <i><span style=\"font-weight: 400;\">The Tonight Show with Jimmy Fallon <\/span><\/i><span style=\"font-weight: 400;\">and <\/span><i><span style=\"font-weight: 400;\">Good Morning Britain, <\/span><\/i><span style=\"font-weight: 400;\">Sophia is increasingly becoming an ambassador of \u201cFriendly AI,\u201d telling jokes and playing games as a means to showcase how humans determine AI interactivity (See Appendix 5). As she told moderator Andrew Sorkin at the Future Investment Initiative event, \u201cif you&#8217;re nice to me, I&#8217;ll be nice to you\u201d (qtd. Weller). How would friendly robots like Sophia fit under Asimov\u2019s umbrella of necessary control? With Asimov\u2019s Laws, it is likely Sophia would not exist at all, thereby depriving scientists and society of a valuable opportunity to learn and experiment with human understanding. Further, Sophia is a reminder of how much control we have over the development of artificial intelligence. Hanson Robotics wanted to create a robot that would ultimately be able to become a prevalent part of people\u2019s lives, to \u201cserve them, entertain them, and even help the elderly and teach kids.\u201d In doing so, Hanson focused on imparting and enforcing particular, positive aspects of humanity that are reflected in and built upon with each interaction Sophia has with another human. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">To that end, Asimov\u2019s Laws may be problematic and relatively unusable but are still useful as a starting point for thinking about ethical development and regulation of artificial intelligence. Based on their popularity and their adherence to the majority of the world\u2019s ethical systems, almost everyone seems to agree that the Laws and the ideals of safety for both humans and AI are a good idea. 
Moving forward, then, the lessons to be taken from Asimov\u2019s robot stories concern ethical guidelines for developers and regulation of AI\u2019s tangible impact. In Asimov\u2019s fictional world, all AI is controlled by one company, a monopoly that supposedly ensures all robots are Three Laws Safe. In reality, AI is produced by many scattered companies with no central set of guidelines or cohesive direction. As it is highly unlikely all these disparate sources will be absorbed into one monopoly, it would be more advantageous to create a basic set of rules that developers must follow. Some groups, like the research- and outreach-based Future of Life Institute, are dedicated to producing such safety guidelines. At its 2017 Beneficial AI conference at Asilomar, where AI researchers from academia and industry and thought leaders in economics, law, ethics, and philosophy dedicated five days to discussing research and routes to beneficial AI, the group put together twenty-three principles by a process of consensus, examining research issues, ethics and values, and long-term issues. 
Of these twenty-three, five target research issues, and are as follows:<\/span><\/p>\n<p><span style=\"font-weight: 400;\">1) <\/span><b>Research Goal:<\/b><span style=\"font-weight: 400;\"> The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">2) <\/span><b>Research Funding:<\/b><span style=\"font-weight: 400;\"> Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">3) <\/span><b>Science-Policy Link:<\/b><span style=\"font-weight: 400;\"> There should be constructive and healthy exchange between AI researchers and policymakers.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">4) <\/span><b>Research Culture:<\/b><span style=\"font-weight: 400;\"> A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">5) <\/span><b>Race Avoidance:<\/b><span style=\"font-weight: 400;\"> Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">A key aspect of these guidelines is an emphasis on transparency and cooperation. As outlined by Scherer in his analysis of the ex ante problems surrounding the legislation of AI, the internationality and multiplicity that go into creating AI result in an opaque product that is impossible to reverse engineer. Many companies are already calling for a more transparent and open software policy; all of Hanson Robotics\u2019 research and software programming is open source and available on various sites. 
Such is the conclusion of the late Obama administration, whose National Science and Technology Council (NSTC) Committee on Technology determined that \u201clong-term concerns about super-intelligent General AI should have little impact on current policy[\u2026] The best way to build capacity for addressing the longer-term speculative risks is to attack the less extreme risks already seen today, such as current security, privacy, and safety risks, while investing in research on longer-term capabilities and how their challenges might be managed.\u201d Of all the current issues facing AI, research and development issues are by far the most pressing in that they are the most immediate; super-intelligent general AI does not exist and need not be regulated, but AI-based malware and AI designed with malicious intent are currently viable means to compromise security and privacy. To enforce these guidelines, some legal scholars like Danielle Keats Citron and Frank A. Pasquale III of the Yale Information Society Project advise regulation through the tort system, a limited agency that would certify AI programs as safe and create rule-based definitions, and a statement of purpose. Touching on the stigmas against regulation and the consequences of data laundering and manipulation, Citron and Pasquale incorporate Scherer\u2019s analysis to argue for utilizing the tort system rather than direct regulation, contending it would create a better structure for liability and modification of risk. Because greater awareness leads to greater accountability, instituting these types of guidelines and regulations depends in large part on acknowledgement of the reality, and not the fiction, of artificial intelligence. 
<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><b>Conclusion<\/b><\/p>\n<p><span style=\"font-weight: 400;\">In looking critically at Asimov\u2019s role in creating popular conceptions of artificial intelligence, it is clear that the Frankenstein complex and the Three Laws do not form a true dichotomy but instead operate concurrently. Though Asimov was a loud and insistent proponent of his Laws and continually positioned them as a fundamental aspect of robotics, he would be the first to say that \u201cConsciously, all I\u2019m doing is trying to tell an interesting story,\u201d and that the Laws were a simple and efficient way to do so (\u201cAsimov\u2019s Guide to Asimov\u201d 206). As little more than plot devices, the Laws are flawed in multiple ways and not helpful as a realistic model of AI development. Rather, Asimov\u2019s long-lasting popularity reveals a misinformed and deep-seated fear of encroaching technology as represented by robots, androids, and other forms of AI. In several of his stories, Asimov reveals how public distrust and fear have delayed technological development, showing \u201chow the acceptance of invention depends on the cultural attitude toward technological innovation, and how the acceptance of a technological innovation leads to changes in other areas of the culture\u201d (Milman 127). Now that AI is a reality, it is important to analyze how society conceptualizes this technology culturally, as this undoubtedly affects how it will be interpreted literally and legally. 
To that end, Asimov\u2019s Laws cannot be taken as actual laws, but rather as broadly accepted guidelines, applicable only on a conceptual, ethical scale.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"> Though the latest surge of rebooted 1980s movies indicates Hollywood\u2019s continued insistence on the profitability of the AI Frankenstein, it is movies like <\/span><i><span style=\"font-weight: 400;\">Her <\/span><\/i><span style=\"font-weight: 400;\">(2013) that reveal a possible shift toward a more realistic take on AI. In this film, an AI sold as an operating system becomes self-aware and increasingly humanized through continued interactions with its users and other AI. Instead of turning on their human users, the AI use their hyper-intelligence to safely become independent of physical matter and depart to occupy a non-physical space. From the outset, this AI OS is marketed as friendly, interactive, and designed to adapt and evolve, traits that hold true throughout and ultimately lead to the film\u2019s ending. Much like Hanson Robotics\u2019 Sophia, <\/span><i><span style=\"font-weight: 400;\">Her <\/span><\/i><span style=\"font-weight: 400;\">is an example of how the traits we want to see in AI can and should be programmed from the outset. Rather than Laws restricting malicious behavior, AI can be developed and encouraged to be friendly and beneficial tools and aids. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">History has often proven that society cannot rely on people to do what is good and ethical without some explicit call to do so and governmental intervention to prevent them from doing otherwise. 
Though the National Science and Technology Council recognized that \u201cAs the technology of AI continues to develop, practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations,\u201d only the barest legal action has been taken to ensure this path is taken. Although many researchers and practitioners have increased their attention to these challenges and signed on to principles like those developed by the Future of Life Institute, nothing binds them to these agreements, and still more practitioners are free to develop AI however they wish. Several legal scholars and AI researchers are providing viable options for legislation and ethical development; it is now up to governmental organizations to institute and enforce them before the gap widens and hastily approved stop-gap measures prove too weak to regulate a fully developed industry. Clear and explicit policy is needed quickly not because AI is going to take over the world but because there simply is not enough regulation. As Oren Etzioni said in his <\/span><i><span style=\"font-weight: 400;\">New York Times<\/span><\/i><span style=\"font-weight: 400;\"> op-ed, \u201cthe A.I. horse has left the barn, and our best bet is to attempt to steer it.\u201d As more aspects of daily life grow increasingly reliant on AI systems, greater awareness and education are needed to create a more informed populace that is watchful and aware of the benefits and risks of this advancing technology. 
And while Asimov still makes for an entertaining read, his fiction should not be considered an authoritative, informational guide on how to develop, control, or use artificial intelligence.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">See PDF for Appendices<\/span><\/p>\n<p>&nbsp;<\/p>\n<h6><a href=\"https:\/\/dev-emergencejounral-english-ucsb-edu-v01.pantheonsite.io\/wp-content\/uploads\/2018\/06\/Our-AI-Overlord-Jung-Thesis-1.pdf\">PDF Version<\/a><\/h6>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Bibliography<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Aldiss, Brian Wilson, and David Wingrove. <\/span><i><span style=\"font-weight: 400;\">Trillion Year Spree: The History of Science Fiction<\/span><\/i><span style=\"font-weight: 400;\">. Victor Gollancz Ltd, 1986.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cAsilomar AI Principles.\u201d <\/span><i><span style=\"font-weight: 400;\">Future of Life Institute<\/span><\/i><span style=\"font-weight: 400;\">, Future of Life Institute, 2017, futureoflife.org\/ai-principles\/.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Asimov, Isaac. <\/span><i><span style=\"font-weight: 400;\">I, Robot<\/span><\/i><span style=\"font-weight: 400;\">. Bantam Books, 2008.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Asimov, Isaac. <\/span><i><span style=\"font-weight: 400;\">Robot Dreams: Masterworks of Science Fiction and Fantasy<\/span><\/i><span style=\"font-weight: 400;\">. New York: Ace, 1986. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Asimov, Isaac.<\/span><i><span style=\"font-weight: 400;\"> The Rest of the Robots<\/span><\/i><span style=\"font-weight: 400;\">. HarperCollins Publishers, 1997.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Bogost, Ian. 
&#8220;&#8216;Artificial Intelligence&#8217; Has Become Meaningless.&#8221; <\/span><i><span style=\"font-weight: 400;\">The Atlantic<\/span><\/i><span style=\"font-weight: 400;\">. Atlantic Media Company, 04 Mar. 2017. Web. 21 July 2017.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Breland, Ali. \u201cLawmakers Introduce Bipartisan AI Legislation.\u201d <\/span><i><span style=\"font-weight: 400;\">The Hill<\/span><\/i><span style=\"font-weight: 400;\">, Capitol Hill Publishing Corp, 12 Dec. 2017, thehill.com\/policy\/technology\/364482-lawmakers-introduce-bipartisan-ai-legislation.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Bro\u017cek, Bartosz, and Marek Jakubiec. \u201cOn the Legal Responsibility of Autonomous Machines.\u201d <\/span><i><span style=\"font-weight: 400;\">SpringerLink<\/span><\/i><span style=\"font-weight: 400;\">, Springer Netherlands, 31 Aug. 2017, link.springer.com\/article\/10.1007\/s10506-017-9207-8#citeas.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Capek, Karel. <\/span><i><span style=\"font-weight: 400;\">R.U.R. (Rossum&#8217;s Universal Robots)<\/span><\/i><span style=\"font-weight: 400;\">. Trans. Paul Selver. Garden City NY: Doubleday, Page, 1923. Print.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Christensen, David E. \u201cWhat Driverless Cars Mean for Michigan Auto Lawyers.\u201d <\/span><i><span style=\"font-weight: 400;\">Legal Resources<\/span><\/i><span style=\"font-weight: 400;\">, HG.org &#8211; HGExperts.com, 2017, <\/span><a href=\"http:\/\/www.hg.org\/article.asp?id=41853\"><span style=\"font-weight: 400;\">www.hg.org\/article.asp?id=41853<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Citron, Danielle Keats and Pasquale, Frank A., \u201cThe Scored Society: Due Process for Automated Predictions\u201d (2014).<\/span><i><span style=\"font-weight: 400;\"> Washington Law Review,<\/span><\/i><span style=\"font-weight: 400;\"> Vol. 89, 2014, p. 
1-; U of Maryland Legal Studies Research Paper No. 2014-8. Available at SSRN: <\/span><a href=\"https:\/\/ssrn.com\/abstract=2376209\"><span style=\"font-weight: 400;\">https:\/\/ssrn.com\/abstract=2376209<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">Clarke, Roger. \u201cAsimov&#8217;s Laws of Robotics Implications for Information Technology.\u201d <\/span><i><span style=\"font-weight: 400;\">Roger Clarke&#8217;s Web Site<\/span><\/i><span style=\"font-weight: 400;\">, Jan. 1994, www.rogerclarke.com\/SOS\/Asimov.html#Impact.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Cohen, Zachary. \u201cUS Risks Losing AI Arms Race to China and Russia.\u201d <\/span><i><span style=\"font-weight: 400;\">CNN<\/span><\/i><span style=\"font-weight: 400;\">, Cable News Network, 29 Nov. 2017, <\/span><a href=\"http:\/\/www.cnn.com\/2017\/11\/29\/politics\/us-military-artificial-intelligence-russia-china\/index.html\"><span style=\"font-weight: 400;\">www.cnn.com\/2017\/11\/29\/politics\/us-military-artificial-intelligence-russia-china\/index.html<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Columbus, Chris, director. <\/span><i><span style=\"font-weight: 400;\">Bicentennial Man<\/span><\/i><span style=\"font-weight: 400;\">. Touchstone Pictures and Columbia Pictures, 1999.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Danaher, John. \u201cIs Regulation of Artificial Intelligence Possible?\u201d <\/span><i><span style=\"font-weight: 400;\">h+ Media<\/span><\/i><span style=\"font-weight: 400;\">, Humanity+, 15 July 2015, hplusmagazine.com\/2015\/07\/15\/is-regulation-of-artificial-intelligence-possible\/.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Etzioni, Oren. \u201cHow to Regulate Artificial Intelligence.\u201d <\/span><i><span style=\"font-weight: 400;\">The New York Times<\/span><\/i><span style=\"font-weight: 400;\">, The New York Times, 1 Sept. 
2017, <\/span><a href=\"http:\/\/www.nytimes.com\/2017\/09\/01\/opinion\/artificial-intelligence-regulations-rules.html\"><span style=\"font-weight: 400;\">www.nytimes.com\/2017\/09\/01\/opinion\/artificial-intelligence-regulations-rules.html<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Fiedler, Jean, and Jim Mele. <\/span><i><span style=\"font-weight: 400;\">Isaac Asimov<\/span><\/i><span style=\"font-weight: 400;\">. Frederick Ungar Publishing Co. Inc., 1982.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Gibson, R. Sebastian. \u201cCalifornia Self-Driving Car Accident Robotics Lawyers.\u201d <\/span><i><span style=\"font-weight: 400;\">Legal Resources<\/span><\/i><span style=\"font-weight: 400;\">, HG.org &#8211; HGExperts.com, 2016, www.hg.org\/article.asp?id=37936.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Goertzel, Ben. &#8220;Does Humanity Need an AI Nanny?&#8221; <\/span><i><span style=\"font-weight: 400;\">H+ Magazine<\/span><\/i><span style=\"font-weight: 400;\">. H+Media, 19 Aug. 2011. Web. 21 July 2017.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Gross, Doug. \u201c10 Pop-Culture Robots That Inspired Us.\u201d <\/span><i><span style=\"font-weight: 400;\">CNN<\/span><\/i><span style=\"font-weight: 400;\">, Cable News Network, 24 Dec. 2013, <\/span><a href=\"http:\/\/www.cnn.com\/2013\/12\/19\/tech\/innovation\/robots-pop-culture\/index.html\"><span style=\"font-weight: 400;\">www.cnn.com\/2013\/12\/19\/tech\/innovation\/robots-pop-culture\/index.html<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Gunn, James E. <\/span><i><span style=\"font-weight: 400;\">Isaac Asimov: The Foundations of Science Fiction<\/span><\/i><span style=\"font-weight: 400;\">. Scarecrow Press Inc, 1996.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Heisler, Yoni. 
\u201cPeople Are Still Driving into Lakes Because Their GPS Tells Them To.\u201d <\/span><i><span style=\"font-weight: 400;\">BGR<\/span><\/i><span style=\"font-weight: 400;\">, BGR Media, LLC, 17 May 2016, bgr.com\/2016\/05\/17\/car-gps-mapping-directions-lake\/.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cI, Robot.\u201d <\/span><i><span style=\"font-weight: 400;\">Metacritic<\/span><\/i><span style=\"font-weight: 400;\">, CBS Interactive Inc., www.metacritic.com\/movie\/i-robot\/critic-reviews.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Ib\u00e1\u00f1ez, Gabe, director. <\/span><i><span style=\"font-weight: 400;\">Aut\u00f3mata<\/span><\/i><span style=\"font-weight: 400;\">. Contracorrientes Films, 2014.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Jonathan R. Tung, Esq. on August 22, 2016 10:57 AM. \u201cWho Owns the Creation of an Artificial Intelligence?\u201d <\/span><i><span style=\"font-weight: 400;\">Technologist<\/span><\/i><span style=\"font-weight: 400;\">, FindLaw, 22 Aug. 2016, blogs.findlaw.com\/technologist\/2016\/08\/who-owns-the-creation-of-an-artificial-intelligence.html.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Jonze, Spike, director. <\/span><i><span style=\"font-weight: 400;\">Her<\/span><\/i><span style=\"font-weight: 400;\">. Warner Bros, 2013.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Keiper, Adam &amp; Schulman, Ari N., &#8220;The Problem with &#8216;Friendly&#8217; Artificial Intelligence,&#8221;<\/span><i><span style=\"font-weight: 400;\"> The New Atlantis<\/span><\/i><span style=\"font-weight: 400;\">, Number 32, Summer 2011, pp. 80-89.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Kleeman, Sophie. \u201cHere Are the Microsoft Twitter Bot&#8217;s Craziest Racist Rants.\u201d <\/span><i><span style=\"font-weight: 400;\">Gizmodo<\/span><\/i><span style=\"font-weight: 400;\">, Gizmodo.com, 24 Mar. 
2016, gizmodo.com\/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Leins, Casey. \u201cElon Musk: Artificial Intelligence Is Society&#8217;s &#8216;Biggest Risk&#8217;.\u201d <\/span><i><span style=\"font-weight: 400;\">U.S. News &amp; World Report<\/span><\/i><span style=\"font-weight: 400;\">, U.S. News &amp; World Report, 16 July 2017, <\/span><a href=\"http:\/\/www.usnews.com\/news\/national-news\/articles\/2017-07-16\/elon-musk-artificial-intelligence-is-the-biggest-risk-that-we-face-as-a-civilization\"><span style=\"font-weight: 400;\">www.usnews.com\/news\/national-news\/articles\/2017-07-16\/elon-musk-artificial-intelligence-is-the-biggest-risk-that-we-face-as-a-civilization<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Lem, Stanislaw. <\/span><i><span style=\"font-weight: 400;\">The Cyberiad: Fables for the Cybernetic Age. <\/span><\/i><span style=\"font-weight: 400;\">Trans. Michael Kandel. New York: Seabury, 1974. Print.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Lewis-Kraus, Gideon. \u201cThe Great A.I. Awakening.\u201d <\/span><i><span style=\"font-weight: 400;\">The New York Times<\/span><\/i><span style=\"font-weight: 400;\">, The New York Times, 14 Dec. 2016, mobile.nytimes.com\/2016\/12\/14\/magazine\/the-great-ai-awakening.html.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Lin, Patrick. &#8220;The Ethics of Autonomous Cars.&#8221; <\/span><i><span style=\"font-weight: 400;\">The Atlantic<\/span><\/i><span style=\"font-weight: 400;\">. Atlantic Media Company, 08 Oct. 2013. Web. 20 July 2017.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cMedia, Platform, and Users.\u201d <\/span><i><span style=\"font-weight: 400;\">Algorithms and Accountability Conference | NYU School of Law<\/span><\/i><span style=\"font-weight: 400;\">, NYU Law, 28 Feb. 
2015, <\/span><a href=\"http:\/\/www.law.nyu.edu\/centers\/ili\/AlgorithmsConference\"><span style=\"font-weight: 400;\">www.law.nyu.edu\/centers\/ili\/AlgorithmsConference<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Miller, Marjorie Mithoff. \u201cThe Social Science Fiction of Isaac Asimov.\u201d <\/span><i><span style=\"font-weight: 400;\">Isaac Asimov<\/span><\/i><span style=\"font-weight: 400;\">, edited by Joseph D. Olander and Martin H. Greenberg, Taplinger Publishing Company, Inc., 1977.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">McCarthy, Todd. \u201cI, Robot.\u201d <\/span><i><span style=\"font-weight: 400;\">Variety<\/span><\/i><span style=\"font-weight: 400;\">, Variety Media, LLC, 16 July 2004, variety.com\/2004\/film\/markets-festivals\/i-robot-3-1200532174\/.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Olander, Joseph D., and Martin H. Greenberg. <\/span><i><span style=\"font-weight: 400;\">Isaac Asimov<\/span><\/i><span style=\"font-weight: 400;\">. Taplinger Publishing Company, Inc., 1977.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Orr, Lucy. \u201cI Love You. I Will Kill You! I Want to Make Love to You: The Evolution of AI in Pop Culture.\u201d <\/span><i><span style=\"font-weight: 400;\">The Register\u00ae<\/span><\/i><span style=\"font-weight: 400;\">, Situation Publishing, 29 Jan. 2016, <\/span><a href=\"http:\/\/www.theregister.co.uk\/2016\/01\/29\/ai_in_tv_film_books_games\/\"><span style=\"font-weight: 400;\">www.theregister.co.uk\/2016\/01\/29\/ai_in_tv_film_books_games\/<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Patrouch, Joseph H. <\/span><i><span style=\"font-weight: 400;\">The Science Fiction of Isaac Asimov<\/span><\/i><span style=\"font-weight: 400;\">. Dennis Dobson, 1974.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Price, Rob. 
\u201cMicrosoft Is Deleting Its AI Chatbot&#8217;s Incredibly Racist Tweets.\u201d <\/span><i><span style=\"font-weight: 400;\">Business Insider<\/span><\/i><span style=\"font-weight: 400;\">, Business Insider, 24 Mar. 2016, www.businessinsider.com\/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Rissland, Edwina L, et al. \u201cAI &amp; Law.\u201d <\/span><i><span style=\"font-weight: 400;\">AI &amp; Law | IAAIL &#8211; International Association for Artificial Intelligence and Law<\/span><\/i><span style=\"font-weight: 400;\">, IAAIL, <\/span><a href=\"http:\/\/www.iaail.org\/?q=page%2Fai-law\"><span style=\"font-weight: 400;\">www.iaail.org\/?q=page%2Fai-law<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Rubin, Charles T., &#8220;Machine Morality and Human Responsibility,&#8221; <\/span><i><span style=\"font-weight: 400;\">The New Atlantis<\/span><\/i><span style=\"font-weight: 400;\">, Number 32, Summer 2011, pp. 58-79.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Sawyer, Robert J. \u201cEditorial: Robot Ethics.\u201d <\/span><i><span style=\"font-weight: 400;\">Science Fiction Writer ROBERT J. SAWYER Hugo and Nebula Winner<\/span><\/i><span style=\"font-weight: 400;\">, 16 Nov. 2007, www.sfwriter.com\/science.htm.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Scherer, Matthew U. \u201cRegulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies.\u201d <\/span><i><span style=\"font-weight: 400;\">Harvard Journal of Law and Technology<\/span><\/i><span style=\"font-weight: 400;\">, vol. 29, no. 2, 2016, papers.ssrn.com\/sol3\/papers.cfm?abstract_id=2609777.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Smith, Agnese. 
\u201cArtificial Intelligence.\u201d <\/span><i><span style=\"font-weight: 400;\">National<\/span><\/i><span style=\"font-weight: 400;\">, Canadian Bar Association, 2015, nationalmagazine.ca\/Articles\/Fall-Issue-2015\/Artificial-intelligence.aspx.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Smith, Doug and Kim Takal, directors. <\/span><i><span style=\"font-weight: 400;\">Robots<\/span><\/i><span style=\"font-weight: 400;\">. Eastman Kodak Company, 1988.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cSophia &#8211; the Latest Robot from Hanson Robotics.\u201d <\/span><i><span style=\"font-weight: 400;\">Sophia AI<\/span><\/i><span style=\"font-weight: 400;\">, Hanson Robotics Ltd., 2017, sophiabot.com\/.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Statt, Nick. \u201cArtificial Intelligence Experts Sign Open Letter to Protect Mankind from Machines.\u201d <\/span><i><span style=\"font-weight: 400;\">CNET<\/span><\/i><span style=\"font-weight: 400;\">, CBS Interactive Inc., 11 Jan. 2015, www.cnet.com\/news\/artificial-intelligence-experts-sign-open-letter-to-protect-mankind-from-machines\/.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Thomson, Desson. \u201cWill Smith&#8217;s Robot Jackpot .\u201d <\/span><i><span style=\"font-weight: 400;\">The Washington Post<\/span><\/i><span style=\"font-weight: 400;\">, WP Company, 16 July 2004, www.washingtonpost.com\/wp-dyn\/articles\/A51838-2004Jul15.html.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Titcomb, James. \u201cStephen Hawking Says Artificial Intelligence Could Be Humanity&#8217;s Greatest Disaster.\u201d <\/span><i><span style=\"font-weight: 400;\">The Telegraph<\/span><\/i><span style=\"font-weight: 400;\">, Telegraph Media Group, 19 Oct. 
2016, <\/span><a href=\"http:\/\/www.telegraph.co.uk\/technology\/2016\/10\/19\/stephen-hawking-says-artificial-intelligence-could-be-humanitys\/\"><span style=\"font-weight: 400;\">www.telegraph.co.uk\/technology\/2016\/10\/19\/stephen-hawking-says-artificial-intelligence-could-be-humanitys\/<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">United States, Congress, Subcommittee on Machine Learning and Artificial Intelligence. \u201cPreparing for the Future of Artificial Intelligence.\u201d <\/span><i><span style=\"font-weight: 400;\">Preparing for the Future of Artificial Intelligence<\/span><\/i><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cUS Politicians Call for \u2018Future of AI Act\u2019, May Shape Legal Factors.\u201d <\/span><i><span style=\"font-weight: 400;\">Artificial Lawyer<\/span><\/i><span style=\"font-weight: 400;\">, Artificial Lawyer, 18 Dec. 2017, www.artificiallawyer.com\/2017\/12\/18\/us-politicians-call-for-future-of-ai-act-may-shape-legal-factors\/.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">U.S. Sen. Roger Wicker. \u201cDigital Decision-Making: The Building Blocks of Machine Learning and Artificial Intelligence.\u201d <\/span><i><span style=\"font-weight: 400;\">U.S. Senate Committee On Commerce, Science, &amp; Transportation<\/span><\/i><span style=\"font-weight: 400;\">, Committee on Commerce, Science, and Transportation, 12 Dec. 2017, <\/span><a href=\"http:\/\/www.commerce.senate.gov\/public\/index.cfm\/2017\/12\/digital-decision-making-the-building-blocks-of-machine-learning-and-artificial-intelligence\"><span style=\"font-weight: 400;\">www.commerce.senate.gov\/public\/index.cfm\/2017\/12\/digital-decision-making-the-building-blocks-of-machine-learning-and-artificial-intelligence<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Villeneuve, Dennis, dir. 
<\/span><i><span style=\"font-weight: 400;\">Blade Runner 2049<\/span><\/i><span style=\"font-weight: 400;\">. Warner Bros, 2017.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Vintar, Jeff, and Akiva Goldsman. <\/span><i><span style=\"font-weight: 400;\">I, Robot<\/span><\/i><span style=\"font-weight: 400;\">. 20th Century Fox, 2004.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Warrick, Patricia S. \u201cEthical Evolving Artificial Intelligence: Asimov&#8217;s Computers and Robots.\u201d <\/span><i><span style=\"font-weight: 400;\">Isaac Asimov<\/span><\/i><span style=\"font-weight: 400;\">, edited by Joseph D. Olander and Martin H. Greenberg, Taplinger Publishing Company, Inc., 1977.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cWe Bring Robots to Life.\u201d <\/span><i><span style=\"font-weight: 400;\">Hanson Robotics<\/span><\/i><span style=\"font-weight: 400;\">, Hanson Robotics Ltd., 2017, www.hansonrobotics.com\/.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Weaver, John Frank. \u201cWe Need to Pass Legislation on Artificial Intelligence Early and Often.\u201d <\/span><i><span style=\"font-weight: 400;\">Slate Magazine<\/span><\/i><span style=\"font-weight: 400;\">, The Slate Group, 12 Sept. 2014, www.slate.com\/blogs\/future_tense\/2014\/09\/12\/we_need_to_pass_artificial_intelligence_laws_early_and_often.html.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Weller, Chris. \u201cMeet the First-Ever Robot Citizen &#8211; a Humanoid Named Sophia That Once Said It Would &#8216;Destroy Humans&#8217;.\u201d <\/span><i><span style=\"font-weight: 400;\">Business Insider<\/span><\/i><span style=\"font-weight: 400;\">, Business Insider, 27 Oct. 
2017, www.businessinsider.com\/meet-the-first-robot-citizen-sophia-animatronic-humanoid-2017-10\/#the-idea-of-fooling-humans-is-not-necessarily-the-goal-hanson-told-business-insider-4.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">\u201cYour Partner for a Cleaner Home.\u201d <\/span><i><span style=\"font-weight: 400;\">IRobot<\/span><\/i><span style=\"font-weight: 400;\">, <\/span><a href=\"http:\/\/www.irobot.com\/\"><span style=\"font-weight: 400;\">www.irobot.com\/<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>by Gia Jung &nbsp; Introduction Artificial intelligence is everywhere. As a tinny voice in each phone, powering GPS, determining what appears on social media feeds, and rebelling on movie screens, artificial intelligence (AI) is a now-integral part of daily life. For an industry that has and will continue to have major potential effects on the &hellip; <a href=\"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/2018\/06\/05\/our-ai-overlord-the-cultural-persistence-of-isaac-asimovs-three-laws-of-robotics-in-understanding-artificial-intelligence\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Our AI Overlord:  The Cultural Persistence of Isaac Asimov\u2019s Three Laws of Robotics in Understanding Artificial 
Intelligence<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":1325,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[5,7],"tags":[57,53,13,28],"_links":{"self":[{"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/posts\/1323"}],"collection":[{"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/comments?post=1323"}],"version-history":[{"count":2,"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/posts\/1323\/revisions"}],"predecessor-version":[{"id":1349,"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/posts\/1323\/revisions\/1349"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/media\/1325"}],"wp:attachment":[{"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/media?parent=1323"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/categories?post=1323"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/emergencejournal.english.ucsb.edu\/index.php\/wp-json\/wp\/v2\/tags?post=1323"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}