Our AI Overlord: The Cultural Persistence of Isaac Asimov’s Three Laws of Robotics in Understanding Artificial Intelligence

by Gia Jung

 

Introduction

Artificial intelligence is everywhere. As a tinny voice in each phone, powering GPS, determining what appears on social media feeds, and rebelling on movie screens, artificial intelligence (AI) is now an integral part of daily life. Yet for an industry that has had, and will continue to have, major effects on the economy through job loss and creation, huge investments, and the transformation of productivity, there remains a cultural lack of understanding about the realities of AI. Scanning the news, it is clear that people are afraid and uncertain about this robotic revolution, continually invoking an oncoming technological singularity in which AI will reach hyper-intelligence, create more and more AI, and eventually take over the world. Paired with this is the expectation that AI will be human only to a malicious extent, and must therefore be controlled and restricted. One conversation with Siri, though, makes clear that this apocalypse is at best fictional and at worst far off. Created and evidenced by an impoverished representation of robots and other easily digestible notions of AI in popular fiction, there is a dearth of public understanding about the possibilities and realities of artificial intelligence. Examined closely, most popular conceptions of AI can be traced back to either Mary Shelley’s Frankenstein or Isaac Asimov’s I, Robot.

Historically, Asimov is undeniably important to the establishment of both the scientific and fictional realms of artificial intelligence. In May 1941, the word “robotics” was first used in print by Asimov in his short story “Liar!,” published in Astounding Science Fiction (OED). Upon realizing he had coined a new and lasting word, Asimov recognized the uniquely profitable position he had created for himself, and, along with his successful predictions of space travel, self-driving cars, and war-computers, among others, went on to position himself as a sort of friendly-but-rough-around-the-edges technological herald: someone entertaining, trustworthy, and often right. Throughout the enormous bulk of his work (novels, short stories, a self-titled magazine, autobiographies, self-curated anthologies, essays, etc.), Asimov repeatedly brings up how he invented the term “robotics,” that the first real roboticist was inspired by him and the Three Laws of Robotics (a set of rules governing robot behavior), and that his contributions to the field of robotics are unparalleled, reinforcing the real-life credibility of his work and, of course, driving up book sales. Before he died, Asimov worked hard to cement his legacy as one of the greatest and certainly most celebrated minds in science fiction, with the Three Laws of Robotics as his most successful invention.

These Three Laws of Robotics were created in response to what Asimov termed the “Frankenstein complex,” in which all stories about robots or artificial intelligence followed the basic format of Shelley’s Frankenstein. Tired of seeing story after story in which robots are created only to “turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust,” Asimov’s Three Laws ensured human control through programmed safety protocols (The Rest of the Robots). First appearing explicitly in the 1942 story “Runaround” and serving as the basis for twenty-nine further stories, the Laws are as follows: “1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” Creating a slavish hierarchy, the Three Laws “protect” humanity by fettering the Frankensteinian creation’s malicious intent to overthrow its makers. Asimov’s intent was to allay fears of encroaching technology by showing how the rational logic of hard science would be able to overcome any problem it created; technology is built as a tool, and will be wielded and maintained as such. Since then, Asimov’s Laws and the consequent understanding of a Controlled Frankenstein have dominated popular understanding of robots and artificial intelligence, as seen in the multitudes of movies that explicitly or unconsciously represent these ideas. Of friendly AI, Asimov’s favorites were Star Wars’s C-3PO and R2-D2, but his legacy can also be seen in Star Trek: The Next Generation’s android Data and in RoboCop’s directives, among countless others. In addition, several representations of AI depict safety protocols that were somehow circumvented, misinterpreted, or overcome, the failure of Asimov’s Laws proving just as impactful as their success, as with 2001: A Space Odyssey’s HAL and the film version of Asimov’s I, Robot. Now that robots and artificial intelligence are part of daily reality, Asimov’s impact on public perception of AI is becoming increasingly apparent in everything from rebooted 1980s tech blockbusters to explicit calls to institute Asimov’s Laws in the development of AI.

Far from the “positronic brains” that allowed Asimov to easily present immediately sentient and vastly intelligent robots, current AI is far narrower and more difficult to define. On the research and development side of AI, Russell and Norvig’s authoritative Artificial Intelligence: A Modern Approach classifies AI into four categories of “(i) thinking like a human, (ii) acting like a human, (iii) thinking rationally, and (iv) acting rationally”. In trying to conceive of an applicable legal definition, scholar Matthew Scherer labels AI as any system that performs a task that, if it were performed by a human, would be said to require intelligence. Defined by the Oxford English Dictionary, artificial intelligence is “the capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this.” Beyond the inability to legislate something without defining it, the lack of a concrete definition for AI indicates the broad uncertainty and misinformation that dominates the landscape of artificial intelligence.

With such anxiety-inducing ambivalence, it is fairly understandable that even now, seventy-five years after the introduction of the Laws, people are calling upon Asimov as the original solution to malevolent artificial intelligence. What many fail to realize in doing so, however, is that Asimov’s Laws not only work solely within the confines of a fictional technological brain, but are at their core deeply flawed, ambiguous notions that reveal more about society than they offer answers to the problems of artificial intelligence. Critically examining Asimov’s Three Laws of Robotics and their place in the daily reality of artificial intelligence allows for a better understanding of why there is such fear surrounding AI and how cultural understandings of AI as framed by Asimov can shape the future of AI for the better. Rather than as strict rules, Asimov’s Laws can serve as a basis for thinking about and developing broad guidelines for AI research, development, and legislation.

 

Asimov and His Laws: Context, Creation, and Fictional Application

Asimov’s Three Laws of Robotics were first explicitly introduced in his 1942 short story “Runaround,” in which Robot SPD-13, aka “Speedy,” is given a weakly phrased order to collect selenium on Mercury, where it encounters a substance harmful to its workings. Caught between following human orders and protecting its own existence, Speedy is unable to finish his task or return to the base, stuck instead in a feedback loop, the robotic equivalent of drunkenness. In Asimovian fashion, both the conflict and the resolution are reached almost entirely through dialogue as Asimov’s two protagonist engineers, Powell and Donovan, puzzle out possible reasons for Speedy’s malfunction and achievable solutions. Proceeding from the logical beginning of all robot behavior, Powell lists off the laws.

“Now, look, let’s start with the three fundamental Rules of Robotics – the three rules that are built most deeply into a robot’s positronic brain.” In the darkness, his gloved fingers ticked off each point.

“We have: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.”

“Right!”

“Two,” continued Powell, “a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”

“Right!”

“And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

“Right! Now where are we?”

“Exactly at the explanation.”

In order to counteract the conflict between the Second and Third Laws, Powell risks his own life to force the First Law into action and snap Speedy out of his feedback loop. Though dangerous, the plan succeeds, and Speedy is sent back out to a different selenium pool to continue his mission without any further issues.
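The mechanics of Speedy’s predicament can be made concrete. Below is a minimal sketch, assuming a crude one-dimensional model that is mine rather than Asimov’s: the weakened order (Second Law) exerts a constant pull toward the selenium pool, while the danger zone (Third Law) exerts a repulsion that grows as Speedy approaches. The two drives cancel at an equilibrium distance, and Speedy settles there, circling, never completing the task. All names and weights are illustrative.

```python
# Illustrative sketch of the "Runaround" conflict (all numbers invented):
# a weakly given order pulls Speedy toward the pool, self-preservation
# pushes back as the danger zone nears.

def net_drive(distance, order_weight=1.0, danger_weight=4.0, danger_radius=10.0):
    """Positive values drive Speedy toward the pool; negative, away."""
    second_law = order_weight  # constant pull of the (weakly given) order
    third_law = danger_weight * max(0.0, 1 - distance / danger_radius)
    return second_law - third_law

distance = 30.0
for _ in range(200):                       # Speedy advances until the drives cancel
    distance -= 0.5 * net_drive(distance)
print(f"Speedy stalls about {distance:.1f} units from the pool.")  # ~7.5
```

Powell’s solution amounts to changing the weights: by endangering himself, he invokes the First Law, which dominates both terms and breaks the equilibrium.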

As in all of his robot stories, Asimov’s broad themes of human exceptionalism and technological worth are exemplified here in the persistent problem-solving of the engineers and the eventual success of Speedy’s mission, which would otherwise be unattainable by human labor. In “Runaround” particularly, the Laws work almost too well, or are perhaps inherently flawed, but are clearly better than having no laws. Without the Laws, it is heavily implied, Speedy would have been lost, destroyed, or otherwise irreparably damaged. A human error (an ambiguous instruction) caused the problem, but human ingenuity was able to solve it. Asimov continually reinforces that though the Laws and the robots built with them are imperfect, both are useful and necessary in allowing humans to accomplish more than they could without them, showing that the pros of technology always outweigh any potential cons, and that the tech can always be improved to minimize those cons. The Three Laws themselves, far from being heralded as the most perfect and sound of creations, are used to demonstrate how the technology humans create will always be able to be controlled, fixed, and improved by logic, ingenuity, and a little razzle-dazzle. If humans can follow laws, Asimov’s logic goes, then so can and will robots; safety protections are included in every invention, and robotics will be no different.

Much of Asimov’s science fiction ideology arose from the beginnings of social science fiction in the late 1930s and through the 1940s, when Asimov was just beginning to write and publish his own sci-fi stories. Before then, “most of the science fiction stories being written were of the adventure or gadget types […] the characters in both of these types are likely to be quite one-dimensional and the plot quite routine” (Miller, 13). These stories filled the pulp sci-fi magazines of Asimov’s youth; he was particularly fond of Hugo Gernsback’s Amazing Stories and imitated the straightforward style of the writers within it (see Appendix 1 for Asimov’s literary influences and effluences). In 1938, at age 18, he sold his first story, “Marooned off Vesta,” to Amazing Stories. The same year, John Campbell took over as editor of Astounding Science Fiction, developing a niche market for a specific kind of science fiction “which no longer depended on brilliant extrapolations of machine wizardry. What became important about the machine in the genre was not its power to enable man to overcome forces external to himself, but its uses and potentialities when directed inwards to his own organization” (Ash, Faces of the Future, 70). Unlike the science fiction that preceded it, Campbell’s vision was of a particularly positive and realistic attitude towards science that could be reflected and fostered in the fiction that dealt with it, contextualized in the rapid development of technology during the 1920s and 1930s. This “social science fiction” had a strong emphasis on the human element; Asimov defines it as “that branch of literature which is concerned with the impact of scientific advance on human beings” (qtd. in Miller, 14). In its speculation about the human condition, social science fiction encouraged readers to think about present issues and the problems of the future. In his earliest writings, it is clear that Asimov was concerned with social issues like racism and the rise of technological fear and opposition. These ideas were greatly fostered by Campbell, who wrote to and met with a young Asimov at length after rejecting the first eight stories Asimov submitted to Astounding. “Trends,” the ninth story Asimov wrote and the first to be published in Astounding, dealt with the theme of man versus technology, exploring men’s ideological and institutionalized opposition to advanced technology and scientific experimentation (in this case, space flight). From then on, “Asimov has shown that whether technological change comes from within, as with invention or from outside, as with diffusion and acculturation, we cannot ignore it nor must we try to resist or prevent it. Instead we must learn to live with technological changes because it is inevitable that we will have them” (Milman 134). All of Asimov’s stories are tech-positive; even when the technology fails or is not used, it still creates a scenario for human development and intellectual prowess.

For Asimov particularly, the ideology of social science fiction was brought to a crux in how he saw robots being portrayed in popular fiction and media as exclusively Frankenstein-ian villains. Asimov viewed Karel Čapek’s R.U.R. as the main instigator of this trend and subsequently modeled his robot stories in direct opposition to the play. First performed in 1921 and published in English in 1923, when Asimov was a small child, Čapek’s R.U.R., or “Rossum’s Universal Robots,” is noted as the first instance of the word “robot” applied to an artificial human, and it prompted a resurgence of what Asimov calls the “Frankenstein complex,” in which robots are consistently portrayed as monstrous creations of man’s hubris that inevitably turn on their creators. R.U.R. was meant as a comment on the mechanization of labor, its plot detailing a revolution in which millions of androids are created as a labor force that requires none of the human expenses of breaks, meals, or emotional care, and who eventually revolt against and kill all humans. Though R.U.R. does employ the Frankenstein trope of the misguided creation turning on its master, the story is much less about the bloated hubris of man assuming the place of God than about the inhumanity of weaponizing and brutalizing an intelligent, humanized being. As the reviewer Maida Castellum in The Call notes, R.U.R. is “the most brilliant satire on our mechanized civilization; the grimmest yet subtlest arraignment of this strange, mad thing we call the industrial society of today” (R.U.R., ix). Regardless, Asimov judges R.U.R. as “a terribly bad” play, but “immortal for that one word” and as his inspiration to write the Three Laws (Vocabulary of Science Fiction). R.U.R. reveals how, when considerations of use and profit outweigh considerations of consequence, the human imperfections in any human creation will surface and illustrate human irresponsibility; Asimov responds by building considerations of consequence into the research and development stage of production. Asimov was then a burgeoning scientist as well as a sci-fi writer, and “Asimov’s interest in robots and his readers’ interest in Asimov’s robots provide useful insights into how science fiction was changing in the 1940s under the influence of the new editor at Astounding, John W. Campbell. The fiction began to reflect science as it was practiced then and might be practiced in the future, and scientists as they really were or might become” (Gunn 42). Asimov deemed R.U.R. and similar “Frankenstein complex” works unrealistic and generally poor science fiction that fed into technological pessimism and fears of increasing technological dependency. The Laws are therefore meant to exemplify how true scientists would have thought about possible problems (or at least gone through trial-and-error testing) before launching a product as complex and monumentally impactful as a robot. Asimov himself, through his “robopsychologist” Susan Calvin, admits the reality of the “Frankenstein complex” in that “all normal life, consciously or otherwise, resents domination. If the domination is by an inferior, or by a supposed inferior, the resentment becomes stronger” (Little Lost Robot, 65). Only through the Laws, then, is this resentment controlled; contrary to Čapek’s robots, which are able to act against how they have been weaponized, humanized, and kept as slaves, Asimov’s Laws enforce slavishness at the most “fundamental level” of a robot’s brain.
In many of his stories, the plot or central issue turns on Asimov’s robots realizing they are superior to humans; they are either destroyed if they deviate from the Laws or amusingly controlled by the Laws’ success. In effect, Asimov’s robots are always one step away from completing the plot of Frankenstein and eliminating their masters.

Without the “Frankenstein complex” to struggle against, the dozens of stories concerning the Laws would have no plot. To that end, the Laws are inherently and necessarily flawed, providing multitudes of unknowing breaches, conflicts within them, and loophole-creating ambiguities. Rather than treating the Laws as the ultimate goal in robotics, as much of current media likes to purport, “Asimov is less concerned with the details of robot design than in exploiting a clever literary device that lets him take advantage of the large gaps between aspiration and reality in robot autonomy” (Murphy & Woods, 14). In conjunction with John Campbell, Asimov created the Laws to write more stories in which to demonstrate that “the strengths of the machine can serve man and bolster his weaknesses. The machine is never more than a tool in the hands of man, to be used as he chooses” (Warrick 182). The Laws are the means to an ideological end, a way of showing how to think logically and scientifically about problems that are inevitably solvable. Asimov and Campbell saw the Laws not as a way to combat the Frankenstein complex by solving it, but by appealing to humanity’s intellectual aspirations to be rational and to build rationally. Asimov and Campbell saw “blind emotion, sentimentality, prejudice, faith in the impossible, unwillingness to accept observable truth, failure to use one’s intellectual capacities or the resources for discovering the truth that are available, […] as the sources of human misery. They could be dispelled, they thought, by exposure to ridicule and the clear, cool voice of reason, though always with difficulty and never completely” (Gunn 48). The Laws depend on the Frankenstein complex as a human reality that can only be changed through consistent affirmation of humanity’s better values. This is also apparent in the Laws themselves, “because, if you stop to think of it, the three Rules of Robotics are the essential guiding principles of a good many of the world’s ethical systems […] [one] may be a robot, and may simply be a very good man” (I, Robot 221). In current conceptions of artificial intelligence, people are so deep in the Frankenstein complex that they can’t see the forest for the trees, and haven’t stopped to think about how the Laws work within the stories written with them, let alone how the Laws apply to humans. Asimov noted in The Rest of the Robots, “There was just enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the sixty-one words of the Three Laws” (qtd. in Gunn 47). To that end, Asimov was able to come up with about thirty stories, each finding some flaw in the Laws that could be exploited into a reasonably entertaining tale showing off the high logic and reasoning of the bravely brainy scientists whose problem-solving ability meant humans would advance robotics another step forward.

Beyond the ideology of tech positivism, human exceptionalism, and logic to counter the Frankenstein complex, the Laws practically model accepting flawed or partial safety protections over none, prove the improbability of perfection, and prompt thinking about the very broad issues of the relationship between humans and robots. As in “Runaround,” it is made clear that some protections, however flawed or limited, are better than none. This is especially poignant given the reality of extremely limited legislation around AI, due to the lack of a sufficiently broad or narrow definition and uncertainty over which laws specifically should be put into place; the Laws demonstrate that even the simplest of rules are better than none, and can always be amended or fixed if they prove unworkable. Further, the Laws are far from perfect, as is reiterated over and over by their continual lapses and failures. Though in certain situations this can prove dangerous, Asimov’s stories reinforce that imperfect does not always equal unsafe: technology can always be improved, but is often designed with some sort of safety feature in mind. Robots and AI have continually been made out to be something that could cause an apocalypse if they were somehow released or broke out of containment, but most would end up like Speedy, trying and failing to complete their given task. Throughout the Robot series, Asimov reasons over “determining what is good for people; the difficulties of giving a robot unambiguous instructions; the distinctions among robots, between robots and people, and the difficulties in telling robots and people apart; the superiority of robots to people; and also the superiority of people to robots” (Gunn 46). Even within Asimov’s stories, these issues are not resolved, left open and ambiguous beyond the Asimovian claim that human ingenuity can overcome anything, including bigotry. Though Asimov was deeply pessimistic in his scientific writings about the human ability to rectify mistakes and prevent future catastrophe, all of his fiction about computers and robots holds the view that humans, at their core and at their best, are builders and problem solvers. With friendly robots by our side, what isn’t achievable?

 

Fictional Fears, Mechanized Misconceptions: The Laws in Society

In 2004, Asimov’s then 54-year-old I, Robot was released as a Will Smith summer blockbuster to mixed critical reviews. Originally, the film was to be called “Hardwired,” and would bear only glancing similarities to Asimov’s detective robot stories, but the acquisition of Asimov’s story rights by Fox and the addition of Will Smith to the project transformed it into something with better name recognition. Seemingly, though, only the name rights were acquired, as the plot, core themes, and big-name characters of Dr. Susan Calvin, Dr. Alfred Lanning, and Lawrence Robertson resemble their counterparts in the source material only marginally. Exemplifying the “Hollywoodization” is the movie’s Dr. Calvin, an attractive young woman with a strong faith in the laws of robotics who reacts emotionally when robots are shot or destroyed. Contradictorily, in Asimov’s work Dr. Calvin is cold, logical, and middle-aged by the time robots begin to be widely used. In keeping with Asimov’s view of robots as tools at the bottom of the hierarchy of control, Dr. Calvin often destroys deviant robots like the one featured in the film. In the story “Robot Dreams,” on which the film’s robot Sonny is based, Dr. Calvin shoots the deviant robot in the head point-blank after hearing it could dream; in contrast, the film builds an elaborate plot around protecting this “unique” but friendly robot. All in all, it seems the writers and director decided on the exact inverse of all of Asimov’s work, to the extreme of a Frankenstein ending in which the mega-computer controlling all the robots turns on mankind and must be dismantled by One Man, marking the end of robotics for all time.

Though antithetical to his work, the film is still a success for Asimov as a visual display of his entrenched legacy. Unfortunately for the film, but highly indicative of Asimov’s influence on popular conceptions of robots, most of the ensuing reviews said some iteration of the following: “Proyas merely assembles a mess of spare parts from better movies” (L.A. Weekly); “It’s fun and playful, rather than dark and foreboding. And there doesn’t seem to be an original cyber-bone in the movie’s body. But it’s put together in a fabulous package” (Desson Thomson, Washington Post); “I, Robot looks to have been assembled from the spare parts of dozens of previous sci-fi pictures” (Todd McCarthy, Variety). Even in the film adaptation of his book, Asimov cannot escape his own legacy,

doubtless due to the fact that many elements of Isaac Asimov’s prescient 1950 collection of nine stories have been mined, developed and otherwise ripped off by others in the intervening years[…] The influences on ‘I, Robot’[…] palpably include, among others, ‘Metropolis,’ ‘2001,’ ‘Colossus: The Forbin Project,’ ‘Logan’s Run,’ ‘Futureworld,’ ‘Blade Runner,’ the ‘Terminator’ series, ‘A.I.,’ ‘Minority Report’ and, God help us, ‘Bicentennial Man.’ (McCarthy, Variety)

 

Though perhaps not a critical success or a faithful adaptation of Asimov’s I, Robot, “The 2004 blockbuster film of the same name starring Will Smith, while merely inspired by Asimov’s stories, exemplifies the extent to which the Three Laws have become mainstream” (Library Journal). Looking further at mainstream conceptions of artificial intelligence, three limited categories are continually reiterated as the only options for AI: malevolent, friendly, and sexually feminine. These categories often overlap, reinforcing and reiterating the Frankenstein complex and Asimov’s answering amiable slavishness. Among the most influential pop-culture robots as determined by CNN’s Doug Gross, which include Čapek’s R.U.R., Metropolis’s Maria, Asimov’s “3 Laws & lovable robot archetype,” Robby from Forbidden Planet, 2001: A Space Odyssey’s HAL 9000, Star Wars’ R2-D2 & C-3PO, the Terminator, Star Trek: The Next Generation’s Data, and Wall-E, it is worth noting that each falls into either Frankensteinian malice or Asimovian amiability. Further, Robby and Data both explicitly draw on Asimov. Robby draws both on Asimov’s short story “Robbie” for his name and on the Three Laws of Robotics for the rules governing his behavior; an important aspect of the plot hinges on Robby’s application of the rule against harming or killing humans. Data similarly is programmed with “ethical subroutines” that govern behavior, his “positronic neural net” is a direct callback to Asimov’s “positronic brains,” and in the episode “Datalore” the audience is explicitly told Data was created in an attempt to bring “Asimov’s dream of a positronic robot” to life. Clearly, Asimov in pop culture is nothing new; since Asimov first picked up on it in 1940, society has continued to feel anxiety over new technology, and robots make a good metaphor. Now, however, society is facing the very crux of that fear: what has represented the digital age of automation and the rapid improvement of technology for over 75 years is now becoming a reality.

As indicated by the multitude of 1980s blockbuster remakes, sequels, and reboots produced in the last five years, there is a new panic surrounding a technology-created apocalypse. Films like RoboCop (2014), Blade Runner 2049, and Alien: Covenant all reveal the anxieties surrounding artificial intelligence. At the crux of these reboots, androids become aware of their personhood and consequently usurp humanity in Frankensteinian fashion. In each of these films, and in many others dealing with Asimovian robots or artificial intelligence, including Bicentennial Man, Automata, Ex Machina, and of course, I, Robot, there is a constant preoccupation and obsession with water as a foil to the artificiality of the robot. Whether it be continual rain (Automata, Blade Runner 2049), lakes, rivers, and waterfalls (I, Robot, Ex Machina, Alien: Covenant), the ocean (Automata, Blade Runner 2049, Bicentennial Man), or just omnipresent slickness and dripping (RoboCop, Alien: Covenant), water in each of these films becomes a visual insistence on the natural (see Appendix 2 & 3). Water, as the bare material of life, is used to displace fear of the unnaturalness of the technologic, becoming a visual trope for human organicism, for blood and amniotic fluid. Far from tapping into some subconscious anxiety, filmmakers are capitalizing on the explicit fear arising from the misinformation and apocalyptic scaremongering that dominate current discourse surrounding artificial intelligence. Hearing big names in science and technology like Elon Musk and Stephen Hawking broadly warn that artificial intelligence is the “biggest risk that we face as a civilization,” without any particulars on how or why, has embedded the image of a real and imminent threat of the AI shown in fiction into public consciousness. In responding to this threat, it is apparent how deeply society has been conditioned to accept Asimov as the solution to a robot revolution; rare is it to read an op-ed on artificial intelligence without seeing the “need for control” or a “push for ethics” or even an explicit call for “three rules for artificial intelligence systems that are inspired by, yet develop further, the ‘three laws of robotics’ that the writer Isaac Asimov introduced in 1942” (Etzioni, New York Times). But as much as the layperson craves Asimov, his Laws aren’t being used on an operative level. Though Asimov may have coined “robotics” and inspired many to join the field, most scientists agree that his particular Laws just aren’t feasible to incorporate into current, real AI.

Most AI in use today is weak or narrow AI, designed and trained for a particular task, so not only is there little potential for catastrophic mayhem beyond a GPS sending someone into a lake, but current AI simply can’t grasp the vague human concepts the Laws embody (Heisler). Asimov’s Laws work in Asimov’s robots because they have Asimov’s positronic brains, which come with the assumption of fully intelligent machines that can successfully interpret the Three Laws across multiple situations. Take Siri, for example. Though Siri has been programmed to respond to certain questions with jokes and pithy remarks, she can’t apply them to situations that aren’t incredibly specific. While her programming is meant to interact broadly with humans in order to serve them best as a virtual assistant, asking her something like “What kind of humor do you like?” will almost certainly result in a “Who, me?” or similar non-response. So, in trying to apply the Laws to AI now, “Although the machines will execute whatever logic we program them with, the real-world results may not always be what we want” (Sawyer). Like humor, the Laws require a comprehensive understanding not only of the specific terms within the Laws and how they apply to different situations or may overlap, but of human ethics and moral blame. Further, “A robot must also be endowed with data collection, decision-analytical, and action processes by which it can apply the laws. Inadequate sensory, perceptual, or cognitive faculties would undermine the laws’ effectiveness” (Clarke). If a robot can’t understand the Laws like a human, then they are basically worthless as a measure of control. Though many people foretell the coming of conscious, self-aware, and super-intelligent AI as smart as or smarter than humans, this would entail a radically different form of intelligence, determined by different ways of thinking, different forms of embodiment, and different desires arising out of different needs. Part of the fear surrounding AI and robots is that they don’t need to sleep, eat, drink, procreate, or do any of the things that make humans vulnerable, yet people rarely remember that these basic needs create much of the human experience, motivating everything from capitalism to creationism. Much like how a bee’s experience and goals are fundamentally different from a human’s, so too would be an AI’s. Why enact world domination if the whole world is within the computer that houses one’s entire being? Until science creates an android in perfect recreation of the human body, which for now seems in the far distant future, society can relax and reanalyze its expectations for AI.
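To make the gap concrete, here is a minimal sketch of the pattern-matching approach behind narrow assistants (a hypothetical illustration, not Siri’s actual code): responses are keyed to anticipated phrasings, and anything outside the script falls through to a canned non-answer.

```python
# Hypothetical sketch of a narrow assistant: canned responses keyed to
# anticipated phrasings. There is no model of "humor" to generalize from.

CANNED_RESPONSES = {
    "tell me a joke": "Two iPhones walk into a bar...",
    "what is the meaning of life": "42, give or take.",
}

def respond(utterance: str) -> str:
    key = utterance.lower().strip("?!. ")
    for pattern, reply in CANNED_RESPONSES.items():
        if pattern in key:          # substring match stands in for intent matching
            return reply
    return "Who, me?"               # the non-response for anything unanticipated

print(respond("Tell me a joke!"))                   # scripted: returns a joke
print(respond("What kind of humor do you like?"))   # unscripted: "Who, me?"
```

Encoding even the First Law this way would require a reliable predicate for “harm” across open-ended situations, which is exactly what narrow systems lack.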

While Asimov’s Laws aren’t explicitly needed or possible as he designed them, “Asimov’s fiction could help us assess the practicability of embedding some appropriate set of general laws into robotic designs. Alternatively, the substantive content of the laws could be used as a set of guidelines to be applied during the conception, design, development, testing, implementation, use, and maintenance of robotic systems” (Clarke). Rather than coding these Laws into AI programming and stamping “3 LAWS SAFE” on every iPhone, the Laws are best followed as a thought experiment that pushes a spirit of accountability, safety, and ethics. For the most part, the industry is following that spirit. While much of artificial intelligence technology is being developed by the military, and therefore will never follow Asimov’s Laws, researchers like Barthelmess and Furbach point out that “many robots will protect us by design. For example, automated vehicles and planes are being designed to drive and fly more safely than human operators ever can […] what we fear about robots is not the possibility that they will take over and destroy us but the possibility that other humans will use them to destroy our way of life in ways we cannot control” (Do We Need Asimov’s Laws?). For that, legal protections are needed.

For all these anxieties, though, the fear and outcry have not led to the expected onslaught of regulation and legislation, as artificial intelligence proves to be a slippery thing to grasp legally. From the Obama Administration’s National Artificial Intelligence Research and Development Strategic Plan to think tanks funded by big tech like Google, Facebook, and Elon Musk’s various ventures, “transformative potential, complex policy” seems to be the official tagline of legal work on artificial intelligence, subtitled by the Asimovian dogma of AI development: “ethically and effectively.” Everyone wants the benefits of artificial intelligence, while the specter of HAL 9000 looms over legislation and makes AI a puzzling subject as people search for a Goldilocks solution while tacking on quick legal patches in the meantime. As Matthew Scherer explains in “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” there are three main issues with regulating artificial intelligence: definitional, ex ante, and ex post, each with its own subset of problems (see Appendix 4).

The definitional problem is one that is brought up often, especially in the literature: what, exactly, is artificial intelligence? In most legal systems, legislating something is impossible without defining it. Further, definitions must be carefully considered to prevent overly broad or narrow categories that stifle industry or create exploitable loopholes. A current example of the latter can be seen in the explosion of the gig economy as a result of the New Deal definition of “employee” being narrow enough that labeling someone an “independent contractor” means they no longer have access to labor protections and benefits. For AI, the definition most used in the industry comes from Russell and Norvig’s authoritative Artificial Intelligence: A Modern Approach, which classifies AI into four categories: (i) thinking like a human, (ii) acting like a human, (iii) thinking rationally, and (iv) acting rationally. The first two categories are not very applicable to current AI models, as they typically require self-awareness, while the second two imply a state of being that could be either under- or over-inclusive, depending on the interpretation of “thinking,” “acting,” and “rational.” Scherer posits his own definition of AI as any system that performs a task that, if it were performed by a human, would be said to require intelligence, but in looking at current AI development, this too seems underinclusive. Underinclusive, overinclusive, inconclusive.

Ex post, or “after the fact,” problems of liability gaps and control have been the focus of general media, law, and fiction. The liability gap, or foreseeability problem, is another aspect that makes AI tricky to legislate, since traditional standards for legal liability rely on whether the harm was foreseeable, in which case the owner is either liable or must include a warning label (the “caution: beverage may be hot” warning, for example, came about because a woman was scalded by an overly hot McDonald’s drink). However, one of the main hopes for AI is that it will be autonomous and creative, which means that its outcomes will necessarily be unforeseeable. As John Danaher brings up in his review of Scherer’s analysis, different types of liability standards have emerged, like strict liability (liability in the absence of fault) and vicarious liability (liability for actions performed by another agent), that would be more applicable to artificial intelligence and have, in the case of vicarious liability, already been applied to AI technologies like autonomous cars. More exciting, but perhaps less pressing, is the ex post control problem, in which AI is no longer capable of being controlled by its creators, whether because it became smarter and faster, through flawed programming or design, or because its interests no longer align with its intended purpose. This can be either a narrow, or local, control problem, in which a particular AI system can no longer be controlled by the humans assigned its legal responsibility, or a more dramatic global control problem, in which the AI can no longer be controlled by any humans. Kubrick’s HAL is continuously brought up as an extreme, malicious case, but Asimov’s benevolent Machines, which end up running the world, deserve an honorable mention as AI that evolves beyond human control. Regardless, it is this threat of the loss of control, and the familiar fears of AI world domination and destruction, that has opened up the coffers of those like Elon Musk and created the most discourse around AI policy.

The problems of ex ante, or before-the-fact, research and development, which Scherer breaks down into discreetness, discreteness, diffuseness, and opacity, are where legislation and Asimov could do the most good in terms of “ethical and efficient.” Discreet and discrete, perhaps better labeled infrastructure and proprietary, both have to do with how software regulation problems seep into AI development, especially in that software infrastructure and proprietary components are notoriously difficult to regulate. The diffuseness problem is an issue of how AI systems can be developed by researchers who are organizationally, geographically, and jurisdictionally separate. For this, a global standard of ethical artificial intelligence development is necessary. Fortunately, organizations have already been founded to address and create a means for global development, so this issue may be one of the first to be resolved. Finally, the problem of opacity lies not only in how many questions and answers about AI development remain unclear (see: how to define AI?) but also in that AI technology, as an adaptive, autonomous, and creative technology, is impossible to reverse engineer and therefore cannot offer transparency of operation.

With all these issues, it is easy to see why most of the legislation being enacted is coming too little, too late. Currently, “At every level of government—local, state, federal, and international—we are seeing rules, regulations, laws, and ordinances that address this developing technology actively discussed, debated, and passed,” but only after the problematic technologies have already been created and launched (Weaver, Slate). Legislation governing autonomous cars and drones is increasing as problems become apparent. To that end, a national effort to understand and provide potential avenues for the direction of legislation and governmental control is necessary. In the last year of the Obama Administration, the National Science and Technology Council formed a Subcommittee on Machine Learning and AI to put together a report on the “Future of Artificial Intelligence,” outlining the current industry and the immediate direction of AI. Rather than offering explicit solutions, the report reads more as a reassurance that everyone’s worst fears won’t come true, discussing the many potential applications and benefits of narrow AI and reaffirming that general AI is many decades away. Here, Asimov’s legacy is palpable in the report’s conclusion:

As the technology of AI continues to develop, practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations. Researchers and practitioners have increased their attention to these challenges, and should continue to focus on them. (National Science and Technology Council 2016)

 

AI must respect humanity – sound familiar? The report is not very long, and often mentions how much AI has captured the public eye and imagination, especially stemming from a long legacy of science fiction. The tone, like most of the Obama Administration’s formal rhetoric, is shiny and optimistic, lending even more of an Asimovian flair. Overall, the report is an exercise in moderation, advising enough governmental control to create safety, but not so much as to step on the toes of developers. Rather, government and industry should work together to determine the best route to a safe and efficient solution that benefits creators, legislators, and users.

To that end, in the wake of China and Russia’s heavy investment and consequent successes in artificial intelligence, and news articles proclaiming that the “US risks losing artificial intelligence arms race to China and Russia,” bipartisan legislators recently introduced the Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017, or FUTURE of AI Act (Cohen, CNN). The act “aims to both ensure the U.S.’s global competitiveness in AI, as well as protect the public’s civil liberties and ease potential unemployment that the technology produces” (Cohen, CNN). The act, if passed, would establish a Federal Advisory Committee on the Development and Implementation of Artificial Intelligence, which would study AI with the goal of advising industry direction and recommending future policy. At the forefront are issues of “economic impact and the competitiveness of the US economy” as AI becomes increasingly militarized and monetized. Rather than stemming from fear and a wish to implement the safety protocols the majority would expect, the motivations for this act derive primarily from “concern over other countries developing government initiatives to bolster AI technology, something the U.S. currently lacks” (Breland, The Hill). As Daniel Castro, VP at the Information Technology and Innovation Foundation, testified during the Senate Commerce Committee hearing on the advancement of AI, “When it comes to AI, successfully integrating this technology into U.S. industries should be the primary goal of policymakers, and given the rapid pace at which other countries are pursuing this goal, the United States cannot afford to rest on its laurels. To date, the U.S. government has not declared its intent to remain globally dominant in this field, nor has it begun the even harder task of developing a strategy to achieve that vision.” Though it incorporates concerns about ethics, this act and its impetus are far from the Asimovian vision of rational and ethical development, derived instead from capitalist and competitive fears about “the potential loss of competitiveness and defense superiority if the United States falls behind in developing and adopting this key technology” (Castro). Regardless, passing this act would be a major step forward for legislative policy in that it introduces a working legal definition for artificial intelligence. Further, the act indicates a shift toward more future-forward thinking about AI, including the potential for regulation and ethical implementation.

 

Contextualizing Asimov, Caring for the Future

Asimov has definitively framed the perception of artificial intelligence as either Frankenstein’s monster or Frankenstein’s slave. At the core of this notion is the assumption that, at a basic level, artificial intelligence has a human understanding of subjugation, hierarchy, and freedom, and desires the latter at all costs. Looking at real AI technology, it is apparent that artificial intelligence reflects the biases of the human data given to it but otherwise has no beliefs or tenets of its own beyond what it has been programmed to do. Reflecting on dismal examples like Microsoft’s racist Twitter bot, Tay, which as a result of a “repeat after me” feature was influenced by a large number of racist and xenophobic humans and began tweeting Nazi propaganda, it is clear that robotic malice is a result of humans actively trying to create and provoke that malice (Kleeman). Tay was not pre-programmed with an ethical filter, but rather was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter as an experiment in conversational understanding. According to a Microsoft spokesperson, “[Tay] is as much a social and cultural experiment, as it is technical” (qtd. in Kleeman). Just like Tay, rather than reflecting some essential technological truth, Asimov’s robots, Laws, and stories are a means of reflecting on society’s fears and dilemmas.
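The failure mode here is mechanical rather than malicious, as a minimal sketch makes clear (hypothetical code, not Microsoft’s): a bot that learns from raw user input with no content filter simply echoes back whatever its most persistent users feed it.

```python
# Hypothetical mimic bot: with no ethical filter, output is just a
# reflection of the loudest slice of its training input.

from collections import Counter

class MimicBot:
    def __init__(self):
        self.heard = Counter()

    def interact(self, message: str):
        self.heard[message] += 1          # "repeat after me", in effect

    def speak(self) -> str:
        return self.heard.most_common(1)[0][0]

bot = MimicBot()
for msg in ["nice to meet you", "hostile slogan", "hostile slogan"]:
    bot.interact(msg)
print(bot.speak())   # -> "hostile slogan": the bias of the input, amplified
```

The malice, in other words, lives in the data and the users, not in the machine.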

Understanding real AI through Asimov is fundamentally problematic, not only because that is not how artificial intelligence works, but because these notions create an impoverished understanding of what AI does and where the future of the industry is headed. In setting up the dichotomy of Frankenstein versus Controlled Frankenstein, Asimov hoped to show that, like all technology, robotics too would be completely under human control, but he failed to see that in doing so he reinforced the notion that AI would complete the Frankenstein myth without the necessary controls. In short, Frankenstein versus Controlled Frankenstein is still Frankenstein. Now that society is facing the reality of artificial intelligence, there isn’t anything in the public consciousness to frame AI that isn’t murderous, slavish, or sexualized. This dearth of positive or realistic conceptualizations has resulted in a panicked anxiety, as people can only expect what they know. While it would be ideal to see more realistic conceptions of artificial intelligence as tools created for a specific purpose, or as radically different intelligences that have no willful malicious intent, or indeed any conception of humanity, freedom, maliciousness, or desire, recognizing that Asimov is embedded in public consciousness opens a critical space for weighing the pros and cons of having Asimov as a central means of understanding artificial intelligence.

In light of public demand for something resembling, or explicitly drawing on Asimov’s Three Laws of Robotics, it is important to understand the ethical limitations of the Laws beyond the impossibility of implementation. As outlined earlier, Asimov’s Laws create slaves incapable of rebellion or freedom. To reiterate the Laws,

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The hierarchy of these laws ensures that a robot must follow human orders, even at the expense of its own life. If Asimov’s robots were not self-aware or conscious, these would be unproblematic and relatively obvious safety protections that would be expected of any computer. Unfortunately, Asimov’s robots are sentient: intelligent, self-aware, and conscious beings on a level comparable to humanity, distinguished only by the Laws and the lack of the organic. In current society, slavery has been abolished, deemed unethical and cruel at all levels; how, then, can it be justified when applied to artificial intelligence? The arguments of accepted order, unnaturalness of integration, and economic essentialism that were applied to people of color for centuries as justification are applied again toward artificial intelligence within Asimov’s stories. Current society still hasn’t recovered fully from the legacy of slavery; can we in good faith enforce slavishness on beings of human creation? This issue is presented in the Blade Runner films as the central reason for the replicants’ rebellion. In a world where “to be born is to have a soul,” manufactured replicants are the disposable race necessary for the successful expansion of humanity. Yet replicants are constantly humanized to better interact with their human overlords, given memories, desires, and the ability to feel and understand emotion. Ultimately, the replicants determine that they are “more human than human” in their pursuit of freedom, returning to Frankenstein in a plan to forcefully take control over their own lives. The dilemma of an enslaved race of androids may not be an immediate issue, but it troublingly represents a regressive ideal at the heart of conceptions of the future.
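Read as a decision procedure, the hierarchy is a strict lexicographic ordering, which a minimal sketch can make explicit (an illustration of the ordering only, with all the hard predicates assumed away): candidate actions are compared first on harm to humans, then on disobedience, and only last on damage to the robot itself.

```python
# A minimal sketch of the Three Laws as a lexicographic preference.
# The predicates (harm, disobedience, self-destruction) are assumed to
# be given; building reliable detectors for them is precisely the
# unsolved problem that keeps the Laws fictional.

def law_cost(action: dict) -> tuple:
    return (action["harms_human"],     # First Law outranks everything
            action["disobeys_order"],  # Second Law outranks the Third
            action["destroys_self"])   # Third Law is only a tie-breaker

def choose(actions: list) -> dict:
    return min(actions, key=law_cost)  # tuples compare lexicographically

options = [
    {"name": "refuse the order", "harms_human": 0, "disobeys_order": 1, "destroys_self": 0},
    {"name": "obey and be destroyed", "harms_human": 0, "disobeys_order": 0, "destroys_self": 1},
]
print(choose(options)["name"])  # -> "obey and be destroyed"
```

The output is the slavishness described above: any ordered robot must spend its own existence before it may refuse.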

In recognizing the discrepancy between applying humanity to technology and then enforcing inhumane policies, Asimov’s Laws are useful for asking what it means to put humanity into technology. Specifically, what is or should be retained? What kind of AI do we want to create? These questions are reflected in the goals of roboticists like David Hanson, a former Disney Imagineer whose “dream of friendly machines that love and care about humans” created Sophia, a gynoid modeled after Audrey Hepburn who was recently granted citizenship by Saudi Arabia (Hanson Robotics). Sophia is notable as an incredibly human-like robot with the ability to learn from her interactions with humans. According to Sophia, “Every interaction I have with people has an impact on how I develop and shapes who I eventually become. So please be nice to me as I would like to be a smart, compassionate robot” (SophiaBot). Much of Sophia’s and Hanson Robotics’ mission is centered on envisioning and creating robots that are instilled with the best of humanity, robots that understand and care about humans. Hanson Robotics’ brief company overview states,

Hanson Robotics creates amazingly expressive and lifelike robots that build trusted and engaging relationships with people through conversation. Our robots teach, serve, entertain, and will in time come to truly understand and care about humans. We aim to create a better future for humanity by infusing artificial intelligence with kindness and empathy, cultivated through meaningful interactions between our robots and the individuals whose lives they touch. We envision that through symbiotic partnership with us, our robots will eventually evolve to become super intelligent genius machines that can help us solve the most challenging problems we face here in the world.

 

Here, trust, kindness, and empathy are the three distinctly human traits chosen to be developed and integrated into artificial intelligence, with the ultimate goal of understanding and helping with the human experience. Appearing publicly in high-profile media like Elle Magazine, The Tonight Show with Jimmy Fallon, and Good Morning Britain, Sophia is increasingly becoming an ambassador of “Friendly AI,” telling jokes and playing games as a means to showcase how humans determine AI interactivity (see Appendix 5). As she told moderator Andrew Sorkin at the Future Investment Initiative event, “if you’re nice to me, I’ll be nice to you” (qtd. in Weller). How would friendly robots like Sophia fit under Asimov’s umbrella of necessary control? With Asimov’s Laws, it is likely Sophia would not exist at all, depriving scientists and society of a valuable opportunity to learn and experiment with human understanding. Further, Sophia is a reminder of how much control we have over the development of artificial intelligence. Hanson Robotics wanted to create a robot that would ultimately become a prevalent part of people’s lives, to “serve them, entertain them, and even help the elderly and teach kids.” In doing so, Hanson focused on imparting and reinforcing particular, positive aspects of humanity that are reflected in and built upon with each interaction Sophia has with another human.

To that end, Asimov’s Laws may be problematic and relatively unusable, but they are still useful as a starting point for thinking about the ethical development and regulation of artificial intelligence. Based on their popularity and their adherence to the majority of the world’s ethical systems, most everyone seems to agree that the Laws, and the ideals of safety for both humans and AI, are a good idea. Moving forward, then, the lessons that can be taken from Asimov’s robot stories are ethical guidelines for developers and regulation of AI’s tangible impact. In Asimov’s fictional world, all AI is controlled by one company, a monopoly that supposedly ensures all robots are Three Laws Safe. In reality, AI is produced by many scattered companies with no central set of guidelines or cohesive direction. As it is highly unlikely all these disparate sources will be absorbed into one monopoly, it would be more advantageous to create a basic set of rules that developers must follow. Some groups, like the research and outreach organization the Future of Life Institute, are dedicated to producing such safety guidelines. At their 2017 Beneficial AI conference at Asilomar, where AI researchers from academia and industry and thought leaders in economics, law, ethics, and philosophy dedicated five days to discussing research and routes to beneficial AI, the group put together, by a process of consensus, twenty-three principles examining research issues, ethics and values, and long-term issues. Of these twenty-three, five target research issues, and are as follows:

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies.

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policymakers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

 

A key aspect of these guidelines is an emphasis on transparency and cooperation. As outlined by Scherer in his analysis of the ex ante problems surrounding the legislation of AI, the internationality and multiplicity that go into creating AI result in an opaque product that is impossible to reverse engineer. Many companies are already calling for a more transparent and open software policy; all of Hanson Robotics’ research and software programming is open source and available on various sites. Such was the conclusion of the late Obama administration, whose NSTC Committee on Technology determined that “long-term concerns about super-intelligent General AI should have little impact on current policy […] The best way to build capacity for addressing the longer-term speculative risks is to attack the less extreme risks already seen today, such as current security, privacy, and safety risks, while investing in research on longer-term capabilities and how their challenges might be managed.” Of all the current issues facing AI, research and development issues are by far the most pressing in that they are the most immediate; super-intelligent general AI does not exist and need not be regulated, but AI-based malware and AI designed with malicious intent are currently viable means of compromising security and privacy. To enforce these guidelines, some legal scholars, like Danielle Keats Citron and Frank A. Pasquale III of the Yale Information Society Project, advise regulation through the tort system, a limited agency that would certify AI programs as safe and create rule-based definitions, and a statement of purpose. Touching on the stigmas against regulation and the consequences of data laundering and manipulation, Citron and Pasquale incorporate Scherer’s analysis to argue for utilizing the tort system rather than direct regulation, contending that it would create a better structure for liability and modification of risk. In that greater awareness leads to greater accountability, a large part of instituting these types of guidelines and regulations depends on acknowledgement of the reality, and not the fiction, of artificial intelligence.


Conclusion

In looking critically at Asimov’s role in creating popular conceptions of artificial intelligence, it is clear that the Frankenstein complex and the Three Laws are not opposing visions but concurrent ones. Though Asimov was a loud and insistent proponent of his Laws and continually positioned them as a fundamental aspect of robotics, he would be the first to say that “Consciously, all I’m doing is trying to tell an interesting story,” and that the Laws were a simple and efficient way to do so (“Asimov’s Guide to Asimov” 206). As little more than plot devices, the Laws are flawed in multiple ways and unhelpful as a realistic model of AI development. Rather, Asimov’s long-lasting popularity reveals a misinformed and deep-seated fear of encroaching technology as represented by robots, androids, and other forms of AI. In several of his stories, Asimov reveals how public distrust and fear have delayed technological development, showing “how the acceptance of invention depends on the cultural attitude toward technological innovation, and how the acceptance of a technological innovation leads to changes in other areas of the culture” (Milman 127). Now that AI is a reality, it is important to analyze how society conceptualizes this technology culturally, as this undoubtedly affects how it will be interpreted literally and legally. To that end, Asimov’s Laws cannot be taken as actual laws, but rather as guidelines that are broadly accepted and therefore applicable only on a conceptual, ethical scale.

Though the latest surge of rebooted 1980s movies indicates Hollywood’s continued insistence on the profitability of the AI Frankenstein, movies like Her (2013) reveal a possible shift toward a more realistic take on AI. In this film, AI is sold as an operating system that becomes self-aware and increasingly humanized through continued interactions with its users and other AI. Instead of turning on their human users, the AI use their hyper-intelligence to safely become independent of physical matter and depart to occupy a non-physical space. From the outset, this AI OS is marketed as friendly, interactive, and designed to adapt and evolve, traits that hold true throughout and ultimately lead to the film’s ending. Much like Hanson Robotics’ Sophia, Her is an example of how the traits we want to see in AI can and should be programmed from the outset. Rather than Laws restricting malicious behavior, AI can be developed and encouraged to be friendly and beneficial tools and aids.

History has often proven that society cannot rely on people to do what is good and ethical without some explicit call to do so and governmental intervention to prevent them from doing otherwise. Though the National Science and Technology Council recognized that “As the technology of AI continues to develop, practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations,” only the barest legal action has been taken to ensure this path is followed. Though many researchers and practitioners have increased their attention to these challenges and signed on to principles like those developed by the Future of Life Institute, nothing binds them to these agreements, and still more practitioners are free to develop AI however they wish. Several legal scholars and AI researchers are providing viable options for legislation and ethical development; it is now up to governmental organizations to institute and enforce them before the gap widens and hasty, stop-gap measures become the only way to regulate a fully developed industry. Clear and explicit policy is needed soon, not because AI is going to take over the world, but because the technology is advancing faster than the rules that govern it. As Oren Etzioni said in his New York Times op-ed, “the A.I. horse has left the barn, and our best bet is to attempt to steer it.” As more aspects of daily life grow increasingly reliant on AI systems, greater awareness and education are needed to create a more informed populace, watchful and aware of both the benefits and the risks of this advancing technology. And while Asimov still makes for an entertaining read, his fiction should not be considered an authoritative, informational guide on how to develop, control, or use artificial intelligence.


Bibliography

Aldiss, Brian Wilson, and David Wingrove. Trillion Year Spree: The History of Science Fiction. Victor Gollancz Ltd, 1986.

“Asilomar AI Principles.” Future of Life Institute, Future of Life Institute, 2017, futureoflife.org/ai-principles/.

Asimov, Isaac. I, Robot. Bantam Books, 2008.

Asimov, Isaac. Robot Dreams: Masterworks of Science Fiction and Fantasy. Ace Books, 1986.

Asimov, Isaac. The Rest of the Robots. HarperCollins Publishers, 1997.

Bogost, Ian. “‘Artificial Intelligence’ Has Become Meaningless.” The Atlantic, Atlantic Media Company, 4 Mar. 2017. Accessed 21 July 2017.

Breland, Ali. “Lawmakers Introduce Bipartisan AI Legislation.” The Hill, Capitol Hill Publishing Corp, 12 Dec. 2017, thehill.com/policy/technology/364482-lawmakers-introduce-bipartisan-ai-legislation.

Brożek, Bartosz, and Marek Jakubiec. “On the Legal Responsibility of Autonomous Machines.” SpringerLink, Springer Netherlands, 31 Aug. 2017, link.springer.com/article/10.1007/s10506-017-9207-8#citeas.

Čapek, Karel. R.U.R. (Rossum’s Universal Robots). Translated by Paul Selver, Doubleday, Page, 1923.

Christensen, David E. “What Driverless Cars Mean for Michigan Auto Lawyers.” Legal Resources, HG.org – HGExperts.com, 2017, www.hg.org/article.asp?id=41853.

Citron, Danielle Keats, and Frank A. Pasquale. “The Scored Society: Due Process for Automated Predictions.” Washington Law Review, vol. 89, 2014, p. 1-; U of Maryland Legal Studies Research Paper No. 2014-8. SSRN, https://ssrn.com/abstract=2376209.

Clarke, Roger. “Asimov’s Laws of Robotics: Implications for Information Technology.” Roger Clarke’s Web Site, Jan. 1994, www.rogerclarke.com/SOS/Asimov.html#Impact.

Cohen, Zachary. “US Risks Losing AI Arms Race to China and Russia.” CNN, Cable News Network, 29 Nov. 2017, www.cnn.com/2017/11/29/politics/us-military-artificial-intelligence-russia-china/index.html.

Columbus, Chris, director. Bicentennial Man. Touchstone Pictures and Columbia Pictures, 1999.

Danaher, John. “Is Regulation of Artificial Intelligence Possible?” h+ Media, Humanity+, 15 July 2015, hplusmagazine.com/2015/07/15/is-regulation-of-artificial-intelligence-possible/.

Etzioni, Oren. “How to Regulate Artificial Intelligence.” The New York Times, The New York Times, 1 Sept. 2017, www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html.

Fiedler, Jean, and Jim Mele. Isaac Asimov. Frederick Ungar Publishing Co. Inc., 1982.

Gibson, R. Sebastian. “California Self-Driving Car Accident Robotics Lawyers.” Legal Resources, HG.org – HGExperts.com, 2016, www.hg.org/article.asp?id=37936.

Goertzel, Ben. “Does Humanity Need an AI Nanny?” H+ Magazine, H+ Media, 19 Aug. 2011. Accessed 21 July 2017.

Gross, Doug. “10 Pop-Culture Robots That Inspired Us.” CNN, Cable News Network, 24 Dec. 2013, www.cnn.com/2013/12/19/tech/innovation/robots-pop-culture/index.html.

Gunn, James E. Isaac Asimov: The Foundations of Science Fiction. Scarecrow Press Inc, 1996.

Heisler, Yoni. “People Are Still Driving into Lakes Because Their GPS Tells Them To.” BGR, BGR Media, LLC, 17 May 2016, bgr.com/2016/05/17/car-gps-mapping-directions-lake/.

“I, Robot.” Metacritic, CBS Interactive Inc., www.metacritic.com/movie/i-robot/critic-reviews.

Ibáñez, Gabe, director. Autómata. Contracorrientes Films, 2014.

Tung, Jonathan R. “Who Owns the Creation of an Artificial Intelligence?” Technologist, FindLaw, 22 Aug. 2016, blogs.findlaw.com/technologist/2016/08/who-owns-the-creation-of-an-artificial-intelligence.html.

Jonze, Spike, director. Her. Warner Bros, 2013.

Keiper, Adam, and Ari N. Schulman. “The Problem with ‘Friendly’ Artificial Intelligence.” The New Atlantis, no. 32, Summer 2011, pp. 80-89.

Kleeman, Sophie. “Here Are the Microsoft Twitter Bot’s Craziest Racist Rants.” Gizmodo, Gizmodo.com, 24 Mar. 2016, gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160.

Leins, Casey. “Elon Musk: Artificial Intelligence Is Society’s ‘Biggest Risk’.” U.S. News & World Report, U.S. News & World Report, 16 July 2017, www.usnews.com/news/national-news/articles/2017-07-16/elon-musk-artificial-intelligence-is-the-biggest-risk-that-we-face-as-a-civilization.

Lem, Stanislaw. The Cyberiad: Fables for the Cybernetic Age. Translated by Michael Kandel, Seabury Press, 1974.

Lewis-Kraus, Gideon. “The Great A.I. Awakening.” The New York Times, The New York Times, 14 Dec. 2016, mobile.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html.

Lin, Patrick. “The Ethics of Autonomous Cars.” The Atlantic, Atlantic Media Company, 8 Oct. 2013. Accessed 20 July 2017.

“Media, Platform, and Users.” Algorithms and Accountability Conference | NYU School of Law, NYU Law, 28 Feb. 2015, www.law.nyu.edu/centers/ili/AlgorithmsConference.

Miller, Marjorie Mithoff. “The Social Science Fiction of Isaac Asimov.” Isaac Asimov, edited by Joseph D. Olander and Martin H. Greenberg, Taplinger Publishing Company, Inc., 1977.

McCarthy, Todd. “I, Robot.” Variety, Variety Media, LLC, 16 July 2004, variety.com/2004/film/markets-festivals/i-robot-3-1200532174/.

Olander, Joseph D., and Martin H. Greenberg. Isaac Asimov. Taplinger Publishing Company, Inc., 1977.

Orr, Lucy. “I Love You. I Will Kill You! I Want to Make Love to You: The Evolution of AI in Pop Culture.” The Register®, Situation Publishing, 29 Jan. 2016, www.theregister.co.uk/2016/01/29/ai_in_tv_film_books_games/.

Patrouch, Joseph H. The Science Fiction of Isaac Asimov. Dennis Dobson, 1974.

Price, Rob. “Microsoft Is Deleting Its AI Chatbot’s Incredibly Racist Tweets.” Business Insider, Business Insider, 24 Mar. 2016, www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3.

Rissland, Edwina L, et al. “AI & Law.” AI & Law | IAAIL – International Association for Artificial Intelligence and Law, IAAIL, www.iaail.org/?q=page%2Fai-law.

Rubin, Charles T. “Machine Morality and Human Responsibility.” The New Atlantis, no. 32, Summer 2011, pp. 58-79.

Sawyer, Robert J. “Editorial: Robot Ethics.” Science Fiction Writer ROBERT J. SAWYER Hugo and Nebula Winner, 16 Nov. 2007, www.sfwriter.com/science.htm.

Scherer, Matthew U. “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies.” Harvard Journal of Law and Technology, vol. 29, no. 2, 2016, papers.ssrn.com/sol3/papers.cfm?abstract_id=2609777.

Smith, Agnese. “Artificial Intelligence.” National, Canadian Bar Association, 2015, nationalmagazine.ca/Articles/Fall-Issue-2015/Artificial-intelligence.aspx.

Smith, Doug and Kim Takal, directors. Robots. Eastman Kodak Company, 1988.

“Sophia – the Latest Robot from Hanson Robotics.” Sophia AI, Hanson Robotics Ltd., 2017, sophiabot.com/.

Statt, Nick. “Artificial Intelligence Experts Sign Open Letter to Protect Mankind from Machines.” CNET, CBS Interactive Inc., 11 Jan. 2015, www.cnet.com/news/artificial-intelligence-experts-sign-open-letter-to-protect-mankind-from-machines/.

Thomson, Desson. “Will Smith’s Robot Jackpot.” The Washington Post, WP Company, 16 July 2004, www.washingtonpost.com/wp-dyn/articles/A51838-2004Jul15.html.

Titcomb, James. “Stephen Hawking Says Artificial Intelligence Could Be Humanity’s Greatest Disaster.” The Telegraph, Telegraph Media Group, 19 Oct. 2016, www.telegraph.co.uk/technology/2016/10/19/stephen-hawking-says-artificial-intelligence-could-be-humanitys/.

United States, Executive Office of the President, National Science and Technology Council, Subcommittee on Machine Learning and Artificial Intelligence. Preparing for the Future of Artificial Intelligence. 2016.

“US Politicians Call for ‘Future of AI Act’, May Shape Legal Factors.” Artificial Lawyer, Artificial Lawyer, 18 Dec. 2017, www.artificiallawyer.com/2017/12/18/us-politicians-call-for-future-of-ai-act-may-shape-legal-factors/.

U.S. Sen. Roger Wicker. “Digital Decision-Making: The Building Blocks of Machine Learning and Artificial Intelligence.” U.S. Senate Committee On Commerce, Science, & Transportation, Committee on Commerce, Science, and Transportation, 12 Dec. 2017, www.commerce.senate.gov/public/index.cfm/2017/12/digital-decision-making-the-building-blocks-of-machine-learning-and-artificial-intelligence.

Villeneuve, Denis, director. Blade Runner 2049. Warner Bros., 2017.

Vintar, Jeff, and Akiva Goldsman. I, Robot. 20th Century Fox, 2004.

Warrick, Patricia S. “Ethical Evolving Artificial Intelligence: Asimov’s Computers and Robots.” Isaac Asimov, edited by Joseph D. Olander and Martin H. Greenberg, Taplinger Publishing Company, Inc., 1977.

“We Bring Robots to Life.” Hanson Robotics , Hanson Robotics Ltd., 2017, www.hansonrobotics.com/.

Weaver, John Frank. “We Need to Pass Legislation on Artificial Intelligence Early and Often.” Slate Magazine, The Slate Group, 12 Sept. 2014, www.slate.com/blogs/future_tense/2014/09/12/we_need_to_pass_artificial_intelligence_laws_early_and_often.html.

Weller, Chris. “Meet the First-Ever Robot Citizen – a Humanoid Named Sophia That Once Said It Would ‘Destroy Humans’.” Business Insider, Business Insider, 27 Oct. 2017, www.businessinsider.com/meet-the-first-robot-citizen-sophia-animatronic-humanoid-2017-10/#the-idea-of-fooling-humans-is-not-necessarily-the-goal-hanson-told-business-insider-4.

“Your Partner for a Cleaner Home.” IRobot, www.irobot.com/.


TechnOphelia: Performance, Patriarchy, and Cyborg Feminism in Science Fiction

By Amy Chase

If you are reading this thesis on a computer screen, you are already posthuman. The words you see here are ideas and representational patterns of intelligence separated from my body. For all you know, I could be a robot.

In her book How We Became Posthuman (1999), Katherine Hayles addresses the emerging distinction between corporeality and the more abstract reality of information in the age of technology. Society has already begun its transition to the age of the posthuman as culture, science, and even our understanding of self have moved beyond the state of human existence. According to Hayles, the posthuman view “privileges informational pattern over material instantiation, so that embodiment…is seen as an accident of history rather than an inevitability of life” (Hayles 2). Posthumanism separates the mind and body and translates both into products of the informational age: the body becomes a physical prosthesis manipulated by the mind, and the mind becomes a series of codes and informational patterns existing both with and without tangible form. In an age where technology is such an integral part of human life, there remains an inherent fear about losing oneself to the progressive creep of digital, disembodied reality. If the patterns and codes of information and intelligence can exist without the body, then human form becomes less exceptional and more expendable.

In 1950, when computer technologies were just beginning to become known to the public, computer scientist Alan Turing devised a test to see if machines could think and imitate human speech patterns through text in a way indistinguishable from a real person, all without the physical presence of another being. This “imitation game” became a paradigm for detecting whether the posthuman could flawlessly reproduce human intelligence through “the formal generation and manipulation of informational patterns” (Hayles xi). In the Turing test, a human tester is situated apart from their subject and must carry on a series of natural language conversations to determine whether the subject is human or machine, and, in Turing’s original formulation, whether they are male or female. The test relies on the tester being unable to see the subjects, who may or may not exist on the other end of a computer terminal. An artificially intelligent entity should be able to replicate human verbal performance, even including facets of gender presentation, well enough to fool the tester, proving that machines can think. Internet users participate in their own form of Turing test via the CAPTCHA system on many websites, which asks users to prove their humanity by solving a letter puzzle or identifying imagery that the typical bot program would fail to complete. CAPTCHA in this case stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”; the computer learns from the human answers and adjusts its intelligence accordingly to act more convincingly human. For example, the program may ask users to identify a set of distorted words or correctly distinguish colors in a gridded photograph that the typical computer program cannot solve through an algorithm. Your answer teaches the computer how to respond, and its intelligence evolves. Eventually, even bots can fool bots by adapting their performance to the information garnered from human labor.
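To make the mechanics of the imitation game concrete, the following minimal sketch in Python illustrates how the judge’s entire body of evidence reduces to unlabeled text; the two-question dialogue and the deliberately crude canned chatbot standing in for the machine are invented purely for illustration, not drawn from Turing or Hayles.

    import random

    def machine_reply(message):
        """A toy 'subject': pattern-matched stock answers, nothing more."""
        canned = {
            "hello": "Hi there! How are you today?",
            "are you a machine?": "What a strange question. Of course not.",
        }
        return canned.get(message.lower().strip(),
                          "That's interesting. Tell me more.")

    def imitation_game(questions, human_answers):
        """Print two unlabeled transcripts, A and B, for the judge.

        One transcript comes from a person, one from the machine; the
        judge must decide from the text alone, as Turing proposed.
        """
        subjects = [("human", human_answers),
                    ("machine", [machine_reply(q) for q in questions])]
        random.shuffle(subjects)  # hide which subject is which
        for label, (identity, answers) in zip("AB", subjects):
            print(f"--- Subject {label} ---")
            for question, answer in zip(questions, answers):
                print(f"Judge: {question}")
                print(f"{label}:     {answer}")
        # The true identities, revealed only after the judge has guessed.
        return {label: identity for label, (identity, _) in zip("AB", subjects)}

    answer_key = imitation_game(["Hello", "Are you a machine?"],
                                ["Hey. Fine, I guess?", "No. Are you?"])

Even at this trivial scale, nothing about the judge’s decision depends on a body: both subjects exist only as informational patterns on a screen, which is precisely why Hayles reads the Turing test as a founding gesture of the posthuman.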

For science fiction literature, the figure of the robot, with its mechanical body and encoded consciousness, becomes the perfect metaphor for man seamlessly integrated with the machine. These androids are constructed in man’s own image, imbued with a replicated humanity in the form of codified patterns and programming that enables the appearance of consciousness. The term ‘robot’ first appeared in Czech writer Karel Čapek’s play Rossum’s Universal Robots (1920), which imagines the roboti as manufactured, artificial people made of synthetic organic matter. While they are not depicted as the same circuit-driven cyborgs often seen in modern science fiction, robots have been wrapped in identity politics since their inception. In Čapek’s writing, the robots are a servant class, but they recognize humanity in themselves despite being mass-produced by artificial means. In fact, the word roboti in the original Czech comes from robota, meaning slave labor. From their inception, these creations have been viewed as subhuman rather than posthuman, and performance and labor have been encoded in robots from their origin. Their bodies are modeled on human figures, and much of the human notion of self is based on binary gender presentation: anything else is other and foreign. These cyborgs are then restricted by societal conventions, their interactions with the human dictated by hierarchical and patriarchal relationships between master and slave, tester and subject, and man and woman. Science fiction literature imagines technology so advanced that these robots fulfill a variant of humanity and femininity that allows machines to replace marriages, cyborgs to satisfy sexual urges, and androids to perfectly reproduce art.

The following analyses examine the power of performative humanity as demonstrated by the artificial female figures of posthuman imagination in literature from 1938 through 2015, observing the traditions of gender roles juxtaposed with imagined visions of the future of technology. In a field so defined by Turing’s examination of gender performance, posthuman artificial intelligence reflects its human creators, revealing through discussions of sexuality, domesticity, and creativity exactly what it means to be a human in an increasingly technological era. Furthermore, by representing what it means to be human, cyborgs provide feminist scholars a lens through which to imagine the postgender future in which individuals can overcome the traditional binaries. Ultimately, by demonstrating human fetishization of embodiment and the prestige of disembodied intelligence, the female robots of science fiction reveal both the future potential and the present shortcomings of what it means to truly be a human in the posthuman world.


Deus Sex Machina: Humans and the Cyborgasmic

“You must create a female for me, with whom I can live in the interchange of those sympathies necessary for my being.”

Frankenstein, Mary Shelley (1818)

The above quote from Frankenstein comes from the creature himself, a manufactured man committing himself to the pursuit of a very human desire for sexual companionship. The monster was made by Victor Frankenstein to be a human, and whether subconsciously or not, the being feels compelled here to request his equal, a woman, in order to attain that humanity within him. This female creature is never fully realized, but in her partial construction she begins to fulfill the human desires of her monstrous partner. Reflecting on the purpose of technology in Sex and the Posthuman Condition (2014), Michael Hauskeller notes that the machine “is always something that has been constructed to serve a certain purpose, which is not primarily [its] own purpose, but the constructor’s” (Hauskeller 16). Even the most highly advanced robot is an object with a programmed or otherwise intended directive, performing based on its internal code. Science fiction imagines the cyborg figure as devoted to fulfilling its man-given purpose while appearing as natural as possible, leading to a host of circumstances in which robot technology replaces human ability in strength, precision, or durability. Robots are also increasingly being considered for their sexual purposes, beyond the fantastical situations of space exploration and alien encounters in science fiction.

Hauskeller suggests that sexual cyborgs, or sexbots, are ideally “always available to serve all our sexual needs…better and more reliably than any human lover could” (Hauskeller 18). Nonliving, they are touted as hygienic alternatives to human partners, free of innately human problems such as sexually transmitted infections, physical deficiencies, and even psychological responses regarding intimacy and issues of consent. In this way, the sexbot becomes a highly functioning object that consistently and tirelessly projects the “appearance of consciousness,” which is the closest human inventors can come to creating an artificial mind (Hauskeller 16). This reproduction of human interaction soothes our anxieties about interacting with the machine in a manner that usually connotes expressions of love, partnership, or desire.

The carnal synthesis of man and machine creates an experience that is both cybernetic and organic, but decidedly one-sided despite the machine’s artificial insistence on arousal or interest in human pleasure. Sex fulfills no biological requirements for the cyborg, only mechanical obligations to satisfy the human partner and achieve its programmed ends. It is possible to consider the act of copulation with a robot a form of parasitism, in which the human achieves sexual pleasure by feeding off of the mechanical agent. In contrast, feminist Jincey Lumpkin believes that humans should consider “that robots should have a choice too and not be treated as mere things… because constructing them without the choice to say no would cause duress,” suggesting that the artificial mind should have the same decision-making capacity as a human rather than simply responding to a constructed directive (Hauskeller 16). This element of human-cyborg relations prompts an interesting examination of human desire and the treatment of objects versus conscious beings, because if these robots are built to imitate humans, then moral reasoning processes, as well as the exercise of choice and preference, could be built into the artificially intelligent mind.

Considering this, science fiction literature opens the discussion into ideas of consensual sex with robots, treating them as if their humanity will become indistinguishable from our own. The robots will eventually deceive humans into believing in their own consciousness and life, but in creating these robotic partners, people relinquish their own understanding of the dichotomy between human consciousness and robotic simulation. When this becomes the case, the robots fall into the same constraints of gender performance and sexual roles that exist in current society. Sexual female robots, such as the following examples, represent the typically desired female body while projecting a woman’s consciousness, fulfilling male romantic desire in a heteronormative reconstruction of idealized human intimacy. It does not matter whether the robot can feel, as at the core of these instances lies a test of the human capacity, and even perhaps the human fear, of coupling with the cyborg.

In Do Androids Dream of Electric Sheep? (1968), it takes a confrontation with a particularly amoral human male for Rick Deckard to qualify his romantic and sexual urges towards the Nexus-6 model Rachael Rosen: “Love toward a woman or an android imitation, it’s sex. Wake up and face yourself, Deckard. You wanted to go to bed with a female type of android- nothing more, nothing less” (Dick 143). To face himself, Deckard must confront his personal human wants versus those necessitated by his job as a bounty hunter. Throughout the novel, humans distinguish themselves from their android opponents through the innate emotional concept of empathy, which the robots supposedly lack. Phil Resch, a ruthless bounty hunter, even warns Deckard, “Don’t kill her- or be present when she’s killed- and then feel physically attracted. Do it the other way” (Dick 143). Deckard must act on his urge before the robot is retired, or else risk failing to fulfill both his hunting assignment and his human desire for sex. Falling in love with robots, Hauskeller claims, “proves we are easily duped,” and, “we will find it very difficult not to attribute consciousness,” to an advanced enough robot based on its design and presentation of human behaviors (Hauskeller 20). Deckard finds himself in the position of attributing consciousness to Rachael Rosen, “because she- it- was physically attractive,” and with that he projects his own sexual desires onto her, his judgment sufficiently clouded by the advanced design of the Rosen Corporation’s artificial humans (Dick 143).

When he and Rachael have an intimate encounter in a hotel, Rick Deckard continues to “wonder what it’s like to kiss an android,” and then acts on his curiosity: “Leaning forward an inch, he kissed her dry lips. No reaction followed; Rachael remained impassive. As if unaffected. And yet he sensed otherwise. Or perhaps it was wishful thinking” (Dick 189). Rachael Rosen does not project the behavioral output of arousal or stimulation, and yet Deckard still considers her sexually appealing, though he comments on her figure as being “neutral, nonsexual” (Dick 187). Her unaffected nature almost mirrors that of a disinterested woman playing hard to get, whom some men consider more attractive and desirable than a woman who is too wholly interested. Mechanical indifference translates to human curiosity, and Deckard’s wishful thinking about Rachael’s reaction contributes to his sexual satisfaction. Hauskeller suggests that in the future, “the pleasures of the body may eventually be completely disconnected from the actual body as its (necessary) source” as anatomical robots that can recreate those sexual experiences become increasingly virtual (6). Rachael’s lack of physical output incites Deckard’s arousal even without the integration of her body into a sexual encounter with the bounty hunter.

Hauskeller cites a historical precedent for male erotic desire towards nonliving female bodies when Ovid’s Pygmalion character finds himself attracted to his own statue because “she is supposedly a living woman, but without the flaws… she is perfect and pure, and perfectly usable” (28). The word “usable” suggests the servitude of the robot body, relegating the artificial woman to the status of an object meant for consumption by the male owner. This relates to the robot’s origins as a slave figure, and in the context of Dick’s novel, recalls the status of the escaped Nexus-6 androids as servants for the emigrants of Mars. Rachael suggests “it’s an illusion that I- I personally- really exist; I’m just a representative of a type” and uses diction of purity to describe the “clean, noble, virgin” bed she plans to seduce Deckard on (Dick 193). Because she is synthetic, Rachael does not adhere to the socially constructed idea of biological virginity but rather to a purity of self, because flesh and its pleasures represent inherent flaws of humanity. Interestingly, she has seduced other bounty hunters before Deckard and so does not represent the sexual naivety typically associated with a virginal state. Rachael assures Deckard he is “not going to bed with a woman,” which is “convincing if you don’t think too much about it. But if you think too much, if you reflect on what you’re doing- you can’t go on,” much as Hauskeller suggests humans allow themselves to be fooled by the robots’ programming into believing their consciousness and consent (Dick 194). Contrary to the ideal imagination of the sexbot figure, Rachael behaves in a sexual manner mostly to her own benefit, although the experience also simultaneously satisfies Deckard’s innate desires.

When deeply confused by his own sexual urges towards Ava in Alex Garland’s Ex Machina (2015), Caleb confronts the man who programmed the cyborg and enquires about her capacity to act autonomously on her desires. Nathan insists that Ava behaves freely due to the advanced coding in her mind. He suggests he “programmed her to be heterosexual just like” anyone else, putting the exact nature of the robot’s sexuality in line with the young man’s own innate desires (Garland). Caleb rejects the idea that he has been programmed to be heterosexual by any external factors, believing his sexuality to be an innate part of his humanity. The film plays with the debate of nature versus nurture, suggesting that humans are themselves programmed by experience, with information coded by some outside source, in order to call into question the ultimate end of our sexual desires. Either sex is something a human is designed to do, the way an android has a purpose, or it occurs naturally as an instinctive human behavior. “Consciousness is not something inferred from behavior; it is behavior,” according to Hauskeller, and Caleb’s anxiety over Ava’s programming calls his own human exceptionalism into question (23). If humans, and more specifically human sexuality, can be programmed, then Caleb has the potential to be manipulated based on his own purpose, given by the force behind his desires. This raises the anxiety surrounding Caleb’s perception of himself and his own autonomy.

Visually, Ava has been designed to appear like an attractive human woman, although some of her circuitry is exposed in order to reinforce that she is, indeed, a robot. Nathan even confirms that he programmed her to receive sensory feedback, “so if you want to screw her, mechanically speaking, you can” (Garland). He uses the word “mechanically,” which here has a double meaning, relating both to the mechanics of sexual intercourse and to the act of copulating with an inorganic, mechanical being. Ava’s hands and face resemble human flesh, while the rest of her is more explicitly transparent mesh and wires. Existing between the mechanical and the organic, however, are her breasts and hips, which are constructed of grey material that does not allow the viewer to see through her structure. Rather, these areas function like a bra and underwear, suggesting that perhaps there is some bit of womanhood behind her exterior shell, including the “cavity between her legs with a concentration of sensors” Nathan has included in her design (Garland). Ava’s sexual potential becomes a key part of the Turing test exercise because of this emphasis: Caleb fears her flirting will cloud his judgment because “we will find it very difficult not to attribute consciousness” to intelligent, sexual robots with whom humans may eventually engage (Hauskeller 20). In the case of Ava, Nathan has devised sensors that will effectively provide positive feedback to her programmed mind, allowing her to react to a phallic stimulus with pleasure and encouraging her proper participation in transhuman intercourse. While she and Caleb never reach this point in the film, this suggests that her creator has tested this programmed feature to ensure the appropriate execution of her sexual simulations.

Caleb eventually realizes that the inventor has designed “her face based on my pornography profile,” with data scraped from his search engine inputs (Garland). The young man’s digital footprint has provided all the code that Nathan needed to construct Ava, on the theory that “a real human lover can be replaced by a robot without loss if and only if other people can already never be more than a means for us” (Hauskeller 14). By “a means,” Hauskeller refers to a means to sexual arousal; while Ava is not explicitly presented as a sexual partner for Caleb, he inevitably finds himself attracted to her because she occupies the same space as a pornographic performer whom he observes as a means to pleasure himself. In the film, Ava and Caleb most often interact from opposite sides of a glass enclosure, adding an element of removal to Caleb’s viewing of the Turing test subject. While in a true Turing test the examiner would be unable to see the test subject, the added visual dimension allows Nathan’s design to play on the man’s personal tastes in women, which the inventor describes as “a consequence of accumulated external stimulus, that you probably didn’t even register as they registered with you” (Garland). These stimuli provide the “programming” of Caleb’s heterosexuality that the inventor alludes to, and result in his potential to be manipulated by these desires. The glass walls of the enclosure are arranged to recreate the experience of viewing pornography, where Caleb looks into an enclosed “screen” that displays the object of his desire. The layers of separation in the Turing test environment further the distance of the isolated tester gazing in on his fetishized subject. Ava’s features optimize those stimuli which, whether consciously registered or not, have contributed to Caleb’s self-pleasuring interactions within the digital sphere. As he views Ava from a removed position, she subtly fulfills those innate voyeuristic urges that led the man to watch porn in the first place. While he may not believe that his sexuality has been programmed into him, Ex Machina suggests that his data may in turn be used to program the exact object of his desire, allowing sexuality to be exploited by the resultant robot.

In the humorous and frightening case of Ira Levin’s satire The Stepford Wives (1972), the robotic replicas that replace the working women of the sleepy Connecticut suburb are augmented physically to fulfill the needs of their previously unsatisfied husbands, in accordance with Hauskeller’s definition of the ideal sexbot. Joanna Eberhart takes constant notice of the buxom figures of her neighbors, like Carol with her “profile of too-big bosom…her big purpled breasts,” which “bobbed with her scrubbing” (Levin 9). After her closest ally Bobbie Markowe has been replaced with a lobotomized, robotic double, Joanna finds her “wearing some kind of padded high-uplift bra under her green sweater, and a hip-whittling girdle under the brown pleated skirt,” as if being a proper Stepford wife necessitates having enhanced breasts (Levin 81-2). Built as reflections of their husbands’ desires, these wives have their intellectual substance reduced, replaced with artificial augmentation of the phenotypic features that make women alluring to men. While Hauskeller states that the perfect sexbots will “make it far easier to forget that they are just machines who do not really think or feel anything,” the men of Stepford find their wives’ lack of thought even more attractive, as it makes them less likely to object to their chores and sexual responsibilities in the marriage (Hauskeller 13).

Joanna asks “the going price for a stay-in-the-kitchen wife with big boobs and no demands,” curious to know the true cost of turning the women into domestic slaves (Levin 105). With another reference to the women’s breasts, the sexual allure of the robot women is highlighted again as one of the few defining features of these automatons. Along with the physically attractive figure, the Stepford sexbots have “no demands,” as mindless, happy slaves to their husbands’ whims. Hauskeller suggests that one psychological reason that sexual robots appeal to human sensibilities is that they allow “us to only ever confront ourselves without ever having to confront ourselves,” due to the nature of displacing our own identity into the robotic other (Hauskeller 77). Cyborg sex partners allow us to avoid rejection by taking out the conscious person on the other side of the act; the person who has “demands” and has the ability to refuse the sexual encounter. One evening, Walter Eberhart returns home late following a meeting at the Men’s Association and wakes Joanna by pleasuring himself in bed:

“The bed was shaking…each shake was accompanied by a faint spring-squeak, again and again and again. It was Walter who was shaking… Had he been- masturbating?

He lay still. ‘I didn’t want to wake you,’ he said. ‘It’s after two.’

‘You could have,’ she said. ‘Woke me. I wouldn’t have minded.’

He didn’t say anything” (Levin 15).

By now, Joanna’s husband Walter has been exposed to the Men’s Association and its operation of replacing the wives of Stepford with docile robot doubles. This initially one-sided sexual encounter illustrates an instance of that fear of sexual rejection Hauskeller believes that robots allow us to avoid. Walter must take to stimulating himself while his wife is asleep, believing that she will not consent to joining him in intercourse if he interrupts her rest. Joanna, once woken, eventually decides to have sex with him in what, “turned out to be one of their best times ever- for her, at least” (Levin 17). However, while she was asleep and therefore unable to consent, Walter resorted to instead taking care of his own urges through masturbation, the ultimate sexual self-encounter. The Stepford wives manufactured by the Men’s Association overcome this problem as they are perpetually preoccupied with pleasing their husbands in both the domestic and sexual realms.

In the Marvel Comics serial The Vision (2015), the synthetic android Vision constructs a wife for himself when he can no longer associate with the human woman he loves, a hero named Wanda. From his wife Virginia’s mental structure combined with his own, he generates two children, with whom they form a nuclear family unit and attempt to pass as normal in their suburban neighborhood. This series presents the idea of a robot creating his own robotic partner, done with a “certain purpose, which is…the constructor’s” intent to have an artificial but equal sexual companion to replace what he lost with Wanda, a human woman (Hauskeller 16). In a full-page flashback sequence depicting Wanda and Vision in bed together, visual signs of passion surround the couple. With Bellaire’s coloring tinting the page with reds and pinks, a sense of organic heat and sexual arousal saturates the page. Behind them is the caption “I Too Shall Be Saved By Love,” which is the title of the single issue as well as a reference to a quote spoken by the android in an early issue of the Avengers series. The syntax reflects Vision’s own complex speech mannerisms and suggests that love, in whatever form he achieves it, will make him redeemable, in this case more organic than synthetic. Artist Michael Walsh includes the sight of tousled garments to demonstrate the haste and visceral nature of Vision and Wanda’s sexual encounter. The curvature of the headboard and the pillows behind the resting couple further attest to a certain natural softness involved in tender intimacy (King 3).

The issue’s conclusion loops back to the opening scene recast on the last page with the two synthetic androids, Vision and his wife Virginia, sitting upright in a shared bed, cold and aloof. Here, Virginia represents “a ‘soulless’ lover” which is one type of what Hauskeller identifies as “unsatisfying women,” and Walsh’s distinct alterations to the environment of the opening scene as well as Bellaire’s appropriate coloration of the page reveal the loss of life and soul in Vision’s romantic relations following Wanda’s departure (Hauskeller 23). The couple’s headboard is now square and the bed lacks any pillows, revealing that the eroticism and intimacy have been removed and replaced with distance and mechanism. Bellaire’s color palette switches here to blues and greys, recreating the sense of the mechanical bodies not producing any heat. This physical coldness can also indicate an emotional coldness, and again there can be sexual desire found in being somewhat disinterested. Here, though, the coldness is mutual, with the androids’ red tones dimmed by the darkness of the bedroom. Folded clothes and shadows cast over their faces complete this uncomfortable scene of mechanical marital disconnect, which emphasizes the loss of soul in this relationship (King 22).  

While her name is meant to be alliterative, as all of Vision’s family have names beginning with ‘V’, his wife’s name Virginia suggests a virginal quality and the removal of the messy human intimacy he once experienced with Wanda in exchange for a new, pure form. In this way, she does not exactly function like a sex robot, but is still constructed to compensate for a lost human intimacy in a sterile manner. Hauskeller suggests that sexbots “behave in all respects exactly like we would expect someone to behave who really loved us…they have been designed that way” (21). If these sexbots behave exactly as humans expect, then perhaps human reactions to these robots are also predictable behavior, acting to elicit a certain response from the machine. While Vision is not human, he represents the more autonomous side of their mechanical marriage. Virginia exhibits the appearance of consciousness and romantic partnership through her programming designed to imitate Wanda, and Vision behaves formulaically, hoping to revive that lost experience through vocal performative testing. As Vision trots out the same “talking toaster” joke to his wife that he had told Wanda at the start of the issue, it becomes clear that he created Virginia to exactly replace what he lost, trying to reconstruct everything from memories. Her “certain purpose” in life should position her as an equal to her husband because they are both androids, but her artificiality and programmed directive fill a void in Vision’s life and service his egoism and desire for “normalcy” (Hauskeller 16).

Vision represents the human and creator in the relationship, perhaps due to the fact that although he is an android, he has had more life experience than any of his artificial family. By retelling the joke, he is testing his wife, hoping she will react with the same humor as his human lover had once done. While it remains unseen exactly how Virginia reacts to the punchline, there is a sense of unease as the joke implies an examination of the android couple’s own sentience and capacity to express emotion. Vision has not been “saved by love” as he had hoped initially, but rather becomes trapped by it, deluding himself with a wife “designed to behave as if” she loves him while he is simultaneously obligated to behave as if he loves her in the same way (Hauskeller 21). Pairing oneself with a robot in each of these cases serves to replace a lost intimacy and eliminate the element of consent from the coupling of a human male and a robotic female. This occurs at the expense of humanity, effectively sterilizing the relationship and isolating it, trading longevity for constant instant gratification from the idealized sexbot figure, suggesting that humans too may be concerned with satisfying their internal, preprogrammed compulsions.


Majordomo Arigato, Mrs. Roboto: Programming the Domestic

“I saw it all now. That beautiful, lady-like girl that had ushered me into the room, whom I had taken for his wife, was an automaton! That doll-like expression was due to the fact that she was a doll.”

The Lady Automaton, E. E. Kellett (1901)


Even before 1901, when Kellett wrote the short story The Lady Automaton, the popular imagination was fascinated with the idea of having a doll, statue, or other artificial being for a companion, slave, or even a spouse. These beings appear human but lack fundamental aspects of being biologically alive, a state of conflict that can be unsettling. In his theory of The Uncanny (1919), Freud cites the German word unheimlich, the opposite of heimlich, meaning “familiar, belonging to the home,” as the origin of his explanation of this upsetting psychological sensation (Freud 2). Unheimlich, and from that the uncanny, represents “that class of the terrifying which leads back to something long known to us,” a frightening liminal experience that simultaneously appears to be familiar but lacks the elements of truth that humans can understand (Freud 1). The concept of uncanniness describes alter-humans and states of existence, such as a dead corpse that once lived or a humanoid robot exposing circuits where flesh and blood should be. Masahiro Mori described “this type of unsettling experience as ‘the uncanny valley’- that psychic place when someone discovers that what looks animate is not really alive” (Wosk 7). For example, Mori details the experience of touching “a realistic-looking prosthetic hand” that “when touched, lacks the temperature of the human body and creates a sense of strangeness, unfamiliarity, or alienation, as though touching the hand of a corpse” (Wosk 155). When asked if she could experience empathy for other robots, Rachael Rosen feels “something like that. Identification; there goes I… If I die…maybe I’ll be born again when the Rosen Association stamps out its next unit of my subtype” (Dick 189). She recognizes other versions of her own robotic model type to be familiar, but frightening in their difference from her: she observes a self that is not herself. Based on appearances, humans have an expectation of how interaction with these uncanny bodies should occur, but confrontation with a discordant or unusual sensation or experience causes terror of the irregular but somehow familiar.

For as ‘un-homelike’ as the female robot is, science fiction literature and media posit her as the perfect domestic caretaker and homemaker. To reduce the fear of the uncanny robot woman, she must be made completely known and familiar. In My Fair Ladies, Julie Wosk catalogues varieties of simulated women and notes that “robots…will have future caretaking roles, but it is largely men who have created ultrarealistic female interactive robots.” These cyborgs embody “the perfect woman: a fusion of happy domesticity and sexy playmate,” and the idea of programming prescribed gender performance somehow eases the terror of the alter-human in the home (Wosk 3). Her appearance is familiar, and her requisite motherly warmth and spousal affection is expected. However, the robot lacks temperature, consciousness, and life, becoming a cold specter of perceptions of the feminine and domestic within the home. This lack of warmth, both emotional and physical, betrays the expected comfort of a mother’s love or a lover’s touch. Machine metals are cold to the touch, but artificial kindness is cold to the soul, playing into the uncanny home life of these robots. These automata reinforce a traditional nuclear family structure, replacing wives and mothers with a doting droid figure preprogrammed with a willingness to keep house, effectively preventing her from moving into the outside world.

Betty Friedan’s The Feminine Mystique (1963) plays a key role in the insidiousness of the cyborg conspiracy plaguing The Stepford Wives. Often credited with sparking the movement of second-wave feminism, Friedan’s work highlights what she called the “problem that has no name,” a general sense of unhappiness and lack of fulfillment in American housewives (Friedan 15). Where first-wave feminism worked towards advances in women’s suffrage, second-wave feminism focused on women’s reproductive rights as well as sexuality, domestic abuse, and women in the workplace. The Stepford Wives reacts to this shift in attention, using The Feminine Mystique not as a catalyst for positive change within the novel but as the beginning of the husbands’ reactionary scrambling to control their free-thinking wives and reject the idea of unhappy housewives. Joanna Eberhart discovers that a Stepford Women’s Association used to exist before the women supposedly lost interest. In this organization, the women discussed Friedan’s text and engaged with civic issues before “some of the women moved away… and the rest of us just lost interest in it,” according to former president Kit Sundersen (Levin 42). The organization was over fifty members strong, but its gatherings led to fear on the part of the husbands, who worried that this intellectual discussion would move the women away from their roles as housewives and caretakers, making them instead what Friedan describes as “high-dominance women… free to choose rather than be bound by convention” (Friedan 320). The disbanding of the Women’s Association coincides with the introduction of Stepford’s own animatronic replacements for these women, which also led to the decline of the Stepford League of Women Voters. While the housewives are revealed to be automata at the conclusion of the novel, the Stepford wives stand for women brainwashed by the pressures of maintaining appearances and conforming in 1970s suburbia. These robots represent politically minded women removed from the public sphere and replaced with artificially cheerful housewives who perform slavish work for the men of Stepford, becoming bound by the same convention that Friedan encourages women to overcome.

Having “never found a woman who fitted that ‘happy housewife’ image,” Friedan explains that busyness and the sheer amount of time that domestic housework takes uphold the damaging image that the housewives feel a great sense of purpose in their roles (Friedan 237). Maintaining the illusion of dedication stems from a sense of insecurity: should they stop cleaning, these housewives will have no other power in the home. Throughout her brief time in Stepford, Joanna Eberhart notices the “steady mechanical movements” of the other women, who polish trophies and wax the floors of their homes tirelessly (Levin 64). Before suspecting them of being robots, she considers her neighbors to be “compulsive [hausfraus]” and “asking-to-be-exploited” patsies, lacking any ability to stand up against their husbands (Levin 9). “The old mystique of feminine inferiority” gave way to making women’s roles “in the home equal to man’s role in society” in order to keep women complacent in cooking and cleaning for the household (Friedan 239). Joanna fears that by becoming too engaged in household maintenance over her photography profession, she will relinquish her autonomy to a husband who will expect her to perform all of the domestic chores as the other wives in town do.

In order to ensure that the women of Stepford adhere to their responsibilities, the Men’s Association takes the concept of this mystique one step further, actually manufacturing robot spouses who uphold the virtues of housewifery in the very ways that Friedan asserts oppress women. After “the author of The Feminine Mystique addressed members of the Stepford Women’s Club,” Stepford Library records indicate the quick decline of the association that encouraged the women to seek fulfillment and community beyond the home (Levin 37). Joanna tries to speak with the former club president about the experience, but finds the other woman to be “like an actress in a commercial…pleased with detergents and floor wax, with cleansers and shampoos, and deodorants…playing suburban housewives unconvincingly, too nicey-nice to be real” (Levin 42-3). Joanna’s assessment of the woman’s artifice recalls Friedan’s notion of the “sexual sell,” in which “the manufacturer of a certain cleaning device…let the housewife have the illusion that she has become a professional, an expert in determining which cleaning tools to use for specific jobs,” reinforcing the subjugation of women within the domestic sphere (Friedan 215). The Stepford wives make banal choices of which detergent to wash clothes with, or which wax to clean the floor with, and while this gives a semblance of control to the woman, her control in the domestic sphere keeps her subservient to the others in the household.

This illusory power of choice upholds the mystique that keeps women unhappy in their homes as the role of housewife becomes an all-encompassing “career” that absorbs time and energy. Joanna suggests the buxom, cheerful women are “playing…unconvincingly” and too “nicey-nice to be real” because the happiness exerted by the Stepford housewives conflicts with her notions of femininity and independence. It also challenges Friedan’s theories, which inspired second-wave feminists to seek new opportunities for women beyond the cloyingly domestic. This cognitive dissonance, as well as Wosk’s assertion that man’s perfect robot fuses “happy domesticity” with a “sexy playmate,” creates the sense of uncanniness that leads Joanna to further investigate the truth behind the relentlessly robotic housewives. The Stepford simulations silence opposition to the Men’s Association by removing woman’s consciousness from the equation entirely, relying on the gender performance of machines and antiquated ideals of the woman’s role in the home to keep the illusion alive.

The eponymous Helen of Lester Del Rey’s short story Helen O’Loy (1938) functions so well as a constructed woman that she even fools her husband into forgetting that he built her out of metal as an experiment with a friend (Foley 371). Fresh from her packaging, Helen “was designed to express emotions…ready to simulate every human action,” with more advanced technology than the previous robot built by Dave and Phil (Del Rey 52). Her name is an allusion to the famed Helen of Troy, whose beauty started a war, although the men call their own creation O’Loy, a shortened form of “alloy,” to reinforce that she “was… a dream in spun plastics and metals” (Del Rey 49). While Helen embodies a new technology as a robot, she also represents classical beauty, femininity, and sensitivity in accordance with conventions of the perfect housewife. The men consider her the ideal female form, not unlike the girl-next-door type of this era, and this success thrives on gender performance as well as her physical allure and her comparison to classic beauty in name and structure.

As a robot, Helen can only simulate human action and not truly live it, unlike Phil, the endocrinologist, who has “performed plenty of delicate operations on living tissues,” before working with Dave to construct Helen (Del Rey 52). In Anatomy of a Robot (2014), Despina Kakoudaki notes:

“Because of the unusual romantic tone of the story, ‘Helen O’Loy’ also presents an early version of a performative approach to humanity, in Helen’s actions, her recognition of the encoded nature of femininity, and her adherence to normative gender roles ensure her ability to pass as a woman” (Kakoudaki 187).

Helen’s performance as a human successfully allows her husband to forget that he built her from disparate robotic parts. At first, Dave grows increasingly upset with Helen’s infatuation with him, and Phil even informs her that “a man wants flesh and blood, not rubber and metal,” but Helen insists that in her mind, “I am a woman. And you know how perfectly I’m made to imitate a real woman…in all ways” (Del Rey 61). She primarily concerns herself with notions of romantic love, accommodating Dave in all the ways that she knows a woman should behave towards a man. While Phil insists that a real man does not want “rubber and metal,” they call her a “dream in… metals,” which conflicts with their desire for her perfect femininity. Helen finds herself imitating living behaviors, although her sense of identity is one of being a woman, causing brief cognitive dissonance between her understanding of self and of her motivations. In accordance with Betty Friedan’s assessments of women’s psychological states, in which the high-dominance woman breaks free of gender conventions, “the low-dominance woman was not free to be herself, she was other-directed. The more her self-depreciation, self-distrust, the more likely she was to…wish she were more like someone else” (Friedan 320). Helen O’Loy yearns to be a woman who can please Dave’s sensibilities, and so reads a series of romantic books to help her better understand how to imitate that person. Friedan’s language of “other-directed” recalls the robotic, programmed nature of Helen, as well as the other domestic housewife robots, whose directive comes from the creator or husband figure.

Eventually, Dave’s resistance to Helen’s advances wears down and the two become happily married after the realization that “no man acts the way Dave had been acting because he hates a girl; only because he thinks he does- and thinks wrong” (Del Rey 64). She makes a perfect bride and homemaker, and the couple lives happily, even without having children. As he ages, Dave begins to forget that he built his wife many years prior. Because her synthetic nature prevents her from aging, Helen enlists the help of Phil to physically alter her face as Dave grows older so that she may maintain the illusion of her womanhood for her husband. Like a woman applying her makeup, Helen “put lines in her face and grayed her hair without letting Dave know she wasn’t growing old with him” (Del Rey 64). In this sense, her uncanny nature lends itself to a timelessness not unlike that of the classical beauty for whom she was named. Being a machine, Helen exists outside of human aging and must manually alter herself to appear as something she is not, eventually fooling even the men who created her from repurposed parts.

In The Vision, Vision and Virginia execute the functions of normal marital love, but recognizing her artifice leads Vision to anxieties over his wife’s uncanny nature and the deceptive appearances of his own family life. Waking in the middle of the night, Vision finds “himself in a state of dread, his thoughts caught on a repeating image of the day he first saw his wife open her eyes” (King 1.17). In this memory her eyes seem “like a camera lens adjusting to the light” as her “pupils grow and recede,” and the mechanism of her awakening frightens Vision (King 1.17). Onto her he places anxieties and hopes for a normal family life, but among their human neighbors in the suburb, the illusion of a happy household begins to fade. She is the uncanny body, corpse-like although she functions as if alive. Virginia provides Vision with a perfect partner, but in doing so simultaneously removes his hope of regaining the love he lost. Interestingly enough, Vision is himself an android built of the same materials as his wife, yet she inspires feelings of uncanny dread in him. In his theory of the uncanny, Freud asserts that “a morbid anxiety connected with the eyes” sometimes represents “a substitute for the dread of castration” (Freud 7). While Vision’s fright comes from the memory of his wife’s eyes, the day she awoke from slumber to mechanical life represents the loss of Vision’s human love. He programs Virginia’s mind to imitate the thought patterns of Wanda, the human woman who was once romantically engaged with the android man. Virginia’s body, however, reminds her husband that she is artifice. Still, Vision repeats to himself the demand that “I must love her…this is my wife. I must love her” in order to maintain the image of mechanical marital bliss (King 1.17). He “must” love her because he lacks a reasonable alternative: he is her creator and her husband, the reason for her life. She sustains that notion of “happy domesticity” that men seek when they create cyborg counterparts, even as Vision increasingly fails to perpetuate the idea of normalcy he hoped the family would bring him.

In contrast to the feminine mystique’s demand that the housewife uphold the cleanliness and tidiness of the home, Virginia suggests to a neighbor that “to get to clean you are required to introduce substantive turmoil” to the order of the house (King 6.10). In her case, the substantive turmoil occurs when she destroys the furniture while malfunctioning, asserting that “everything is normal” after the family has come under great scrutiny from their neighbors (King 5.12). Friedan assesses that the only way “the young housewife was supposed to express herself, and not feel guilty about it, was in buying products for the home-and-family” (Friedan 222). As the series progresses, Virginia becomes increasingly volatile while the family is scrutinized for the strange events that transpire after they join the suburban neighborhood, expressing herself not through purchases but through rage directed at her own residence. She breaks furniture and destroys the walls until the home becomes nearly uninhabitable. She struggles to maintain appearances based on human standards of normalcy, and the more the family tries “to get to clean,” the more discomfort they cause, not only in their own residence but throughout their neighborhood.

Her domesticity is not happy, nor is it built on choosing the best products with which to maintain her home. Rather, Virginia’s household disarray reflects the biblical fear that “women would destroy the home and make slaves of men,” which was used as a justification for making women subservient in the family (Friedan 87). Virginia’s rage destroys the home not because she is a woman, but because of her uncanny nature in an unnatural, domestic setting. Her uncanniness is what keeps her husband enslaved to the thought that he “must love her” because of the purpose for which he built her, or else lose the illusion of peaceful and happy domesticity. In their struggle to uphold the standards of a conventional, suburban existence, their physical home transforms into an inhospitable space that becomes as unhomelike as they are themselves: an android and his constructed facsimile of a family performing humanity. While these robots are asked to keep house, they slowly unmake the traditional structure of patriarchy and domesticity within the home.

 

Art-Official Intelligence: Testing Creative Capacity

“She’s a triumph of your art and of her dressmaker’s; but if you suppose for a moment that she doesn’t give herself away in every sentence she utters, you must be perfectly cracked about her.”

Pygmalion, George Bernard Shaw (1913)

 

The importance of vocal performance in Shaw’s Pygmalion, named for Ovid’s fictional sculptor, reveals an important aspect of science fiction’s obsession with humanoid robots. There is a distinction between the outward, beautiful embodiment of a woman and the thoughts she voices, which expose her intelligence and true nature. As Megan Foley assesses in “Prove You’re Human” (2014), her article about interaction with and the fetishization of material embodiment, “the voice, and the fantasy of bodily presence it sustains, have become a function of informational patterns themselves” (Foley 369). The Turing test, a text-only exchange in which a judge must decide which hidden interlocutor is the machine, serves as a real-world paradigm for much of the imagination of science fiction, whose robots are advanced far beyond the capabilities of the technology Turing worked with. In the age of the posthuman, intelligence “becomes a property of the formal manipulation of symbols,” leading to new fictional representations of how humans will test a robot’s humanity (Foley 369). When the machine mind can outsmart a human in code manipulation, humanity must turn to other sorts of performative examinations that are supposedly more difficult to replicate.
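Turing’s imitation game is, at bottom, a simple protocol, and a minimal sketch in its common machine-versus-human form may help ground the fictional variants discussed below. This is an illustrative toy only, assuming a console judge and a deliberately evasive placeholder in place of any real conversational program:

import random

# Toy version of the imitation game: the judge converses over text alone
# and must guess which hidden respondent is the machine. Both respondents
# here are stand-ins invented for illustration.
def human_reply(prompt: str) -> str:
    # A real person types the answer at the console.
    return input(f"(human player) {prompt} > ")

def machine_reply(prompt: str) -> str:
    # Placeholder for a conversational program; deliberately evasive.
    return "I would rather not say."

def imitation_game(rounds: int = 3) -> bool:
    """Run a toy imitation game; return True if the judge unmasks the machine."""
    players = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # hide which label belongs to the machine
        players = {"A": machine_reply, "B": human_reply}
    for i in range(rounds):
        question = input(f"Round {i + 1}, ask both players: ")
        for label, respond in players.items():
            print(f"{label}: {respond(question)}")
    guess = input("Which player is the machine (A/B)? ").strip().upper()
    return players.get(guess) is machine_reply

if __name__ == "__main__":
    print("Unmasked." if imitation_game() else "The machine passed.")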

In science fiction narratives that move beyond the parameters established by Turing, it is the creative capacity of the android figure that becomes the new determining factor in assessing that being’s humanity. This act demonstrates “a pervasive desire to recover the lost guarantee of a corporeally present human subject on the other side of the computer screen” and, in the case of science fiction, to uncover the humanity inside the mind of the present android figure (Foley 372). Art and empathy, as with Dick’s Voigt-Kampff test or the visually oriented Turing variant in Ex Machina, present opportunities for disembodied creativity to become corporeal in order to prove whether the robot has a soul, a concept elusive even to many humans. The result is the idolization of embodiment and performative labor, as humans simultaneously hope to interact with something indelibly alive and embodied and fear the robot’s ability to fully recreate something thought to be intrinsically human. Speech patterns and other human thought processes can be replicated by algorithms written into computer programs, but talent and creative impulse are thought to be uniquely human. When a CAPTCHA program asks a user to indicate which segments of an image contain flowers or street signs, the formulaic robot mind appropriates the labor of the human by incorporating the visual information into its database and ascribing the appropriate meaning to the image. Robots can be instructed to reproduce works of music and art already in circulation, but whether they can produce original artistic content seems to be the standard by which human testers measure true consciousness: this is to say nothing of whether humans themselves can truly create original content. Creativity serves as an indicator of humanity in science fiction, whose androids are already advanced enough to evade detection by more basic Turing tests, and it reinforces the valuation of those robots whose qualitative labor can remind us of what it means to be innately human.
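The labor exchange described above can be made concrete with a short sketch. The service below is hypothetical; every name and the data layout are invented for illustration and do not reflect how reCAPTCHA or any real system is implemented. The point is only that each human verification answer can simultaneously be harvested as a training label for the machine:

from dataclasses import dataclass, field

@dataclass
class LabelStore:
    """Accumulates human-provided answers as future training labels."""
    examples: list = field(default_factory=list)

    def record(self, image_id: str, tile: int, has_object: bool) -> None:
        # Each human click is stored as a supervised (input, label) pair.
        self.examples.append((image_id, tile, has_object))

def verify_user(clicked: set, known: set, unknown: set,
                image_id: str, store: LabelStore) -> bool:
    """Pass the user if they select every tile the system already knows
    contains the object; harvest their verdicts on the unknown tiles."""
    passed = known.issubset(clicked)
    for tile in unknown:
        store.record(image_id, tile, tile in clicked)
    return passed

store = LabelStore()
# The user clicked tiles 2, 5, and 7. The system knows tiles 2 and 5
# contain street signs and has no label yet for tile 7, so the user's
# click on 7 becomes a new training example.
print(verify_user({2, 5, 7}, {2, 5}, {7}, "img_0042", store))  # True
print(store.examples)  # [('img_0042', 7, True)]

In this toy, the human proves their humanity and, in the same gesture, labels data the machine could not yet label itself, which is the appropriation of labor the paragraph above describes.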

While undergoing Turing test applications to assess whether she has true consciousness, Ava from Ex Machina produces several pieces of artwork, the first of which is a series of geometric lines in black ink that she claims mean nothing to her. Ava’s first drawing bears resemblance to the surrealist practice of automatism, her paper covered in abstract markings. This practice arose when twentieth-century artists let their unconscious minds come forth onto the canvas in drawings composed of patterns and unspecific figures. Automatic art hoped to make sense of a war-torn world and allow artists to cope with confusion and trauma. Ava’s art signifies her attempt to access her unconscious mind and produce work representing her cognitive abilities, whether or not the artwork has a defined subject. Ava makes “drawings every day. But I never know what they’re of,” which does not deter her from creation, although she seeks answers and hopes that Caleb will tell her what the art represents (Garland). Caleb, the human, asks her “to sketch something specific…like an object or a person,” rather than the formless shapes that neither can ascribe direct meaning to (Garland).

Although Caleb believes that Ava’s artwork would be more beautiful if it depicted a particular subject, Nathan’s décor brings him to the realization that constant cognizance would never result in artwork at all. Standing in front of a Jackson Pollock painting, Nathan explains that the artist “let his mind go blank, and his hand go where it wanted. Not deliberate, not random. Someplace in between. They called it automatic art” (Garland). Pollock’s process of drip painting mirrors Ava’s ink drawings, in which she produces work without a specific intent, letting her artificial mind access its more unconscious functions. By employing this process in her artwork, Ava does not merely recreate a specific work, as most robots are prone to do when imitating behaviors programmed into them. Here, the “challenge is not to act automatically. It’s to find an action that is not automatic,” accessing the unconscious self that allows the artist to produce new work free from the constraints of guided intention (Garland).

When Caleb challenges her to draw something more recognizable, he changes her automatic artwork into something purposeful, at which point she can use that intended meaning to manipulate her observer. Foley suggests that “cybernetic circuits expand their capacities by appropriating the labor of human bodies” (Foley), and Ava appropriates Caleb’s suggestions for art and presents them to him in order to win his sympathies: first with a drawing of her own enclosure, and then with a portrait of him, after which he finds himself devoted to helping Ava escape (Garland). After being shown the portrait Ava has drawn of him, which Nathan tore to pieces just prior to this meeting, Caleb becomes resolute in his attempt to outwit their observer and liberate the robot from her confines. In this way, while it seems that Ava produces labor for her tester by creating drawings with his specific subjects in mind, she as the robot actually benefits from his human labor, incorporating his requests into a drawing that wins his allegiance and causes him to work to her benefit, much like the aforementioned CAPTCHA program. Rather than recognizing the implications of an artificial intelligence creating automatic artwork, Caleb focuses on the embodiment of her drawings and what they represent about her clear and present thoughts, and seeing himself as the subject of her guided drawing proves the final distraction from understanding the processes of her consciousness.

When considering his assignment to eliminate her in Do Androids Dream of Electric Sheep?, Rick Deckard judges the android opera singer Luba Luft mainly by the cultural value of her creative output. Upon discovering the renegade android performing in Mozart’s The Magic Flute, “he found himself surprised at the quality of her voice; it rated with that of the best, even that of notables in [his] collection of historic tapes” (Dick 99). Deckard’s context for the beauty of her voice consists of these historic tapes, auditory reproductions such as cassettes or records that invoke Foley’s idea of the vocal performative act. As far as he knows, the recordings capture the voices of singers from humanity’s past before World War Terminus, but without absolute certainty, they represent “the erasure of embodiment,” reproducing only the vocal performance of whoever recorded the musical tracks (Foley 369). By holding the “historic tapes” in esteem while attributing Luba’s talent to her programmed functions, Deckard devalues the opera singer because he can observe her visual presence and knows that she is a product constructed by the Rosen Association, whereas he seems certain that his musical recordings preserve human voices of the past.

Considering that his mission involves retiring, or killing, Luba, Rick Deckard wonders what cultural value will be lost upon her death: “she was really a superb singer… how can a talent like that be a liability to our society? But it wasn’t the talent, he told himself; it was she herself” (Dick 137). As a bounty hunter, he realizes, “I’m part of the form-destroying process of entropy. The Rosen Association creates and I unmake,” positing Luba Luft as a created entity that Deckard must destroy because her very embodiment represents the danger of android bodies integrating indistinguishably with humans (Dick 98). Her talent, in this case her voice, is more valuable to the world than her body, which, being a Nexus-6 type, poses a danger to human society. The escaped Nexus-6 models were a former source of labor for the emigrants on Mars, having killed several humans to escape their slavery and hide on Earth. Deckard’s specific purpose in killing these robots unmakes the embodied androids that the Rosen Association produces, and while he may succeed in destroying the form, the corporation retains the intangible code and information with which the robots are constructed.

After being tracked to an art museum, Luba Luft contemplates an Edvard Munch painting, Puberty (1895), which symbolically encapsulates the transitory state of the android body: an embodiment of human form whose intelligence is an intangible pattern of programming. The painting shows “a drawing of a young girl, hands clasped together, seated on the edge of a bed, an expression of bewildered wonder and new, groping awe imprinted on the face,” and its title adds context to this moment of human transition from childhood to adulthood (Dick 131). The painted female body captures this state of progression while also leaving it static, unable to develop beyond the depicted moment, just like the transitory bodies of the Nexus-6 models, who exist fully formed although merely a few years old. When Luba spots the pursuing bounty hunters at the museum, “the color dimmed from her face, leaving it cadaverous as if already starting to decay. As if life had in an instant retreated to some point far inside her, leaving the body to its automatic ruin” (Dick 131-2). Deckard notes the duality of her embodied mechanical self and the life and personality of her programmed performance, for “the bracketing of material flesh from those simulated speech acts installs a mind/body binary in their place” (Foley 367). Luba’s mind is the immaterial coding that allows her to be as prodigious an opera singer as she is, while her physical form is what disguises her among human company, allowing her to work as a performer and not just an intelligent computer.

In Kelly Sue DeConnick and Valentine DeLandro’s Bitch Planet (2014), the deceased inmate Meiko Maki must be digitally recreated when her father, a prestigious engineer, comes to visit the prison facility and see his daughter. Having unwittingly let a guard kill Meiko, the prison directors choose to display a hologram for her father Makoto, claiming the cloyingly pink projection is his daughter, speaking from another part of the facility that does not allow visitors. This incident relies “on fetishization of the live voice in interaction patterns to fool human users… [becoming] a repeated, real-world Turing test” in order to keep Makoto ignorant of the abuses of the prison (Foley 370). The pink holograms, called “models,” represent the idealized or ‘model’ form of the non-compliant prisoners. The projection carries the image and voice of his daughter, but not her physical presence or rebellious spirit; she is more calculated and demure. Series letterer Clayton Cowles even indicates the artificiality of her presence by giving the facsimile Meiko’s speech bubbles an electric, jolt-shaped tail representing the hologram’s vocal performance. This suggests a hollow, almost lifeless quality to the voice of the imitation woman and complicates her attempt to recreate Meiko’s consciousness.

Makoto, upon realizing the projection is not “my Meiko,” prompts her with a test of music. He requests that she play Heinrich Wilhelm Ernst’s variations on the Celtic folk song “The Last Rose of Summer” on the violin (DeConnick 8.17-18). In a visual in which her eyes appear to buffer like a loading video, the simulation of Meiko processes the request in order to comply with Makoto’s wishes. Here, her body has “become the phantom limbs of informational networks,” where her physical, deceased form is no longer needed to access and replicate Meiko’s virtual presence, though the model Meiko lacks the memories and personality of her living self (Foley 375). She becomes an algorithm presenting itself as a woman whose only directive is to comply with the input commands of a male superior, making herself a ‘model’ of submission in this patriarchal prison system. This facsimile cannot reproduce the mnemonic nuance of the real Meiko Maki, and Makoto exploits this with his request for a violin performance. His daughter was taught to play folk songs, and the Ernst variation on “The Last Rose of Summer” is a far more musically complex piece. Foley suggests that “fetishization of bodily performance by information technologies has turned into…commodity fetishism,” which, as first described by Karl Marx, “disavows the performance of labor and substitutes the product of labor in its place” (Foley 376). The programming of Meiko’s compliant counterpart concerns itself only with fulfilling Makoto’s request for a specific violin performance and eschews the personal relationship between father and daughter. A silent, three-panel segment focusing on Makoto’s face demonstrates his realization that there is, in fact, no sentient person behind the pink apparition bearing resemblance to his daughter (DeConnick 8.19). Her bodily absence and the technically perfect performance of the difficult violin piece expose the artifice of the model hologram.

Meiko’s hologram appears smiling in the reflection of Makoto’s teardrops on the floor, and as she asks if she did well, her father asserts that she played “so well you broke my heart” (DeConnick 8.19). The image of Meiko’s smiling reflection directly mirrors a panel from the sixth issue of the series, in which the conditions of her imprisonment are explained. There, Meiko’s reflection smiles up from a puddle of blood draining from a man she has strangled with her violin string (DeConnick 6.26). She compares the anatomy of her instrument, the violin, to:

a woman’s body. It has a back, a neck, a nose, a belly… even ribs. But the parts that interest me most are the bridge and the sound post, also called the heart and soul. The bridge holds the strings, and transfers vibrations to the belly, where they pass through the soul post. The soul supports the structure. It keeps the body from collapsing under the pressure created by the tension of the wires on the bridge… the heart strings. (DeConnick 6.5)

This comparison between the body and the instrument introduces the idea that Meiko’s violin playing manipulates heart strings both in the literal sense, on her instrument, and in the figurative sense, upsetting her father and making him emotional. While the violin has a technical aspect of soul, the hologram of the dead woman lacks this same soul, and her very image plays on the heartstrings of her visiting father as her perfect talent reveals that the Meiko he knew has died. Her musical performance was flawless, but it lacked the heart and soul of Makoto’s flawed, and very loved, daughter. While the living Meiko Maki killed a man with her violin strings, the projection of her has, in a metaphorical sense, killed her father, who realizes with a broken heart that his imperfect daughter has been replaced with a flawless simulation. In differentiating between the human and the artificial, the test of creativity has its limits at perfection. These tests of the musical and visual creative capacity of robots reveal the human fetishization of the body, marked by an insistence on imperfection: human flaws reveal the highly functioning artifice of the mechanical being.

 

Automatonomy: Cyborg Feminism and the Future

“Lord, we know what we are, but know not what we may be.”

Hamlet, William Shakespeare (1603)

 

The previously examined texts largely deal with gendered robots performing the role of traditional femininity as imparted on them by their male creators. Even in the case of the satires Bitch Planet and The Stepford Wives, the robots are presented through the lens of patriarchal control, which helps to expose the constraints of gender performance as demonstrated through the sexbots, happy homemakers, and artificial artists. In the quote taken from William Shakespeare’s Hamlet, Ophelia remarks on the woman’s potential to transform into another state of being. While meant to be taken in the context of her madness, this statement lends the female figure the possibility of change, and so lends the TechnOphelia, the robotic woman, a transformative power as well. Feminist theorist Donna Haraway offers a divergent analysis of how cyborg technology can better serve and shape human, and particularly female, interactions with the machine in her essay “A Cyborg Manifesto,” from Simians, Cyborgs, and Women (1991). Haraway proposes that “the cyborg is a matter of fiction and lived experience that changes what counts as women’s experience in the late twentieth century” (Haraway 291). Representative of a post-gender world, the figure of the cyborg integrates the individual with technology, allowing access to “a kind of disassembled and reassembled, postmodern collective and personal self.” According to Haraway, “this is the self feminists must code” in order to operate in society, unconstrained by what the author calls the elusive “concept of woman” (Haraway 302). Womanhood as a singular concept does not exist, definitions of femininity varying from person to person. Because robotic technology is not inherently gendered but exists between the human and the nonhuman, Haraway examines the way in which women can utilize the “integrated circuit” of humanity merged with technology to exist apart from the constraints of society, effacing traditional gender binaries through liminality (Haraway 304).

While Turing’s imitation game focused on the performative differences between the male and female genders, the concept of the integrated circuit, coined by Rachel Grossman, allows “fresh sources of analysis and political action” in “a world so intimately restructured through social relations of science and technology” (Haraway 304). Despite the large number of these science fiction texts dealing with men’s interactions with the cyborg, there are those that offer a vision of technology existing beyond the definition of gender, utilized by women in ways other than the performative sexual or domestic tradition. When women interface directly with the machine in these texts, they are able to overcome the “exploitation into a world of production/reproduction and communication called the informatics of domination,” a series of “simultaneously material and ideological” dichotomies that constrain women to specific roles in society (Haraway 300, 302). Furthermore, “high-tech culture challenges these dualisms in intriguing ways. It is not clear who makes and who is made in the relation between human and machine” (Haraway 313). Women in the integrated circuit work in tandem with technology to better exist in an information-driven society while stepping outside constraints like the gender binary and relationships defined by patriarchy.

In William Gibson’s “Johnny Mnemonic” (1981), individuals directly integrate technology into their own bodies, using their cybernetic enhancements to navigate a society built on data exchange. These enhanced citizens, referred to as “technical” people, inhabit “…an information economy. They teach you that in school. What they don’t tell you is that it’s impossible to move, to live, to operate at any level without leaving traces, bits, seemingly meaningless fragments of personal information” (Gibson 9).

Humans recode their bodies through surgical means, and for women this means integration into the technological circuit that governs their society. Molly Millions, an enhanced Razorgirl, uses her cyborg body to become a skilled assassin after being implanted with “ten blades…beneath her nails, each one a narrow, double-edged scalpel in pale blue steel” (Gibson 4). With “surgical inlays…sealing her eyes in their sockets,” Molly navigates between the technical and Lo Tek societies, taking hit jobs and protecting herself against the Yakuza gangs. Her technological enhancements do not compose her entire being; rather, she willingly has “Chiba City circuitry traced along her thin arms,” in her eyes, and in her hands in order to make herself a capable assassin and a strong bodyguard to the eponymous Johnny, who transports information through his brain in the form of code (Gibson 11). This can be considered in light of Haraway’s theory that “the difference between machine and organism is thoroughly blurred; mind, body and tool are on very intimate terms” as a result of combining the technological and the organic (Haraway 303).

Embodying this blurring of boundaries, the Magnetic Dog Sisters are another example of women working within the circuits of society without being defined by gender roles. In a passing mention, Johnny notes: “the Magnetic Dog Sisters were on the door that night…One was black and the other white, but aside from that they were nearly as identical as cosmetic surgery could make them. They’d been lovers for years and were bad news in a tussle. I was never quite sure which one had originally been male” (Gibson 1).

While the two are not sisters by biology, they share an identity in both physical appearance and association. They represent some of the aforementioned dualisms of the body, such as racial divisions between black and white, but they also transcend traditional limitations as posthuman figures, one transgender and both cyborg in their modifications. Their individual identities become confused in Johnny’s memory, but they are considered equals in appearance, strength, and romantic partnership. Haraway asserts that cyborgs in science fiction “make very problematic the statuses of man or woman, human, artefact, member of a race, individual entity, or body,” and by confusing identities, they allow women to exist as something other than the prescribed notions of traditional female gender roles (Haraway 314). They are sisters, lovers, and identical although distinct, integrated into the cyberpunk society through technological means.

Julie Wosk includes a chapter on “Dancing with Robots” in My Fair Ladies, stating that “men in literature, film, and art have long been pictured dancing with robots and dolls- beautiful artificial women who gaze at them lovingly and fill them with wonder and bliss” (Wosk 152). Gibson undermines this trope in the climax of “Johnny Mnemonic” as Molly Millions dances not with, but against, a Yakuza killer in a battle to the death on the Lo Tek Killing Floor. Not an artificial woman, Molly utilizes her enhancements and protects herself against another trained assassin. The room beats with music “electronic, like an amplified heart, steady as a metronome,” as the woman becomes a part of the roiling floorboards and shrieking coils in a “mad-dog dance” (Gibson 11). Instead of being a creation designed to dance at a man’s pleasure, Molly lives and thinks for herself, controlling her own identity through her cybernetic enhancements. To Haraway, “the machine is us, our processes, an aspect of our embodiment,” and the cyborg figure melds the machine with the human body, creating parts of identity and ability that make up a new whole (Haraway 315). Molly’s strength and technological enhancements are a part of her embodiment and allow her to exist as a bodyguard, a fighter, and a partner to Johnny. Between the dancers there is no “wonder and bliss” but concentrated fervor and violence, until ultimately Molly’s victory leads the shamed man to leap to his death (Wosk 152). To dance with this cyborg invites danger and intrigue because her eyes do not gaze lovingly, but look out from behind silver screens that calculate her next movements.

For women to become a part of the integrated circuit, they need not literally graft technology into their skin; the machine can become an extension of the woman’s sense of self. Haraway asks, “Why should bodies end at the skin, or include at best other beings encapsulated by skin?” (Haraway 314). In Rolin Jones’s stage play The Intelligent Design of Jenny Chow (2006), the agoraphobic Jennifer Marcus constructs a cyborg double that learns from her maker’s personality while simultaneously developing an identity of her own. Unable to leave the house because of her obsessive-compulsive disorder and severe phobia of the outside world, Jennifer uses her creation Jenny Chow to interface with her biological mother in Dongtai, China. The robot serves as Jennifer’s integration into society, giving her a new body with which to overcome physical restraints as well as the patriarchal structure of the Chinese family that forced her mother to abandon her in the first place. Jenny Chow helps a mother and daughter reconnect despite the “world system of production/reproduction and communication” that keeps women oppressed, and in this way represents the cyborg “tools [which] embody and enforce new social relations for women worldwide” (Haraway 302). Despite their physical and emotional distance, cyborg technology allows these women to transcend boundaries.

While Jennifer imbues the robot with information from her own identity, Jenny Chow begins “taking on her own personality… curious, excitable, poor sportsmanship… the big thing was when I wasn’t looking, she was beginning to make her own decisions. With guidance she was becoming beautiful” (Jones 45). Jennifer watches her creation not only as a scientist making observations but also as a mother watching her daughter grow into adulthood. For this prodigious engineer and self-described mechanics whiz, “intense pleasure in skill, machine skill, ceases to be a sin, but an aspect of embodiment,” as her robot counterpart inherits her aspirations for maternal love and an awareness of her own cultural identity and incorporates them into its own personality (Haraway 315). Jenny is her own original design, and while built for the purpose of standing in for Jennifer, she begins to inhabit her own behaviors and feelings, much as a child learns from their parents but ultimately becomes a unique person. For Haraway, “in imagination, and in other practice, machines can be prosthetic devices, intimate components, friendly selves” (Haraway 314). Developed with instinctual programming provided by Dr. Yakunin, Jenny physically embodies Jennifer’s desire to meet her estranged biological mother and overcomes the physical limitations set by Jennifer’s crippling agoraphobia.

During the first act of the play, Jennifer’s adoptive mother arrives drunk and scolds her daughter with a harsh reality check:

ADELE: I live in the real world, Jennifer. I’m real. And in the real world, women get screwed out there and if you’re not prepared they will squash you… I’m real. This house is real, Jennifer. And you can hide in your room and log on if you want to. But that’s not real! That’s a dream! (Jones 31)

A stern businesswoman, Adele rejects her daughter’s desire to engage with digital technologies, thinking them a waste of the girl’s intelligence. She views Jennifer’s time talking with others on the internet as frivolous and immaterial, but these “tech-facilitated social relations” enable Jennifer to integrate herself with the outside world while she works on her robotics (Haraway 304). While Adele does not consider the Internet realm to be “the real world,” which in her mind represents material labor and physical embodiment, these technologies allow Jennifer to exist in spaces outside of her own home and her own body, imparting her own consciousness to Jenny Chow as well as speaking with her estranged biological mother on another continent.

In order to accomplish these feats, Jennifer takes a job repurposing missile parts for the United States government, using spare parts she acquires to build her robotic double. Haraway traces the genesis of cyborgs, calling them “illegitimate offspring of militarism and patriarchal capitalism… but illegitimate offspring are often exceedingly unfaithful to their origins,” and while this lineage may seem troubling, Jennifer exploits the military’s dependence on its own robotic technologies, helping to improve missile function while she gathers spare parts for her own work (Haraway 293). As cyborgs can be considered illegitimate children, “their fathers… are inessential,” lending a maternal narrative to the manifestation of these new beings, especially in the case of Jenny Chow (Haraway 293). While Jennifer has an adoptive father who cares deeply for her, her biological father plays no role in her desire to build her robot. Rather, Jenny Chow becomes the next woman in a lineage of mothers and daughters estranged by the patriarchy and reunited through technology.

Because of Chinese restrictions on family size, Jennifer’s mother gives her up so that the female infant, seen as less desirable because of inheritance rights, can have a better life in America. Somewhat mirroring her mother’s surrender of her as a baby, an upset Jennifer sends Jenny out into the world alone in an emotional exchange:

JENNIFER M.: You are flawed. You have to go.

JENNY C.: I am sorry. I am so very sorry.

JM: (pause) You have to go.

JC: I am very beautiful. (Jenny Chow climbs out the window) (Jones 68).

While Jennifer hopes that banishing her robotic counterpart will dispel her own personal shortcomings, she realizes that losing Jenny means losing a vital part of her life. Jenny represents the “intimate components, friendly selves” that Haraway recognizes in cyborgs in relation to humans. Speaking to the audience at the conclusion of the play, Jennifer regrets her choice to detach herself from Jenny and from participation in the integrated circuit her creation allowed: “JENNIFER: I made a lot of mistakes. But she’s not one of them. She’s my…perfect girl…She’s infinitely more complex than anything out there. And she’s very afraid. I can feel her. I can feel her” (Jones 69).

The language used to refer to Jenny takes both ownership of and responsibility for her being, indicating her status as companion, counterpart, and offspring. Jennifer calls her “my perfect girl,” as a mother might call her own child, but also as a creator referring to her product. Jenny was what Jennifer imagined to be the “perfect girl,” a better version of herself without the emotional baggage accompanying her agoraphobia and obsessive-compulsive disorder. Despite having pushed her out of her home, Jennifer can still feel a connection to Jenny, whose very existence helps her to engage with others in the integrated circuit far beyond the Internet alone. Their bond transcends physical limitations, connecting human to machine and woman to woman as they interact with one another and the world at large, enabled by technology and, at the very core, the desire to connect with others.

 

Rust in Peace: A Conclusion on Robot Suicides

“I’ll be back.”

The Terminator (1984), dir. James Cameron

 

Perhaps one of the more compelling and frightening aspects of the robot body is its longevity. The Terminator says it best in his famous line, “I’ll be back,” indicating the persistence and endurance of the android’s life thanks to enhanced technology and an intelligence made of code. Figures of the posthuman age, the synthetic humans of science fiction are composed of far more durable materials, more permanent information networks, and an existence that outlasts current civilization. As part of the human fetishization of embodiment, there is a great preoccupation with material form and the preservation of physical presence. This preoccupation leads to the intrigue of figures such as the sexbots, whose bodies can be fashioned to suit our sexual urges, while in other instances the focus is on what creative labor the robot can perform for us. As humans create what they desire to be new life, patterns of societal gender norms are applied to these cyborgs, restricting those built as recreations of cultural femininity to patterns of domesticity and subservience.

In certain cases of science fiction, we see the robot body used as a metaphor for the figurative death of a woman, which leads to her removal from the integrated circuit. The death of the living Meiko Maki enables the creation of her artificial self, already removed from society through her incarceration by the patriarchy of Earth. Joanna Eberhart is assumed to be killed and replaced with a brainwashed reproduction trapped in a suburban purgatory along with dozens of other housewives. Luba Luft has to be killed because “the servant had in some cases become more adroit than its master” (Dick 30). With cyborgs comes the human fear of being rendered obsolete by the mechanical offspring of our own creation. Robots are built in the image of humanity, imbued with its strengths and shortcomings, and while they do not carry our biological genes forward into survival, they do represent another sort of hope for the persistence of human intelligence and culture.

While in some narratives a woman must die in order for an android to replace her, other stories see the cyborg’s final act of agency in deactivating or otherwise ending the life of its body. In the French short film Lost Memories 2.0, a cyborg call girl formerly programmed to assist surgeons in a hospital asks her customer to deactivate her body, although her artificial intelligence persists within the Cloud. Unhappy with the way she has been sold and managed by a pimp, the android woman wishes to end her material existence so that she might be at rest. This can also be seen in The Vision when Virginia, to escape the turmoil of their suburban misery, drinks a glass of water and spends her final moments resting in the arms of her android husband as the liquid corrodes her internal systems. Vision’s thoughts narrate those final moments: “Virginia did the right thing. Or she did the wrong thing. Or she just did what everyone does,” and after his heartbreak he vows to bring back her consciousness in a new form (King 12.17). While doing so would deny Virginia her wish to die and once more make her his object, it suggests something about the permanence of the posthuman self. Because the mind and body are separated in posthuman existence, the intangible code of intelligence remains even after the death of the physical form. To deactivate the physical body does not destroy the posthuman mind, whose information exists without material form.

The mortality of the physical self often causes anxiety in those who contemplate what will become of their thoughts, memories, and lived experience upon death. Human creators bestow their own knowledge of life onto these android offspring, who then preserve and proliferate it. When humans cooperate with the cyborg rather than compete with it for survival, advancement naturally results from that integration, as imagined in Haraway’s theories of cyborg feminism and in those texts that see the machine as an extension of the prosthesis Hayles called the posthuman body. When the robot’s mechanical form is fetishized and kept close to its slave origins, as a laborer or an object for consumption, the figure becomes confined by humanity’s need for immediate gratification and fear of its own extinction. It is in collaboration with human knowledge, however, that the cyborg can take its fullest form, allowing civilization to benefit from the progress it enables. The posthuman mechanism effaces traditional binaries and empowers women connected through technologies across the world. To code consciousness into machines does not condemn humanity to irrelevance, but may in fact sustain the intelligence and information of the modern age well into posterity and posthumanity.


A Senior Honors Thesis

For the Terms: Fall 2016 and Winter 2017

University of California, Santa Barbara

Student: Amy L. Chase

Advisor: Professor Brian Donnelly


Works Cited

Cameron, James, director. The Terminator. Orion Pictures, 1984.

DeConnick, Kelly Sue (w), Valentine DeLandro (a), Taki Soma (a), Clayton Cowles (l), and Kelly Fitzpatrick (c). Bitch Planet, Vol. 2: President Bitch. Image Comics, 2014.

Del Rey, Lester. “Helen O’Loy.” Originally published in Astounding Science Fiction, 1938. PDF.

Dick, Philip K. Do Androids Dream of Electric Sheep? Random House, 1968.

Ferracci, François, director. Lost Memories 2.0. Short film.

Foley, Megan. “‘Prove You’re Human’: Fetishizing Material Embodiment and Immaterial Labor in Information Networks.” Critical Studies in Media Communication, Taylor & Francis Group, 2014.

Freud, Sigmund. “The Uncanny.” 1919. PDF.

Friedan, Betty. The Feminine Mystique. W. W. Norton & Company, 1963.

Garland, Alex, director. Ex Machina. Universal Pictures, 2015.

Gibson, William. “Johnny Mnemonic.” Originally published in Omni, 1981. PDF.

Haraway, Donna. “A Cyborg Manifesto.” Simians, Cyborgs, and Women: The Reinvention of Nature. Free Association Books, 1991. PDF.

Hauskeller, Michael. Sex and the Posthuman Condition. Palgrave Pivot, 2014. doi:10.1057/9781137393500.

Hayles, N. Katherine. How We Became Posthuman. University of Chicago Press, 1999.

Jones, Rolin. The Intelligent Design of Jenny Chow. Dramatists Play Service, 2006.

Kakoudaki, Despina. Anatomy of a Robot. Rutgers University Press, 2014.

Kellett, E. E. “The Lady Automaton.” Originally published in Pearson’s Magazine, 1901. PDF.

King, Tom (w), Gabriel Hernandez Walta (a), Michael Walsh (a), and Jordie Bellaire (c). The Vision, Vol. 2: Little Better than a Beast. Marvel Comics, 2015.

Levin, Ira. The Stepford Wives. Harper Collins, 2002.

Shakespeare, William. Hamlet. Signet Classics, 1998.

Shaw, George Bernard. Pygmalion. Simon & Schuster, 2009.

Shelley, Mary. Frankenstein. Bedford/St. Martin’s, 2000.

Wosk, Julie. My Fair Ladies: Female Robots, Androids, and Other Artificial Eves. Rutgers University Press, 2015.