Traditional literature and poetry can do a great deal to help humans comprehend the minutiae surrounding them, but some issues are too complex for a world that resembles our own. That is where science fiction comes in. SF (“SF” is the common literary abbreviation for “Science Fiction,” not “San Francisco,” the setting of the novel) serves as a means to take questions to the extreme, for analysis in their logical (or sometimes illogical) final state. At the core of good science fiction is an examination of complex and real questions through a means that cannot exist in the world as we know it. Indeed, we should heed Nigel Wheale’s statement: “[i]t really is time to take science fiction seriously” (297). By taking advantage of this suite of tools, authors can examine large questions about the essence of humanity and its interactions. One of the best examples is Philip K. Dick’s Do Androids Dream of Electric Sheep?, “a classic in the genre. […] It works on many levels and addresses a range of humanity’s most pressing concerns” (Axelrod 84). Dick uses androids to examine the concepts of nature and artificiality in order to dissect the false privileged binary between the two.
A basic question one might ask of Do Androids Dream? is whether Dick considers androids living things, but that question is far too reductive. The novel makes clear that electric things are also alive through the protagonist Rick Deckard’s observation: “The electric things have their lives, too. Paltry as those lives are” (Dick 241). This sentence provides a good starting point for the scholarly work done around the novel: the issues of existence and consciousness are taken as givens. With the novel’s advancements in making robots more “human,” there is no way to tell an organic human from a functioning android until death, when a bone marrow test can be performed. Because telling the two apart is so remarkably difficult, the novel takes android consciousness, and even sapience, as a given. The question is not whether androids are alive, but how their lives should be regarded.
What makes a human? Perhaps it would be easier to define man versus not-man if we could communicate with things that are not human, but such conversations are, at this point, beyond the realm of possibility. So we, as a species, are forced to discuss the issue among ourselves. Able to experience the concept of intelligence only in itself, humanity arrived at an anthropocentric worldview. This belief in human supremacy is best summarized by Per Schelde as the feeling that humans are “specially created with a purpose: their salvation and final happiness. Tied in with this purposefulness is the notion of a free will” (125). The ability to actualize desires becomes the central point of differentiation between what counts as a conscious living being and what does not, and is therefore lesser.
This thinking establishes three levels of privileged class: human, animal, and other. The notion is one of the traditional ways man has distinguished himself from machine. Dick upsets this balance by introducing androids, intelligent machines that resemble the top caste, humans. Through this unsettling, androids seek to leave the lowest caste, the other, and function among the hegemonic beings. So, asks Wheale, “what would be the difference between a physically perfect android kitted out with memories and emotions passably like our own, and a person nurtured through the usual channels,” and furthermore, “what is an authentic human psyche” (298)? For centuries, the question was as simple as attempting a conversation (similar to the traditional Turing test for differentiating between a human and a computer consciousness).
In the real world, it is fairly safe to determine an entity’s humanity by looking at it. In Do Androids Dream?, a glance is no longer enough. Mark Axelrod notes the difficulty, wherein “androids have been reproduced to be exact replicas of real humans; therefore, they are difficult to tell them from real humans” (86). By relying on so simple a test for so long, mankind constructed what Christopher Sims names the “binary [of] natural/artificial” (68): humans can converse; animals can react but not converse; and the bottom class, the other, has no ability to recognize humanity at all, comprising plants and inanimate objects (in present-day reality, robots, largely incapable of intelligent thought, would belong to this category). Why are animals privileged above the other? Tony Vinci explains that animals are granted special status “for their ability to register human existence, but non-intersecting gazes between humans and animals position them as objectified commodities” (100). Within this critical theory, not all animals are equal in their value to humans.
For example, many humans would value a dog over a fish for its ability to interact with, and even assist in, human life. The dog is “man’s best friend” because it possesses enough intelligence to make man’s life directly better, and to recognize our existence in contrast to that of other beings. Still, animals clearly rank lower: virtually every culture uses animal names as epithets for other humans (English examples include pig, ass, cow, and bitch), and many humans take great offense at the suggestion of distant evolutionary kinship with primates. Animals, though not human, are valued for their ability to enhance human life. This leads to the third class, the other. The parties of concern in the novel are the electric beings, both the artificial pets and the human-mimicking androids. Here the natural/artificial binary starts to become apparent.
Up to this point, this essay has focused on what differentiates the three castes in relation to their positions in Do Androids Dream?. I will turn now to how humans in the novel reinforce their position atop the arbitrary hierarchy. An iPhone has more utility for the average human than an elephant: the former can serve any one of millions of functions for any modern person with at least one functional hand, whereas the elephant is a variant of nature that some humans find aesthetically pleasing. Yet anyone who chose to save an iPhone over an elephant would be looked at as a monster. So humans will place some value in the other for its utility, but its inability to recognize humanity makes it a cut below animals. Sims explains that this is because “modern Western cultures hierarchize the natural and the artificial” (85). This “difference” is an illusion; there is nothing between the two but human opinion. Those concerned with preserving the high station of the natural cannot stand to see a change in this ancient balance of power. So, what then of the intelligent android?
Rick Deckard’s San Francisco exists after “World War Terminus,” a nuclear armageddon that killed most life on Earth and left the survivors irradiated beyond reproductive capability, while the rich fled to the safety of Mars with android servants. Where many real-life humans cherish their animals as friends and companions, a pet is less a companion to the post-apocalyptic person than a prized possession conveying social status, in part because of its rarity. Because the radiation has made it impossible for any species to reproduce fast enough to keep up with demand, only the rich can afford a genuine animal; everyone else buys electric mimicries, such as Deckard’s titular electric sheep, to try to convey status. In this society, “crimes against animals which universally horrify humanity” (Wheale 300) are fundamental to the maintenance of the status quo. If humans could not have their reaction to violence against another living thing, how would they know they are human?
Drowning in anthropocentrism, humans promptly place themselves at the top of the organic/inorganic class system and move to entrench their hegemonic status. In explaining human violence toward androids, Axelrod writes, “There is a fear that [androids] will create some kind of havoc if not eliminated, […] what kind of havoc they would wreak is not exactly detailed though” (86). Because androids can be so hard to spot, humans need something with which to draw a line. To this end, the humans of Dick’s imagination develop the “Voigt-Kampff” empathy test, administered by the protagonist Rick Deckard to suspected androids. The goal of the test is to detect empathy, which post-apocalyptic society has decided is the paramount human virtue. The primary reason for this premium on empathy, and on testing for it, is that robots do not feel empathy, or at least not in a way that humans recognize.
The novel’s humans are so concerned with empathy that they build a religion around it, “Mercerism.” In this newly founded devotion, “Empathy is the paramount tenet of Mercerism” (Sims 74). Here the value of animals becomes relevant. Vinci argues that animals are able to produce empathetic emotions and comfort, but that really “what humans are ‘with’ is not animals but the imagined lack of their own lack” (100). Counterintuitively, the humans of Do Androids Dream? rarely empathize with one another.
Axelrod describes the situation: “Deckard’s thoughts, mediated by the narrator, are often insightful for Deckard himself; however, we know very little about his wife’s, Iran, thoughts and concerns on more than a superficial level” (91). By contrast, Wheale notes that in the twenty-four hours the novel takes to unfold, “Rick Deckard’s infatuation with Rachael [a Nexus-6 android] is the most troubling instance of this problem. In the novel, bounty hunter and android sleep together” (304). Deckard has an intimate and emotionally meaningful connection not with the woman he has agreed to spend his life with, but with a Nexus-6 android he was sent to examine for signs of empathy just hours before. As Rachael notes, “You’re not going to bed with a woman […] don’t think about it, just do it. Don’t pause and be philosophical, because from a philosophical standpoint it’s dreary. For us both” (Dick 194). In a world concerned with empathy on a religious scale, people do not seem overly concerned with their fellow humans.
Wheale explains that Do Androids Dream? “employs this idea of ‘affect’ to distinguish between a ‘person-Thing’ and a human entity: humanity experiences affect (and affect-ion), robots don’t” (299). Yet some biological people fail to meet this definition of humanity as empathy. Wheale further describes the dilemma of people who “suffer from a ‘flattening of affect’, and in the test situation could be mistaken for robots” (299) because they are not mentally able to feel empathy. Though traditionally born and raised humans, they are incapable of the primary characteristic the global religion assigns to humanity.
This is not the only criterion that can disqualify people from humanity, and empathy for these people seems to be in short supply. One of Do Androids Dream?’s major characters, John Isidore, “had failed to pass the minimum mental faculties test, which made him in popular parlance a chickenhead” (Dick 18). Isidore is capable of some empathy, as illustrated by his interest in animals and his android companions, but he is not of high enough intelligence to be granted full and legal personhood, which would allow him to get a better job, own an animal, and flee the irradiated landscape of Earth for the safety of Mars. Both those with a flattened affect and those with sub-par mental faculties are biologically and genetically human, but they are not considered such. So humanity is not even tied to biology, as the privileging of animals over robots would suggest. Humanity is tied to power.
Having reviewed the negative consequences of using empathy to differentiate mankind above all else, I will now show why this is a flawed test and a false construct. Consider first the “empathetic” practice of Mercerism itself. Iran Deckard demonstrates it in the novel’s opening scene when she connects to the Mercerist “empathy box,” which shows the object of the religion, Wilbur Mercer, going through trials and tribulations to spark empathy among his followers. Wheale describes the process: by “tuning in to an ’empathy box’ each individual shares in the Ascent of Mercer, and shares the antagonism directed to their god-figure by some unknown enemies” (299). This allows humans to share in the experience of suffering together with Mercer, virtually the definition of empathy; however, the experience is delivered in an artificial manner. Galvan goes so far as to describe “the empathy box, which despite its name more undermines than facilitates the experience of emotional community” (418).
Humans do not attend an event occurring live and in person; they are essentially watching a rerun on TV. The experience is delivered by a human creation that artificially conveys something not actually happening in the moment. Optimists might argue that the artificial delivery is inconsequential because the “feeling with” of empathy is still natural. This would be a valid argument within the context of Do Androids Dream?, except that the full experience of Mercerism, that world’s primary religion and inspiration for empathy, is fundamentally false.
Wilbur Mercer, believed to be an actual person, is revealed by Buster Friendly to be a man “named Al Jarry, who played a number of bit parts in pre-war films” (Dick 207), who acted out his persecution for the camera and then retired to a secluded life in Indiana. This “empathy” was not with a real person or even a real event, but a purely synthetic experience that Jill Galvan argues is primarily a means for the government to keep people from rebelling against the structure (417). Not only is the means of experience artificial and the experience conveyed synthetic; the purpose of the storytelling is not organic and grassroots either, but a means for the oligarchs within the hegemonic class to guard their power. This leads to the conclusion that empathy either can be artificial or is no great virtue of humanity. Either way, the outcome leaves humans without a jewel for their crown of anthropocentrism. Even worse, the androids are gaining ground in empathy. So, let us review the moral implications of differentiating humans as a uniquely superior and moral class.
In some SF it is easy to spot an android; they look fake to even the unobservant eye. As previously noted, this is not so in Do Androids Dream?, where a living human and a functioning android cannot be fully differentiated. As Wheale notes, “the latest generation of Nexus-6 ‘andys’ approaches nearer and nearer to human” (300), and it becomes more and more difficult to tell a difference at a glance. That is where the protagonist, Rick Deckard, brings in the Voigt-Kampff test. One of the first conflicts presented to Deckard is testing the Nexus-6s. In this scene, Dick reveals that the Voigt-Kampff test was not the first of its kind; it is part of a series of ever more difficult tests. The definition of empathy, and consequently of humanity, is not fixed: it has changed over time as the hegemonic humans have moved the goalposts to keep the number of privileged people to a minimum. This is not merely a group of people sticking to what they know for fear of the new; it is a concerted effort to suppress groups both organic and mechanical.
Tony Vinci writes, “The post-apocalyptic culture depicted in the novel is based upon anthropocentric values constructed in such a way as to belittle and disempower human and nonhuman others (‘specials,’ androids, ersatz animals) by defining the human as a specialized category of being that has exclusive access to empathy” (92). In this situation, the androids are treated not as people but as slaves. Perhaps androids might reach the same status as humans if they meet certain conditions, specifically the current definition of empathy, but every time they manage to meet these standards of humanity, the powers that be change the definition so that their club stays closed to anyone looking to climb the ranks. This includes some androids that would meet classical definitions of humanity with flying colors.
Luba Luft is one of Deckard’s android targets, found at an art exhibition and posing as an opera singer, a very bourgeois art. To be deconstructionist for a moment: the arts are also called the humanities, meaning that someone who can perform a humanity at a high level should most likely be considered human. Luft operates as a very talented human artist, not only expressing her own form of humanity but also potentially helping organic humans achieve a more fulfilling life. In this scenario, Luft should be considered a human, or at least something on the order of an animal; however, Galvan writes,
In effect, it is not the scenarios that Rick posits that might prove Luba Luft guilty; rather, it [is] the resolute relationship of signifiers and signifieds – the vise-like stability of the dialectical code – that proclaims the law’s authority and thus already brands her a criminal. Deputized to administer the test, Rick insists repeatedly upon Luba’s “response,” but in Baudrillard’s view, of course, that response would only confirm the operation of the hegemonic code. (421)
What this means within a power structure is that the signifiers are perpetually changed to keep particular signifieds in a position of lesser power. The androids are trapped as slaves by this hegemonic sign.
Typically, the androids do not do much to resist their bondage, but Sims notes, “Rarely, an android slave will kill its master and flee Mars for haven on Earth” (67). Throughout the novel, Deckard, a bounty hunter, stands as a forceful means of enforcing the natural/artificial binary of humans and androids, much as the Fugitive Slave Act permitted white men to hunt escaped slaves looking for a way to leave their masters. Humans are deeply concerned about seeing their position at the top of the pyramid challenged, and so they assert the whole of their power to suppress the rising forces of intellectual opposition, following the model of Marxist theory.
Androids are transformed into the other and, as Axelrod notes, even largely resemble a traditionally marginalized European ethnicity: “Unlike the nonandroids, each android […] has Euro-Slavic names or features” (86). The hegemonic forces use their privileged rank to exploit the lower classes and prevent them from removing that privilege. Istvan Csicsery-Ronay argues, “Hypercapitalism labors to replace them with the ‘multicultural’ coexistence of irresolvable, irreducible, and intractable differences that must never develop into serious challenges to imperial sovereignty. The utopian ideal of universal right and law is replaced by the imperial practice of corruption” (242). Once again, even looking beyond the abuse of androids, there are a good number of natural-born humans, like John Isidore, who are not allowed to interact with the more respected humans recognized as intelligent, empathetic, and generally better. It is just another way for the powerful to oppress the powerless.
Class relations are a very difficult thing to discuss in America. Csicsery-Ronay states, “This is one reason why some Marxist critics consider the genre to be inherently critical, despite the fact that careful social analysis rarely plays a central role in sf narratives […] the way global capitalism prevents dialectical historical awareness from coming to revolutionary consciousness” (242). No one likes to acknowledge that people of a higher station do not view them as equals because of class. That is the brilliance of Do Androids Dream?: androids are used as powerful equals standing alongside organic humans to expose the violent hierarchies mirrored in nature and human society. Whether the work is explicitly, implicitly, or not at all Marxist, Dick forces readers of Do Androids Dream? to confront the fundamental power structures and assumptions of human identity.
Axelrod, Mark. I Read It at the Movies: The Follies and Foibles of Screen Adaptation. Portsmouth, NH: Heinemann, 2007. Print.
Csicsery-Ronay, Istvan, Jr. “Science Fiction and Empire.” Science Fiction Studies vol. 30, no. 2 (2003): 231-45. JSTOR. Web. 28 Mar. 2017.
Dick, Philip K. Do Androids Dream of Electric Sheep?. New York: Ballantine, 2008. Kindle.
Galvan, Jill. “Entering the Posthuman Collective in Philip K. Dick’s ‘Do Androids Dream of Electric Sheep?’.” Science Fiction Studies vol. 24, no. 3 (1997): 413-29. JSTOR. Web. 25 Mar. 2017.
Schelde, Per. Androids, Humanoids, and Other Science Fiction Monsters: Science and Soul in Science Fiction Films. New York, NY: New York U, 1993. Print.
Sims, Christopher A. “The Dangers of Individualism and the Human Relationship to Technology in Philip K. Dick’s ‘Do Androids Dream of Electric Sheep?’.” Science Fiction Studies vol. 36, no. 1 (2009): 67-87. JSTOR. Web. 25 Mar. 2017.
Vinci, Tony M. “Posthuman Wounds: Trauma, Non-Anthropocentric Vulnerability, and the Human/Android/Animal Dynamic in Philip K. Dick’s ‘Do Androids Dream of Electric Sheep?’.” The Journal of the Midwest Modern Language Association vol. 47, no. 2 (2014): 91-114. JSTOR. Web. 28 Mar. 2017.
Wheale, Nigel. “Recognising a ‘human-Thing’: Cyborgs, Robots and Replicants in Philip K. Dick’s ‘Do Androids Dream of Electric Sheep?’ and Ridley Scott’s ‘Blade Runner’.” Critical Survey vol. 3, no. 3, (1991): 297-304. JSTOR. Web. 22 Mar. 2017.