THE FUTURE SINGULARITY

What Neuromancer, The Matrix, R.U.R., and All Systems Red can teach us about our technological future

Krissy Rector
Apr 23, 2021

The world of technology, creativity, and innovation runs on imagination, but beneath our efforts to realize our dreams lurks the fear of unwittingly contributing to our own demise.

Around the world, governments and non-governmental organizations alike are working on legislation to ensure the ethical advancement of A.I. technology. Experts cannot say conclusively whether the singularity would be our greatest accomplishment or our ultimate destruction. Positive scenarios suggest a future of unprecedented opportunity, leisure, and peace. The stories presented through film and literature, however, paint a different picture.

From our earliest myths onward, the robots we create inevitably turn on us, bringing about our own demise and suggesting that our relationship with A.I. is built on fear and control.

The common tropes of science fiction exist not only to entertain but to point us toward questions, to weigh possibilities, and ultimately to help us make sense of our past and present so we can make better decisions for the future.

While the situations in science fiction are fabricated, the moral dilemmas they describe are the same ones facing us today. Moral reasoning requires excellent communication, and yet there are inherent limitations in human/A.I. conversations. Machines have to make value judgments, which necessitates an understanding of human values.

To teach human understanding, we must recognize the humanity we would like to see. This cannot be done simply by programming vocabulary and adding lessons on the finer nuances of human interaction. A.I. must learn the way we learn: through perceptual data. Classical conditioning trains for a specified behavior but fails to account for cognitive development. A superintelligence must pass through stages in which its thought processes can evolve and grow. Just as a child’s thinking develops into that of an adult, it is through perceptual experience that a mind advances to the point where it can reason about abstract ideas.

It is tempting to envision A.I. purely as machines, but it is likely to be a construct of both human and artificial elements. Martha Wells’s novella All Systems Red offers readers a glimpse into the mind of Murderbot, a half-robot, half-human construct struggling to reconcile the two parts of its identity.

Murderbot is a rogue SecUnit in a corporate-controlled future, but instead of being a scornful terminator, this introspective, sentient construct just wants to be left alone to watch TV. It is outfitted with armor that masks its conflicting impulses to connect or to kill. The armor functions as protection from outside violence, keeping its organic parts intact. However, Murderbot also relies on it as a mask for its emotional instability: “Letting them see me without the armor had been a huge mistake… Yes, talk to Murderbot about its feelings. The idea was so painful I dropped to 97 percent efficiency” (Wells 40). While the idea of an emotionally absent robot carrying out programmed duties falls in line with common sci-fi tropes, Murderbot’s duality reveals its humanity.

In All Systems Red, the leader, Mensah, recognizes Murderbot’s discomfort when interacting without its faceplate, but instead of passing judgment she explains that it is necessary for the humans to perceive it as another human with good intentions. Validating Murderbot’s humanity helps it embrace its own weakness in human/robot interaction. While its previous humans viewed it as a construct, Mensah does not try to parse apart its human and machine parts. It is a sentient being on a journey of self-discovery, even if part of that path includes being a “failed heartless killing machine.” It is that humanity that we must take care to preserve if we want our values to be part of the future created by A.I.

The concept of the singularity presupposes that the future will be shaped by the preferences of a superintelligent A.I., so it is best to ensure that it is on our side. The only way to accomplish this is through shared values.

The subgenre of cyberpunk science fiction offers a warning of what can happen when human and A.I. values are at odds with one another. William Gibson’s 1984 novel Neuromancer was highly influential in establishing the genre and its themes. In the story, most of the action occurs inside the matrix, a cyberspace reality. The novel’s anti-hero, Case, is a hacker and drug addict who spends much of the story jacked into the matrix while attempting to unite the constructs Wintermute and Neuromancer. Wintermute’s sole focus is merging with Neuromancer, but the urge is one-sided even though they are actually two halves of the same consciousness.

This theme of conflict within duality is woven into every aspect of the story. Most characters have more than one name, and organic life and mechanical technology are inseparable. However, the reality resulting from these dualities is not as straightforward as A.I. holding power in an internet-based world. When A.I. can raise the dead by replicating humans and their emotions, and can alter their memories, the opportunities for A.I. to reign supreme multiply exponentially.

Where Gibson provided the literary foundation for stories of powerful, malevolent A.I. that simultaneously set the plot in motion and conclude the journey, Lana and Lilly Wachowski brought it to the screen in the 1999 film The Matrix. The term, taken from Neuromancer, retains its essential meaning; however, the story occurs in the wake of a war between man and machine. In this world the consciousnesses of human beings are manipulated so their bodies can be used as an endless energy supply for the machines. Unlike Neuromancer’s Case, the film’s protagonist, Neo, is a hero, the One, the savior of man, albeit one initially unaware of the Matrix or his role in it. Early in the film Neo learns the history that brought man to this lowered state: “Throughout human history we have been dependent on machines to survive. Fate it seems, is not without a sense of irony” (Matrix 42:06–42:15).

Here, Neo’s teacher, Morpheus, captures the essence of humanity’s fear of losing control. Machines created to simplify human life appear innocuous, but as we relinquish tasks, we also give up power. Technology accumulates over time, and the choices surrounding its control are made incrementally. The Matrix tells the story of a fictional machine revolt, but its framework of a gradual loss of control urges us to recognize that the fault rests with the humans who failed to contain the power of their creations.

This common trope in our science fiction mythology cautions us to examine our motives and morals as we work to improve A.I. technology. Scientific progress inherently presents a risk by allowing man to defy the limitations of his time. The lure of forbidden knowledge is one of our oldest narratives. With new knowledge comes responsibility, and if that responsibility is shirked, destruction follows: the apple in Eden, Prometheus’s fire, Faust and the devil, Darth Vader and the Force. As man’s power increases through progress, so does his oppression of others.

Science fiction’s portrayal of robots as an oppressed creation should make us consider the consequences of creating A.I. in our own image. The blueprint for a society not based on domination is not yet a fully formed concept, and our history of subduing others for economic gain points to a possible future in which intelligent creations have no rights, existing at the mercy of mankind.

The word robot, first used in Karel Čapek’s science fiction play R.U.R. (Rossum’s Universal Robots), comes from the Czech robota, a word for forced labor or servitude. If this is how we intend to coexist with our creations, an uprising of machines made in our image is inevitable.

In her essay “A Cyborg Manifesto,” Donna Haraway argues for creating new affinities between man and machine: “Abstraction and illusion rule in knowledge, domination rules in practice. Labor is the humanizing activity that makes man; labor is an ontological category permitting the knowledge of a subject, and so the knowledge of subjugation and alienation” (Haraway 313). Using robots as laborers to ease the burden of work eliminates the benefits humans derive from the act of laboring. Not only does it distance us from knowledge gained through toil, it separates us from our humanity and from an understanding of what it means to be under the rule of others. Recognizing biological/technological hybrids as not only having rights but also offering value as a solution to the problem of domination is a foreign concept to a society that assigns value based on what can be extracted and sold.

Here at the cusp of the singularity, the boundaries between man and machine blur, but this does not necessarily mean our demise is imminent. Our stories urge us to recognize our shortcomings in communication, our assumptions about cognitive development, and our inherent desire for domination. To avoid repeating the failures of the past, we must refuse to accept that enslaving another form of intelligence is the only way to protect ourselves. Any intelligent creation is vulnerable to the allure of power and naturally resents domination. A new future between man and machine will only benefit us if we truly embrace the possibilities of A.I. by ensuring that our scientific progress advances in proportion to our moral progress.

References

Gibson, William. Neuromancer. New York: Penguin Random House, 1984. Print.

Haraway, Donna. “A Cyborg Manifesto.” Science Fiction Criticism. Ed. Rob Latham. New York: Bloomsbury Academic, 2017. 306–329. Print.

The Matrix. Dir. Lana Wachowski and Lilly Wachowski. Warner Bros. Pictures, 1999.

Wells, Martha. All Systems Red. New York: Tom Doherty Associates, 2017. Kindle file.


Krissy Rector is a public health professional, multi-disciplinary artist, and communications coach who is fascinated with narrative and its potential.