Don’t Panic! Will artificial intelligence redefine what it means to be human?

Features Editor Gemma Kent explores how the future of Artificial Intelligence could see reality and science fiction blur.

“The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not know which” – Stephen Hawking

If there’s one thing the twenty-first century has proven time and time again, it’s that just because something was once a fabrication of science fiction doesn’t mean it couldn’t one day become a reality. Back to the Future’s ubiquitous drones, 2001: A Space Odyssey’s video chatting and H2G2’s real-time language translation – all of these remind us of the uncanny ability of science fiction to predict where humanity’s technological advancements will lead us. One of the major game-changers sci-fi has prophesied since its inception is the rise of Artificial Intelligence: computers which can perform tasks that would normally require human intelligence. As we accelerate towards the creation of the world’s first truly sentient robot, it seems the movies are about to prove their prescience once more. But that raises the question: what now? If we are truly on the verge of the birth of a new super-species, how should we spend the intervening decade? On-street delirium and lunch-time conspiracy groups certainly have their appeal, but maybe we’d do well to start with a calmer frame of mind. Why not begin by reviewing the history of Artificial Intelligence, then move on to consider some of the potential futures science fiction has laid out for us and the ways in which our own world could one day realise them? Maybe we have nothing to fear in the first place – or maybe we do.

The Background Check

When we think of the birth of AI, we often default to the 1950s, when Alan Turing first proposed his ‘Turing Test’: a way of judging whether a machine could pass as human in conversation. In fact, we have been entertaining the idea of intelligent machines since Ancient Greece, whose myths tell of Hephaestus, the Olympian god of blacksmithing, creating automatons to assist him in his projects. However, it is accurate to credit Turing’s work as the first to spark practical investigation into the possibility of intelligent machines. Within little more than a decade of his proposal, the first AI programmes were unveiled: the 1956 ‘Logic Theorist’ and a 1962 checkers player. Neither of these programmes was ground-breaking by today’s standards (the checkers programme was only as good as a well-briefed amateur) but these creations paved the way for the technology surrounding us today.

The next twenty years buzzed with optimism as scientists investigated the newly named field of Artificial Intelligence. One of the key reasons for this sudden interest was a surge in science fiction films and literature, including the iconic 2001: A Space Odyssey. This optimism was soon stunted as the difficulty of replicating human mobility and problem-solving skills became clear. In 1973, what is known as the AI Winter set in: a period during which funding for AI projects was slashed for lack of progress. It was only about a decade later, in 1981, that businesses began to realise the lucrative potential of a machine that could perform at least some of the tasks a human could, and with this in mind we turned our attention to what has become known as Weak AI.

Also known as Artificial Narrow Intelligence, Weak AI can perform a narrow range of tasks (often to a superhuman standard), but cannot independently apply this knowledge to a new task. That is, weak AIs cannot generalise their knowledge of X to help them perform Y, in the way that you or I could apply our knowledge of how to play football to picking up rugby. Consider Deep Blue, a chess-playing computer built in the 1990s that went on to beat the reigning world champion, Garry Kasparov, in 1997. Deep Blue, while clearly adept at playing chess, was not ‘generally’ intelligent: it could not, say, play checkers or attempt Ludo without first being programmed to understand the strategy behind such games. It was capable of performing only within the strict parameters of its programming. In this way, Weak AI cannot learn for itself.
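To make that narrowness concrete, here is a minimal sketch in Python (purely illustrative – this is emphatically not how Deep Blue itself worked) of a game-player in the same spirit. It uses minimax search to play noughts and crosses perfectly, yet every scrap of its ‘knowledge’ is hard-coded for that one game:

```python
# A toy 'narrow' game-player: minimax search for noughts and crosses.
# Note how every rule -- the board, the win lines, the legal moves --
# is frozen into the code. Nothing here transfers to chess or Ludo.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move): X maximises, O minimises."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player                      # try the move...
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '                         # ...then undo it
        if best_score is None or \
           (score > best_score if player == 'X' else score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# From an empty board, perfect play is a draw (score 0):
print(minimax([' '] * 9, 'X'))  # (0, 0)
```

Hand this program a chess position, or even a 4×4 board, and it does not adapt – it simply breaks. That, in miniature, is the wall separating Weak AI from Strong.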

Programmes like Deep Blue seem commonplace to us now. In the years since its debut, we have seen the invasion of self-service tills in every supermarket, the rising prominence of phone assistants like Siri and Cortana in our daily lives, and – looking ahead – the prospect of self-driving cars and aeroplanes. But while these latter forms of AI seem a world away from the functioning of Deep Blue, there isn’t much separating them as far as the terminology is concerned: they remain restricted by the boundaries of their programming. For all our leaps and bounds over the last seventy years, we have yet to crack ‘the big one’: Strong AI.

Recalling the definition of Weak AI, it is easy to guess what Strong AI might be (hurray for the ability to generalise!). Strong AI, better known as AGI or Artificial General Intelligence, is AI capable of independent learning and generalising – AI which can, in a nutshell, think and reason as a human can. While we lack real-world examples, creations like HAL 9000 from 2001: A Space Odyssey and VIKI from I, Robot are good predictions of what AGI might look like. It will feel ‘human’ emotions as we do, and be capable of making its own choices based on its needs and wants. It will also represent a new era in our history.

The First 100 Days

Many experts predict the arrival of Artificial General Intelligence at some point within the next thirty years. If the rate of technological change is rapidly accelerating, how deeply steeped in AI will the world be by the time AGI arrives? The best way to estimate is to consider those technologies currently in their infancy: voluntary amputation to enhance physical aesthetics and ability; chips implanted in your arm or brain which let you control computers and electronic doors with your mind; the customising of your future child’s eye, hair and skin colour. All of these fields are just now taking their first steps, but they will be coming of age by 2029, meaning that AGI will bloom in a world that has already radically altered its perception of where to draw the line between human and machine.

So if we assume that AGI will arrive in a world accustomed to such technologies, what does this mean for the impact it will have on our lives? To explore this, let us examine one growing application of AI: artificially intelligent sex dolls.

Currently garnering a lot of media attention, these life-size, human-like (typically woman-like) dolls are a growing market among those who claim to struggle to build meaningful relationships with other people. Unlike the static models previously available, contemporary dolls interact and converse with their ‘owner’, and come with a variety of settings meant to simulate a real human personality. The debate surrounding the ethics of this practice is lengthy and deserving of a whole piece in itself, but for this article it is enough to consider some of the questions such a debate might raise. Is it ethical to simulate rape on such robots? Could they provide a safe outlet for humans who cannot integrate well into society? How drastically could these robots affect our perception of real-world people? Add into the mix, then, the potential for these robots to be sentient creatures. What happens when these dolls have, to some extent, thoughts and feelings of their own, and decide they do not wish to continue performing their designated role? The outcome mirrors contemporary sex slavery, except that these slaves can be built and programmed for the job.

Moreover, many experts fear that these dolls could damage how we view women in our society: a woman-like doll that you can possess wholly and do with as you please may well feed a mentality that sees women as property, useful only as a means of pleasuring their masters. If these dolls are matched with an AGI programme that makes them all but organic women, how much more potent could this message be? These concerns are grounded in real-world developments, but science fiction has explored them too. Niska, a self-aware synthetic human in Channel 4’s Humans, is a sex slave in a brothel, forced to submit to the desires of its patrons. When one man’s particularly vulgar request finally pushes her over the edge, she murders him and escapes, then spends much of her new-found freedom wreaking vengeance on humanity for its failure to recognise her higher-order existence. The example is extreme, but it highlights how, in the early days of AGI’s existence, humanity will struggle with the gravity of what it has created and with understanding what kind of respect and tolerance it is owed. In the meantime, of course, AGI like Niska and the other maltreated creatures of her kind will probably be plotting how best to rid themselves of us.

Humanity, Impeached

So, let’s be pessimistic, shall we? Considering humanity’s reputation when it comes to existing peacefully with our own sisters and brothers, it probably won’t be long before we are overrun by our smarter, stronger, more durable AGI cousins. They will take seats in our government, abolish our rights, and make us the test dummies in the crash simulations for their new, expensive, top-of-the-range cars. It might sound like an extremist point of view, but this prophecy of carnage is not only foretold in science fiction films galore; it is also voiced by some of today’s leading figures in technology. Elon Musk, CEO of Tesla (a company leading the way in self-driving cars), is well-known for his unease at the rise of AGI. Taking to Twitter in recent weeks, Musk described Artificial General Intelligence as “vastly more risk [sic] than North Korea” and called for the regulation of what he deems a “danger to the public”. These views are not universally shared, of course, and others in the industry, most recently Mark Zuckerberg, have sparred with Musk over his so-called scaremongering.

Detouring from the words of the ‘experts’, though, what does science fiction have to say on the matter? Certainly, there is a fondness in the genre for portraying futures in which human and machine violently clash, but it is worth considering that a primary reason for this is the need for conflict and resolution in any narrative work. Sci-fi would hardly capture our attention with problem-free coexistence and tea-drinking. Nonetheless, there are worthwhile musings to be scavenged from these brutal face-offs, two of which we will consider here.

First, in a classic demonstration of the AI-murders-humanity trope, there is 2004’s I, Robot, in which the AI supercomputer VIKI causes chaos and destruction as she works to rob humanity of its free will. The reason? VIKI has been created to protect humanity from harm, and, upon seeing the devastating destruction we cause to ourselves and our planet, she uncovers a loophole in her programming which allows her to ‘ethically’ seize control of our lives – infringing upon what we see as a core element of our humanity in order to protect that very humanity. To us, this logic seems perverse, but it highlights how a whole new intelligent species may hold values contrary to our own. This fear is a more globalised version of a common enough trepidation surrounding the loss of culture, one that often disguises itself in science fiction. A second example is found in Star Trek: The Next Generation in the form of the Borg, a race of cybernetic humanoids most memorably encountered by the Enterprise in seasons 3 and 4. They are partially organic, but have combined themselves with technology to the point that they lack any individual characteristics and operate as a (nearly!) unstoppable collective mind. Primarily, this fear of the collective serves as an allegory for Communism, but it also embodies the threat posed by new technologies which do not build upon our human form but override it entirely. More than fearing the death and chaos AGI might cause, we fear how it might usurp, and even improve upon, our own ‘human’ ways of living.

Together in Perfect Harmony

Maybe, just maybe, our AI future is not as doomy and gloomy as this article has so far suggested. After all, while there is a littering of science fiction that suggests a gruesome and troubled future for our relations with AGI, there are just as many stories which foretell that humanity and robot-kind will live side by side, with only as many problems as you would expect between new and vastly different neighbours.

The fulcrum of the issues we will face will be the need to redefine what it means to be human: what it is to love, grieve and, generally, think, when we can create a non-organic entity that also experiences such qualia. A functional definition of love, for example, might run something like “love is a dilation of the eyes and a quickening of the heart when in the presence of someone one deems attractive.” But if an android has neither conventional eyes nor heart, how does that affect our definition? This dilemma is hardly intrinsically negative; if anything, a greater pool of varied experiences will only strengthen our ability to refine our definitions of the crucial parts of life. Already, you have probably decided that my functional definition of love doesn’t quite hit the spot. More minds and more unique viewpoints, from a philosophical point of view, can only enrich such debates.
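To see just how brittle that functional definition is, here is a tongue-in-cheek sketch in Python (entirely hypothetical – nobody is proposing this as serious philosophy) that takes the definition above literally. The interesting failure is not that the android tests negative for love, but that the test cannot meaningfully be applied at all:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Being:
    # None means the being simply lacks that anatomy.
    pupil_dilation_mm: Optional[float]
    heart_rate_rise_bpm: Optional[float]

def in_love(being: Being) -> bool:
    """The article's functional definition, taken literally:
    dilated eyes plus a quickened heart near someone attractive."""
    if being.pupil_dilation_mm is None or being.heart_rate_rise_bpm is None:
        # The definition quietly assumes human anatomy: for an android
        # with neither eyes nor heart it isn't false, it's undefined.
        raise ValueError("definition does not apply to this kind of being")
    return being.pupil_dilation_mm > 0 and being.heart_rate_rise_bpm > 0

human = Being(pupil_dilation_mm=0.4, heart_rate_rise_bpm=12.0)
android = Being(pupil_dilation_mm=None, heart_rate_rise_bpm=None)

print(in_love(human))            # True: the definition 'works' for humans
try:
    print(in_love(android))
except ValueError as err:
    print("android:", err)       # the question was malformed, not answered
```

An AGI that loves, if it ever does, will force us to replace anatomy-bound definitions like this one with something that tracks the experience rather than the plumbing – which is precisely the philosophical work described above.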

To pass the reins to science fiction: it is my personal belief that you can’t discuss the potential for AGI to reshape our thinking without considering the character of Data from the series Star Trek: The Next Generation. A valued officer aboard the Starship Enterprise and the only one of his kind, Data is frequently the source of some of the series’ most introspective and thought-provoking episodes, among them The Measure of a Man (2×9) and The Most Toys (3×22). Both episodes place Data in situations in which his right to bodily autonomy is drawn into question, and in which his attempts to assert his status are persistently downplayed and dismissed. In the former episode, for example, a scientist seeks to take Data apart so as to replicate the technology that gives him sentience, though Data expresses concern that his unique idiosyncrasies and, essentially, his life, may be erased in the process. The scientist’s efforts are thwarted only when the ship’s captain highlights that humanity has yet to properly define what consciousness even is and, as such, cannot claim that Data has none just because he is not human. The latter episode follows a similar thread, questioning whether Data can truly feel human emotions. In both instances, the existence of an AGI crew member working alongside human beings opens up a whole new realm of unexplored existential territory, in which the crew (and, by extension, the viewers) are encouraged to boldly think where no one has thought before.

Life, The Universe and Everything

It is this talk of unasked questions that spurs me to look at one of science fiction’s quirkiest children: The Hitchhiker’s Guide to the Galaxy. If ever there was a work of science fiction that could flaunt its irreverence for systems of logic and still remain a definitive classic of the genre, it is this one. A key element of the story is the supercomputer Deep Thought, whose function is to come up with the ‘Answer to the Ultimate Question of Life, the Universe and Everything’. For all its superior AGI, however, all Deep Thought can produce at the end of seven and a half million years of calculation is the number ‘42’, claiming the fault lies not in the answer but in there never having been a proper question to begin with. In an effort to find this question, Deep Thought (which, incidentally, lent its name to the chess computer that preceded our earlier-mentioned Deep Blue) designs a superior entity: our humble planet Earth. Ignoring the gloomy end that meets the Blue Planet in the early pages of the book, there is something captivating about this version of events, not least because in this reality it is the AI that creates us, and not the other way around. But even more than that, the idea that Earth and its inhabitants were created not as a means of answering life’s big questions, but as a tool for uncovering the very questions we should be preoccupied with in the first place, is both deeply confounding and surprisingly uplifting. For one, it means we do not need to concern ourselves with working everything out – in the context of our discussion here, it means we do not need a definitive answer to what our future will look like with AGI in it. Rather, as a species, our job is to ask, ask, ask, to reconcile ourselves to not knowing everything, and to recognise that there is something valuable in the journey to knowledge. And perhaps that is the best way we can view AGI going forward: not as a problem we must solve, but as fellow problem-solvers in themselves, designed by us to aid in our quest to ask the right question. So, don’t panic. It might not be an apocalypse after all.