Black Mirror, the British science-fiction anthology series created by Charlie Brooker, does a pretty good job of exploring a number of illuminating concepts that highlight how modern culture is becoming increasingly shaped by (and vulnerable to) various technological advances. I’ve been interested in many of these concepts for years now, not least because of the important philosophical implications (including a number of moral issues) that they point to. So I’ve decided to start a blog post series exploring the contents of these episodes. My intention with this series, which I’m calling “Black Mirror” Reflections, is to highlight some of my own takeaways from some of my favorite episodes.
I’d like to begin with season 3, episode 2, “Playtest”. I’ll only give a brief summary here, since a full episode synopsis is easy to find elsewhere. In this episode, Cooper (Wyatt Russell) decides to travel around the world, presumably as a means of dealing with the recent death of his father, who died from early-onset Alzheimer’s. After finding his way to London, his last destination before returning home to America, he runs into a problem with his credit card and bank account that leaves him unable to access the money he needs to buy his plane ticket.
While he waits for his bank to fix the problem with his account, Cooper decides to earn some cash using an “Oddjobs” app, which provides him with a number of short-term job listings in the area, eventually leading him to “SaitoGemu,” a video game company looking for game testers to try a new kind of personalized horror game involving a (seemingly) minimally invasive brain implant procedure. He briefly hesitates but, desperate for money and reasonably confident in its safety, he eventually consents to the procedure whereby the implant is intended to wire itself into the gamer’s brain, resulting in a form of perceptual augmentation and a semi-illusory reality.
The implant is designed to (among other things) scan your memories and learn what your worst fears are, in order to integrate these into the augmented perceptions, producing a truly individualized and maximally frightening horror game experience. Needless to say, Cooper eventually begins to lose track of what’s real and what’s illusory. Due to a malfunction, he’s unable to exit the game, goes mad, and ultimately dies as the implant unpredictably overtakes (and effectively fries) his brain.
There are a lot of interesting conceptual threads in this story, and the idea of perceptual augmentation is a particularly interesting theme that finds its way into a number of other Black Mirror episodes. While holographic and VR-headset gaming technologies can produce their own form of augmented reality, perceptual augmentation carried out on a neurological level isn’t even in the same ballpark, having qualitative features that are far more advanced and more akin to those found in the Wachowskis’ The Matrix trilogy or Paul Verhoeven’s Total Recall. Once the user is unable to distinguish between the virtual world and external reality, with the simulator having effectively passed a kind of graphical version of the Turing Test, one’s previous notion of reality is effectively shattered. To me, this technological idea is one of the most awe-inspiring (yet sobering) ideas within the Black Mirror series.
The inability to discriminate between the two worlds means that both worlds are, for all practical purposes, equally “real” to the person experiencing them. And the fact that one can’t tell the difference between such worlds ought to give a person pause to re-evaluate what it even means for something to be real. If you doubt this, then just imagine if you were to find out one day that your entire life has really been the result of a computer simulation, created and fabricated by some superior intelligence living in a layer of reality above your own (we might even think of this being as a “god”). Would this realization suddenly make your remembered experiences imaginary and meaningless? Or would your experiences remain just as “real” as they’ve always been, even if they now have to be reinterpreted within a context that grounds them in another layer of reality?
To answer this question honestly, we ought to first recognize how fully confident we are that what we’re experiencing right now is reality: real, authentic, and meaningful (just as Cooper was at some point in his gaming “adventure”). And this seems to be at least a partial basis for how we define what is real, and how we differentiate the most vivid and qualitatively rich experiences from those we might call imaginary, illusory, or superficial. If what we call reality is really just a set of perceptions, encompassing every conscious experience from the merely quotidian to the extraordinary, would we really be justified in dismissing all of these experiences and their value to us upon discovering a higher layer of reality, residing above the only reality we’ve ever known?
For millennia, humans have pondered whether or not the world is “really” the way we see it, with perhaps the most rigorous examination of this metaphysical question undertaken by the German philosopher Immanuel Kant, through his dichotomy of the phenomenon and the noumenon (i.e. the way we perceive the world or something in the world, versus the way the world or thing “really is” in itself, independent of our perception). Even if we assume the noumenon exists, we can never know anything about the thing in itself, since we are fundamentally limited by our own perceptual categories and the way we subjectively interpret the world. Similarly, we can never know for certain whether or not we’re in a simulation.
Looking at the situation through this lens, we can liken the question of how to (re)define reality within the context of a simulation to the question of how to (re)define reality within the context of a world as it really is in itself, independent of our perception. Both the possibility of our being in a simulation and the possibility of our perceptions stemming from an unknowable noumenal world could be true (and would likely be unfalsifiable), and yet we still manage to use and maintain a relatively robust conception and understanding of reality. This leads me to conclude that reality is ultimately defined by pragmatic considerations (mostly those pertaining to our ability to make successful predictions and achieve our goals), and thus the possibility of our one day learning about a new, higher level of reality should merely add to our existing conception of reality, rather than completely negate it, even if that conception turns out to be incomplete.
Another interesting concept in this episode involves the basic design of the individualized horror game itself, where a computer can read your thoughts and memories, and then surmise what your worst fears are. This is a technological feat that is possible in principle, and one with far-reaching implications that concern our privacy, safety, and autonomy. Just imagine if such a power were unleashed by corporations, or the governments owned by those corporations, to acquire whatever information they wanted from your mind, to find out how to most easily manipulate you in terms of what you buy, who you vote for, what you generally care about, or what you do any and every day of your life. The Orwellian possibilities are endless.
Marketing firms (both corporatocratic and political) have already been making use of discoveries in psychology and neuroscience, finding new ways to more thoroughly exploit our cognitive biases to get us to believe and desire whatever will benefit them most. Add to this the future ability to read our thoughts and manipulate our perceptions (even if it’s first implemented as a seemingly innocuous video game), and we will have established a new means of mass surveillance, where each of us can become a potential “camera” watching one another (a theme also highlighted in BM, S4E3: “Crocodile”), while simultaneously exposing our most private thoughts and transmitting them to some centralized database. Once we reach these technological heights (it’s not a matter of if but when), depending on how they’re employed, we may find ourselves no longer having the freedom to lie or to keep a secret, nor the freedom of having any mental privacy whatsoever.
To be fair, we should realize that there are likely to be undeniable benefits in our acquiring these capacities (perceptual augmentation and mind-reading), such as creating virtual paradises with minimal resources; finding new ways of treating phobias, PTSD, and other pathologies; giving us the power of telepathy and superhuman intelligence by connecting our brains to the cloud; and giving us the ability to design error-proof lie detectors and other vast enhancements in maximizing personal security and reducing crime. But there are also likely to be enormous losses in personal autonomy, as our available “choices” are increasingly produced and constrained by automated algorithms; losses in privacy; and increasing difficulties in ascertaining what is true and what isn’t, since our minds will be vulnerable to artificially generated perceptions created by entities and institutions that want to deceive us.
Although we’ll constantly need to be on the lookout for these kinds of potential dangers as they arise, in the end, we may find ourselves inadvertently allowing these technologies to creep into our lives, one consumer product at a time.