This Black Mirror reflection explores season 4, episode 1, “U.S.S. Callister”. You can click here to read my last Black Mirror reflection (season 3, episode 2: “Playtest”). In U.S.S. Callister, we’re pulled into the life of Robert Daly (Jesse Plemons), the Chief Technical Officer at Callister Inc., a game development company that has produced a multiplayer simulated-reality game called Infinity. Within this game, users control a starship (an obvious homage to Star Trek), although Daly, the brilliant programmer behind this revolutionary game, has his own offline version that he has modded to look like his favorite TV show, Space Fleet, where Daly is the Captain of the ship.
We quickly learn that most of the employees at Callister Inc. don’t treat Daly very kindly, including his company’s co-founder James Walton (Jimmi Simpson). Daly appears to be an overly passive, shy introvert. During one of Daly’s offline gaming sessions at home, we come to find out that the Space Fleet characters aboard the starship look just like his fellow employees, and as Captain of his simulated crew, he indulges in berating them all. Because Daly is Captain and effectively controls the game, he is rendered nearly omnipotent, able to force his crew to perpetually bend to his will, lest they suffer immensely.
It turns out that these characters in the game are actually conscious, created from Daly having surreptitiously acquired his co-workers’ DNA and somehow replicated their consciousness and memories and uploaded them into the game (and any biologists or neurologists out there, let’s just forget for the moment that DNA isn’t complex enough to store this kind of neurological information). At some point, a wrench is thrown into Daly’s deviant exploits when a new co-worker, programmer Nanette Cole (Cristin Milioti), is added to his game and manages to turn the crew against him. Once the crew finds a backdoor means of communicating with the world outside the game, Daly’s world is turned upside down with a relatively satisfying ending chock-full of poetic justice, as his digitized, enslaved crew members manage to escape while he becomes trapped inside his own game as it’s being shut down and destroyed.
This episode is rife with a number of moral issues that build on one another, all deeply coupled with the ability to engineer a simulated reality (perceptual augmentation). Virtual worlds carry a level of freedom that just isn’t possible in the real world, where one can behave in countless ways with little or no consequence, whether acting with beneficence or utter malice. One can violate physical laws as well as prescriptive laws, opening up a new world of possibilities that are free to evolve without the feedback of social norms, legal statutes, and law enforcement.
People have long known about various ways of escaping social and legal norms through fiction and game playing, where one can imagine they are somebody else, living in a different time and place, and behave in ways they’d never even think of doing in the real world.
But what happens when they’re finished with the game and go back to the real world with all its consequences, social norms, and expectations? Doesn’t it seem likely that at least some of the behaviors cultivated in the virtual world will begin to rear their ugly heads in the real world? One can plausibly argue that violent game playing is simply a form of psychological sublimation, where we release many of our irrational and violent impulses in a way that’s more or less socially acceptable. But there’s also bound to be a difference between playing a very abstract game involving violence or murder, such as the classic board game Clue, and playing a virtual reality game where your perceptions are as realistic as can be and you choose to murder some other character in cold blood.
Clearly in this episode, Daly was using the simulated reality as a means of releasing his anger and frustration, by taking it out on reproductions of his own co-workers. And while a simulated experiential alternative could be healthy in some cases, in terms of its therapeutic benefit and better control over the consequences of the simulated interaction, we can see that Daly took advantage of his creative freedom, and wielded it to effectively fashion a parallel universe where he was free to become a psychopath.
It would already be troubling enough if Daly behaved as he did toward virtual characters that were not actually conscious, because it would still reveal a man pretending that they are conscious, a man who wants them to be conscious. But the fact that Daly knows they are conscious makes him that much more sadistic. He is effectively in the position of a god, given his powers over the simulated world and every conscious being trapped within it, and he has used these powers to generate a living hell (thereby also illustrating the technology’s potential to, perhaps one day, generate a living heaven). But unlike the hell we hear about in myths and religious fables, this is an actual hell, where a person can suffer the worst fates imaginable (it is in fact limited only by the programmer’s imagination), such as experiencing the feeling of suffocation, yet being unable to die and thus with no end in sight. And since time is relative, in this case based on the ratio of real time to simulated time (or the ratio between one simulated time and another), a character consciously suffering in the game could feel as if they’ve been suffering for months, when the god-like player has only felt several seconds pass. We’ve never had to morally evaluate these kinds of situations before, and we’re getting to a point where it’ll be imperative for us to do so.
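To make the months-versus-seconds point concrete, here is a minimal sketch of the arithmetic. The episode never specifies an actual time-compression ratio, so the million-to-one figure below is purely an assumed value for illustration:

```python
# Illustrative only: the episode gives no real ratio, so TIME_RATIO
# here is an assumption chosen to show the scale of the effect.

def simulated_duration(real_seconds: float, time_ratio: float) -> float:
    """Seconds of subjective (simulated) time elapsed per real-world interval,
    given a ratio of simulated seconds per real second."""
    return real_seconds * time_ratio

TIME_RATIO = 1_000_000       # assumed: one million simulated seconds per real second
real_seconds = 10            # the player looks away for ten real seconds

sim_seconds = simulated_duration(real_seconds, TIME_RATIO)
sim_days = sim_seconds / 86_400  # 86,400 seconds per day

print(f"{real_seconds} real seconds ≈ {sim_days:.0f} simulated days")
# ≈ 116 simulated days: nearly four months of subjective suffering
```

Under that assumed ratio, a ten-second pause for the player is almost four months for a character inside the game, which is exactly why the moral stakes of such a simulation scale so brutally with time compression.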
Someday, it’s very likely that we’ll be able to create an artificial form of intelligence that is conscious, and it’s up to us to initiate and maintain a public conversation that addresses how our ethical and moral systems will need to accommodate new forms of conscious moral agents.
We’ll also need to figure out how to incorporate a potentially superhuman level of consciousness into our moral frameworks, since these frameworks often have an internal hierarchy that is largely based on the degree or level of consciousness that we ascribe to other individuals and to other animals. If we give moral preference to a dog over a worm, and preference to a human over a dog (for example), then where would a being with superhuman consciousness fit within that framework? Most people certainly wouldn’t want these beings to be treated as gods, but we shouldn’t want to treat them like slaves either. If nothing else, they’ll need to be treated like people.
Technologies will almost always have that dual potential, where they can be used for achieving truly admirable goals and to enhance human well-being, or used to dominate others and to exacerbate human suffering. So we need to ask ourselves, given a future world where we have the capacity to make any simulated reality we desire, what kind of world do we want? What kind of world should we want? And what kind of person do you want to be in that world? Answering these questions should say a lot about our moral qualities, serving as a kind of window into the soul of each and every one of us.