“Primordial Bounds”

Animating forces danced in the abyss
From cosmic clouds did Helios arise
Upon the sands of Gaia shining bright
Warming seas to hold her in embrace
Effervescent stirrings in the depths below
From whence primordial bounds emerged
Tendril seeds could then begin to grow
Entropic channels out of chaos born
Spreading far and wide, were favored so
The spark of life ignited, on it goes
Destined imperfections bringing forth
Beauty and a mass of creatures flow

Senses born, interpretations formed
Rosetta stone with energetic tone
Neuronal trees to carve the world of one
Integrated symphony, the source of all divinity
Then the eye began to gaze within
Branches twisted, turned, and formed the self
Archetypal images and dreams to undergo
Ego and unconsciousness, a battle for the soul
Psyches feeding culture with the food of all the gods
Imagination, future selves, to suffer and rejoice
Cogitation flowing, individuation growing
‘Til cybernetic unity subsumes the human story

“Black Mirror” Reflections: U.S.S. Callister (S4, E1)

This Black Mirror reflection will explore season 4, episode 1, which is titled “U.S.S. Callister”.  You can click here to read my last Black Mirror reflection (season 3, episode 2: “Playtest”).  In U.S.S. Callister, we’re pulled into the life of Robert Daly (Jesse Plemons), the Chief Technical Officer at Callister Inc., a game development company that has produced a multiplayer simulated reality game called Infinity.  Within this game, users control a starship (an obvious homage to Star Trek), although Daly, the brilliant programmer behind this revolutionary game, has his own offline version of the game which has been modded to look like his favorite TV show Space Fleet, where Daly is the Captain of the ship.

We quickly learn that most of the employees at Callister Inc. don’t treat Daly very kindly, including his company’s co-founder James Walton (Jimmi Simpson).  Daly appears to be an overly passive, shy introvert.  During one of Daly’s offline gaming sessions at home, we come to find out that the Space Fleet characters aboard the starship look just like his fellow employees, and as Captain of his simulated crew, he indulges in berating them all.  Because Daly is Captain and effectively controls the game, he is rendered nearly omnipotent, and able to force his crew to perpetually bend to his will, lest they suffer immensely.

It turns out that these characters in the game are actually conscious, created from Daly having surreptitiously acquired his co-workers’ DNA and somehow replicated their consciousness and memories and uploaded them into the game (and any biologists or neurologists out there, let’s just forget for the moment that DNA isn’t complex enough to store this kind of neurological information).  At some point, a wrench is thrown into Daly’s deviant exploits when a new co-worker, programmer Nanette Cole (Cristin Milioti), is added to his game and manages to turn the crew against him.  Once the crew finds a backdoor means of communicating with the world outside the game, Daly’s world is turned upside down with a relatively satisfying ending chock-full of poetic justice, as his digitized, enslaved crew members manage to escape while he becomes trapped inside his own game as it’s being shut down and destroyed.

This episode is rife with a number of moral issues that build on one another, all deeply coupled with the ability to engineer a simulated reality (perceptual augmentation).  Virtual worlds carry a level of freedom that just isn’t possible in the real world, where one can behave in countless ways with little or no consequence, whether acting with beneficence or utter malice.  One can violate physical laws as well as prescriptive laws, opening up a new world of possibilities that are free to evolve without the feedback of social norms, legal statutes, and law enforcement.

People have long known about various ways of escaping social and legal norms through fiction and game playing, where one can imagine they are somebody else, living in a different time and place, and behave in ways they’d never even think of doing in the real world.

But what happens when they’re finished with the game and go back to the real world with all its consequences, social norms and expectations?  Doesn’t it seem likely that at least some of the behaviors cultivated in the virtual world will begin to rear their ugly heads in the real world?  One can plausibly argue that violent game playing is simply a form of psychological sublimation, where we release many of our irrational and violent impulses in a way that’s more or less socially acceptable.  But there’s also bound to be a difference between playing a very abstract game involving violence or murder, such as the classic board-game Clue, and playing a virtual reality game where your perceptions are as realistic as can be and you choose to murder some other character in cold blood.

Clearly in this episode, Daly was using the simulated reality as a means of releasing his anger and frustration, by taking it out on reproductions of his own co-workers.  And while a simulated experiential alternative could be healthy in some cases, in terms of its therapeutic benefit and better control over the consequences of the simulated interaction, we can see that Daly took advantage of his creative freedom, and wielded it to effectively fashion a parallel universe where he was free to become a psychopath.

It would already be troubling enough if Daly behaved as he did to virtual characters that were not actually conscious, because it would still show Daly pretending that they are conscious; a man who wants them to be conscious.  But the fact that Daly knows they are conscious makes him that much more sadistic.  He is effectively in the position of a god, given his powers over the simulated world and every conscious being trapped within it, and he has used these powers to generate a living hell (thereby also illustrating the technology’s potential to, perhaps one day, generate a living heaven).  But unlike the hell we hear about in myths and religious fables, this is an actual hell, where a person can suffer the worst fates imaginable (it is in fact only limited by the programmer’s imagination) such as experiencing the feeling of suffocation, yet unable to die and thus with no end in sight.  And since time is relative, in this case based on the ratio of real time to simulated time (or the ratio between one simulated time and another), a character consciously suffering in the game could feel as if they’ve been suffering for months, when the god-like player has only felt several seconds pass.  We’ve never had to morally evaluate these kinds of situations before, and we’re getting to a point where it’ll be imperative for us to do so.
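
To make the time-ratio point concrete, here’s a minimal sketch (in Python, with invented numbers purely for illustration) of how a simulated-to-real time ratio converts a few seconds of a player’s time into months of a character’s subjective time:

```python
# Illustrative only: convert real (player) time into subjective
# (simulated) time, given a hypothetical simulation speed-up ratio.

def subjective_seconds(real_seconds: float, time_ratio: float) -> float:
    """time_ratio = simulated seconds elapsed per real second."""
    return real_seconds * time_ratio

ratio = 1_000_000      # hypothetical: the sim runs a million times faster
player_time = 10.0     # the player feels only ten seconds pass

sim_seconds = subjective_seconds(player_time, ratio)
print(f"{sim_seconds / (60 * 60 * 24):.0f} subjective days")  # ~116 days
```

Under those made-up numbers, ten seconds at the player’s keyboard amounts to nearly four months of uninterrupted experience for a conscious character trapped inside.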

Someday, it’s very likely that we’ll be able to create an artificial form of intelligence that is conscious, and it’s up to us to initiate and maintain a public conversation that addresses how our ethical and moral systems will need to accommodate new forms of conscious moral agents.

Jude Law, Haley Joel Osment, Brendan Gleeson, and Brian Turk in A.I. Artificial Intelligence (2001)

We’ll also need to figure out how to incorporate a potentially superhuman level of consciousness into our moral frameworks, since these frameworks often have an internal hierarchy that is largely based on the degree or level of consciousness that we ascribe to other individuals and to other animals.  If we give moral preference to a dog over a worm, and preference to a human over a dog (for example), then where would a being with superhuman consciousness fit within that framework?  Most people certainly wouldn’t want these beings to be treated as gods, but we shouldn’t want to treat them like slaves either.  If nothing else, they’ll need to be treated like people.

Technologies will almost always have that dual potential, where they can be used for achieving truly admirable goals and to enhance human well being, or used to dominate others and to exacerbate human suffering.  So we need to ask ourselves, given a future world where we have the capacity to make any simulated reality we desire, what kind of world do we want?  What kind of world should we want?  And what kind of person do you want to be in that world?  Answering these questions should say a lot about our moral qualities, serving as a kind of window into the soul of each and every one of us.

“Black Mirror” Reflections: Playtest (S3, E2)

Black Mirror, the British science-fiction anthology series created by Charlie Brooker, does a pretty good job experimenting with a number of illuminating concepts that highlight how modern culture is becoming increasingly shaped by (and vulnerable to) various technological advances and changes.  I’ve been interested in a number of these concepts for many years now, not least because of the many important philosophical implications (including a number of moral issues) that they point to.  I’ve decided to start a blog post series that will explore the contents of these episodes.  My intention with this series, which I’m calling “Black Mirror” Reflections, will be to highlight some of my own takeaways from some of my favorite episodes.

I’d like to begin with season 3, episode 2, “Playtest”.  I’m just going to give a brief summary here as you can read the full episode summary in the link provided above.  In this episode, Cooper (Wyatt Russell) decides to travel around the world, presumably as a means of dealing with the recent death of his father, who died from early-onset Alzheimer’s.  After finding his way to London, his last destination before planning to return home to America, he runs into a problem with his credit card and bank account where he can’t access the money he needs to buy his plane ticket.

While he waits for his bank to fix the problem with his account, Cooper decides to earn some cash using an “Oddjobs” app, which provides him with a number of short-term job listings in the area, eventually leading him to “SaitoGemu,” a video game company looking for game testers to try a new kind of personalized horror game involving a (seemingly) minimally invasive brain implant procedure.  He briefly hesitates but, desperate for money and reasonably confident in its safety, he eventually consents to the procedure whereby the implant is intended to wire itself into the gamer’s brain, resulting in a form of perceptual augmentation and a semi-illusory reality.

The implant is designed to (among other things) scan your memories and learn what your worst fears are, in order to integrate these into the augmented perceptions, producing a truly individualized, and maximally frightening horror game experience.  Needless to say, at some point Cooper begins to lose track of what’s real and what’s illusory, and due to a malfunction, he’s unable to exit the game and he ends up going mad and eventually dying as a result of the implant unpredictably overtaking (and effectively frying) his brain.

There are a lot of interesting conceptual threads in this story, and the idea of perceptual augmentation is a particularly interesting theme that finds its way into a number of other Black Mirror episodes.  While holographic and VR-headset gaming technologies can produce their own form of augmented reality, perceptual augmentation carried out on a neurological level isn’t even in the same ballpark, having qualitative features that are far more advanced and which are more akin to those found in the Wachowskis’ The Matrix trilogy or Paul Verhoeven’s Total Recall.  Once the user is unable to distinguish between the virtual world and the external reality, with the simulator having effectively passed a kind of graphical version of the Turing Test, then one’s previous notion of reality is effectively shattered.  To me, this technological idea is one of the most awe-inspiring (yet sobering) ideas within the Black Mirror series.

The inability to discriminate between the two worlds means that both worlds are, for all practical purposes, equally “real” to the person experiencing them.  And the fact that one can’t tell the difference between such worlds ought to give a person pause to re-evaluate what it even means for something to be real.  If you doubt this, then just imagine if you were to find out one day that your entire life has really been the result of a computer simulation, created and fabricated by some superior intelligence living in a layer of reality above your own (we might even think of this being as a “god”).  Would this realization suddenly make your remembered experiences imaginary and meaningless?  Or would your experiences remain just as “real” as they’ve always been, even if they now have to be reinterpreted within a context that grounds them in another layer of reality?

To answer this question honestly, we ought to first realize that we’re likely fully confident that what we’re experiencing right now is reality, is real, is authentic, and is meaningful (just as Cooper was at some point in his gaming “adventure”).  And this seems to be at least a partial basis for how we define what is real, and how we differentiate the most vivid and qualitatively rich experiences from those we might call imaginary, illusory, or superficial.  If what we call reality is really just a set of perceptions, encompassing every conscious experience from the merely quotidian to those we deem to be extraordinary, would we really be justified in dismissing all of these experiences and their value to us if we were to discover that there’s a higher layer of reality, residing above the only reality we’ve ever known?

For millennia, humans have pondered over whether or not the world is “really” the way we see it, with perhaps the most rigorous examination of this metaphysical question undertaken by the German philosopher Immanuel Kant, with his dichotomy of the phenomenon and the noumenon (i.e. the way we see the world or something in the world versus the way the world or thing “really is” in itself, independent of our perception).  Even if we assume the noumenon exists, we can never know anything about the thing in itself, by our being fundamentally limited by our own perceptual categories and the way we subjectively interpret the world.  Similarly, we can never know for certain whether or not we’re in a simulation.

Looking at the situation through this lens, we can then liken the question of how to (re)define reality within the context of a simulation with the question of how to (re)define reality within the context of a world as it really is in itself, independent of our perception.  Both the possibility of our being in a simulation and the possibility of our perceptions stemming from an unknowable noumenal world could be true (and would likely be unfalsifiable), and yet we still manage to use and maintain a relatively robust conception and understanding of reality.  This leads me to conclude that reality is ultimately defined by pragmatic considerations (mostly those pertaining to our ability to make successful predictions and achieve our goals), and thus the possibility of our one day learning about a new, higher level of reality should merely add to our existing conception of reality, rather than completely negating it, even if it turns out to be incomplete.

Another interesting concept in this episode involves the basic design of the individualized horror game itself, where a computer can read your thoughts and memories, and then surmise what your worst fears are.  This is a technological feat that is possible in principle, and one with far-reaching implications that concern our privacy, safety, and autonomy.  Just imagine if such a power were unleashed by corporations, or by the governments owned by those corporations, to acquire whatever information they wanted from your mind, to find out how to most easily manipulate you in terms of what you buy, who you vote for, what you generally care about, or what you do any and every day of your life.  The Orwellian possibilities are endless.

Marketing firms (both corporatocratic and political) have already been making use of discoveries in psychology and neuroscience, finding new ways to more thoroughly exploit our cognitive biases to get us to believe and desire whatever will benefit them most.  If we add to this the future ability to read our thoughts and manipulate our perceptions (even if this is first implemented as a seemingly innocuous video game), we will have established a new means of mass surveillance, where we can each become a potential “camera” watching one another (a theme also highlighted in BM, S4E3: Crocodile), while simultaneously exposing our most private of thoughts, and transmitting them to some centralized database.  Once we reach these technological heights (it’s not a matter of if but when), depending on how it’s employed, we may find ourselves no longer having the freedom to lie or to keep a secret, nor the freedom of having any mental privacy whatsoever.

To be fair, we should realize that there are likely to be undeniable benefits in our acquiring these capacities (perceptual augmentation and mind-reading), such as making virtual paradises with minimal resources; finding new ways of treating phobias, PTSD, and other pathologies; giving us the power of telepathy and superhuman intelligence by connecting our brains to the cloud; and giving us the ability to design error-proof lie detectors and other vast enhancements in maximizing personal security and reducing crime.  But there are also likely to be enormous losses in personal autonomy, as our available “choices” are increasingly produced and constrained by automated algorithms; there are likely to be losses in privacy, and increasing difficulties in ascertaining what is true and what isn’t, since our minds will be vulnerable to artificially generated perceptions created by entities and institutions that want to deceive us.

Although we’ll constantly need to be on the lookout for these kinds of potential dangers as they arise, in the end, we may find ourselves inadvertently allowing these technologies to creep into our lives, one consumer product at a time.

Technology, Mass-Culture, and the Prospects of Human Liberation

Cultural evolution is arguably just as fascinating as biological evolution (if not more so), with new ideas and behaviors stemming from the same kinds of natural selective pressures that lead to new species along with their novel morphologies and capacities.  And as with biological evolution where it, in a sense, takes off on its own unbeknownst to the new organisms it produces and independent of the intentions they may have (with our species being the notable exception given our awareness of evolutionary history and our ever-growing control over genetics), so too cultural evolution takes off on its own, where cultural changes are made manifest through a number of causal influences that we’re largely unaware of, despite our having some conscious influence over this vastly transformative process.

Alongside these cultural changes, human civilizations have striven to find new means of manipulating nature and to better predict the causal structure that makes up our reality.  One unfortunate consequence of this is that, as history has shown us, within any particular culture’s time and place, people have a decidedly biased overconfidence in the perceived level of truth or justification for the status quo and their present world view (both on an individual and collective level).  Undoubtedly, the “group-think” or “herd mentality” that precipitates from our simply having social groups often reinforces this overconfidence, and this is so in spite of the fact that what actually influences a mass of people to believe certain things or to behave as they do is highly contingent, unstable, and amenable to irrational forms of persuasion including emotive, sensationalist propaganda that prey on our cognitive biases.

While we as a society have an unprecedented amount of control over the world around us, this type of control is perhaps best described as a system of bureaucratic organization and automated information processing that gives less and less individual autonomy, liberty, and basic freedom as it further expands its reach.  How much control do we as individuals really have in terms of the information we have access to, and given the implied picture of reality that is concomitant with this information in the way it’s presented to us?  How much control do we have in terms of the number of life trajectories and occupations made available to us, or what educational and socioeconomic resources we have access to given the particular family, culture, and geographical location we’re born and raised in?

As more layers of control have been added to our way of life and as certain criteria for organizational efficiency are continually implemented, our lives have become externally defined by increasing layers of abstraction, and our modes of existence are further separated cognitively and emotionally from an aesthetically and otherwise psychologically valuable sense of meaning and purpose.

While the Enlightenment slowly dragged our species, kicking and screaming, out of the theocratic, anti-intellectual epistemologies of the Medieval period of human history, the same forces that unearthed a long overdue appreciation for (and development of) rationality and technological progress unknowingly engendered a vulnerability to our misusing this newfound power.  There was an overcompensation of rationality when it was deployed to (justifiably) respond to the authoritarian dogmatism of Christianity and to the demonstrably unreliable nature of superstitious beliefs and of many of our intuitions.

This overcompensatory effect was in many ways accounted for, or anticipated within the dialectical theory of historical development as delineated by the German philosopher Georg Hegel, and within some relevant reformulations of this dialectical process as theorized by the German philosopher Karl Marx (among others).  Throughout history, we’ve had an endless clash of ideas whereby the prevailing worldviews are shown to be inadequate in some way, failing to account for some notable aspect of our perceived reality, or shown to be insufficient for meeting our basic psychological or socioeconomic needs.  With respect to any problem we’ve encountered, we search for a solution (or wait for one to present itself to us), and then we become overconfident in the efficacy of the solution.  Eventually we end up overgeneralizing its applicability, and then the pendulum swings too far the other way, thereby creating new problems in need of a solution, with this process seemingly repeating itself ad infinitum.

Despite the various woes of modernity, as explicated by the modern existentialist movement, it does seem that history, from a long-term perspective at least, has been moving in the right direction, not only with respect to our heightened capacity of improving our standard of living, but also in terms of the evolution of our social contracts and our conceptions of basic and universal human rights.  And we should be able to plausibly reconcile this generally positive historical trend with the Hegelian view of historical development, and the conflicts that arise in human history, by noting that we often seem to take one step backward followed by taking two steps forward in terms of our moral and epistemological progress.

Regardless of the progress we’ve made, we seem to be at a crucial point in our history where the same freedom-limiting authoritarian reach that plagued humanity (especially during the Middle Ages) has undergone a kind of morphogenesis, having been reinstantiated albeit in a different form.  The elements of authoritarianism have become built into the very structure of mass-culture, with an anti-individualistic corporatocracy largely mediating the flow of information throughout this mass-culture, and also mediating its evolution over time as it becomes more globalized, interconnected, and cybernetically integrated into our day-to-day lives.

Coming back to the kinds of parallels in biology that I opened up with, we can see human autonomy and our culture (ideas and behaviors) as having evolved in ways that are strikingly similar to the biological jump that life made long ago, where single-celled organisms eventually joined forces with one another to become multi-cellular.  This biological jump is analogous to the jump we made during the early onset of civilization, where we employed an increasingly complex distribution of labor and occupational specialization, allowing us to survive many more environmental hurdles than ever before.  Once civilization began, the spread of culture became much more effective for transmitting ideas both laterally within a culture and longitudinally from generation to generation, with this process heavily enhanced by our having adopted various forms of written language, allowing us to store and transmit information in much more robust ways, similar to genetic information storage and transfer via DNA, RNA, and proteins.

Although the single-celled bacterium or amoeba (for example) may be thought of as having more “autonomy” than a cell that is forcefully interconnected within a multi-cellular organism, we can see how the range of capacities available to single cells was far more limited before making the symbiotic jump, just as humans living before the onset of civilization had more “freedom” (at least of a certain type) and yet the number of possible life trajectories and experiences was minuscule when compared to a human living in a post-cultural world.  But once multi-cellular organisms began to form a nervous system and eventually a brain, the entire collection of cells making up an organism became ultimately subservient to a centralized form of executive power — just as humans have become subservient to the executive authority of the state or government (along with various social pressures of conformity).

And just as the fates of each cell in a multi-cellular organism became predetermined and predictable by its particular set of available resources and the specific information it received from neighboring cells, similarly our own lives are becoming increasingly predetermined and predictable by the socioeconomic resources made available to us and the information we’re given which constitutes our mass-culture.  We are slowly morphing from individual brains into something akin to individual neurons within a global brain of mass-consciousness and mass-culture, having our critical thinking skills and creative aspirations exchanged for rehearsed responses and docile expectations that maintain the status quo and continually transfer our autonomy to an oligarchic power structure.

We might wonder if this shift has been inevitable, possibly being yet another example of a “fractal pattern” recapitulated in sociological form out of the very same freely floating rationales that biological evolution has been making use of for eons.  In any case, it’s critically important that we become aware of this change, so we can try to actively achieve and effectively maintain the liberties and level of individual autonomy that we so highly cherish.  We ought to be thinking about what kinds of ways we can remain cognizant of, and critical of, our culture and its products; how we can reconcile or transform technological rationality and progress with a future world comprised of truly liberated individuals; and how to transform our corporatocratic capitalist society into one that is based on a mixed economy with a social safety net that even the wealthiest citizens would be content with living under, so as to maximize the actual creative freedom people have once their basic existential needs have been met.

Will unchecked capitalism, social-media, mass-media, and the false needs and epistemological bubbles they’re forming lead to our undoing and destruction?  Or will we find a way to rise above this technologically-induced setback, and take advantage of the opportunities it has afforded us, to make the world and our technology truly compatible with our human psychology?  Whatever the future holds for us, it is undoubtedly going to depend on how many of us begin to critically think about how we can seriously restructure our educational system and how we disseminate information, how we can re-prioritize and better reflect on what our personal goals ought to be, and also how we ought to identify ourselves as free and unique individuals.

Book Review: Niles Schwartz’s “Off the Map: Freedom, Control, and the Future in Michael Mann’s Public Enemies”

Elijah Davidson begins his foreword to Niles Schwartz’s book titled Off the Map: Freedom, Control, and the Future in Michael Mann’s Public Enemies with a reference to Herman Melville’s Moby Dick, where he mentions how that book was, among other things, about God.  While it wasn’t a spiritual text in any traditional sense of the term, it nevertheless pointed to the finitude of human beings, to our heavy reliance on one another, and highlighted the fact that the natural world we live in is entirely indifferent to our needs and desires at best if not outright threatening to our imperative of self-preservation.  Davidson also points out another theme present in Moby Dick, the idea that people corrupt institutions rather than the other way around—a theme that we’ll soon see as relevant to Michael Mann’s Public Enemies.  But beyond this human attribute of fallibility, and in some ways what reinforces its instantiation, is our incessant desire to find satisfaction in something greater than ourselves, often taken to be some kind of conception of God.  It is in this way that Davidson refers to Melville’s classic as “a spiritual treatise par excellence.”

When considering Mann’s films, which are often a cinematic dichotomous interplay of “cops and robbers” or “predator and prey,” they are, on a much more basic level, about “freedom and control.”  We also see his filmography as colored with a dejection of, or feeling of malaise with respect to, the modern world.  And here, Mann’s films can also be seen as spiritual in the sense that they make manifest a form of perspectivism centered around a denunciation of modernity, or at least a disdain for the many depersonalizing or externally imposed meta-narratives that it’s generated.  Schwartz explores this spiritual aspect of Mann’s work, most especially as it relates to PE, but also connecting it to the (more explicitly spiritual) works of directors like Terrence Malick (The Thin Red Line, The New World, The Tree of Life, to name a few).  Schwartz proceeds to give us his own understanding of what Mann had accomplished in PE, and this is despite there being a problematic, irreconcilable set of interpretations as he himself admits: “Interpreting Public Enemies is troubling because it has theological and philosophical precepts that are rife with contradictions.”  This should be entirely expected however when PE is considered within the broader context of Michael Mann’s work generally, for Mann’s entire milieu is formulated on paradox:

“Michael Mann is a director of contradictions: aesthetic and didactive, abstract and concrete, phantasmagorical and brutally tactile, expressionistic and anthropological, heralding the individual and demanding social responsibility, bold experimenter and studio genre administrator, romantic and futurist, Dillinger freedom seeker and Hoover control freak, outsider and insider.”

Regardless of this paradoxical nature that is ubiquitous throughout Mann’s filmography, Schwartz provides a light to take us through a poetic journey of Mann’s work and his vision, most especially through a meticulous view of PE, and all within a rich and vividly structured context centered on the significance of, and relation between, freedom, control, and of course, the future.  His reference to “the future” in OTM is multi-dimensional to say the least, and the one I personally find the most interesting, cohesive, and salient.  Among other things, it’s referring to John Dillinger’s hyper-focused projection toward the future, where his identity is much more defined by where he wants to be (an abstract, utopian future that is “off the map,” or free from the world of control and surveillance) rather than defined by where he’s been (fueled by the obvious fact that he’s always on the run), and this is so even if he also seems to be stuck in the present, with Dillinger’s phenomenology as well as the film’s structure often traversing from one fleeting moment to the next.  But I think we can take Dillinger at his own word as he tells his true love, Billie Frechette, after whisking her away to dine in a high-class restaurant: “That’s ‘cuz they’re all about where people come from.  The only thing that’s important, is where somebody’s going.” 

This conception of “the future” is also referencing the transcendent quality of being human, where our identity is likewise defined in large part by the future, our life projects, and our striving to become a future version of ourselves, however idealized or mythologized that future-self conception may be (I think it’s fair to say Dillinger’s was, to a considerable degree).  The future is a reference to where our society is heading, how our society is becoming increasingly automated, taken over by a celeritously expanding cybernetic infrastructure of control, evolving and expanding in parallel with technology and our internet-catalyzed global interconnectedness.  Our lives are being built upon increasing layers of abstraction as our form of organized life continues to trudge along its chaotic arc of cultural evolution, and we’re losing more and more of our personal freedom as a result of evermore external influences, operating on a number of different levels (socially, politically, technologically), known and unknown, consciously and unconsciously.  In PE, the expanding influences were best exemplified by the media, the mass-surveillance, and of course Hoover’s Bureau and administration, along with the arms race taking place between Dillinger’s crew and the FBI (where the former gave the latter a run for their money).

As Schwartz explains about J. Edgar’s overreach of power: “Hoover’s Bureau is increasingly amoral as it reaches for a kind of Hobbesian, sovereign super control.”  Here of course we get our first glimpse of a noteworthy dualism, namely freedom and control, and the myriad of ways that people corrupt institutions (as Davidson explored in his foreword), though contrary to Davidson’s claim, it seems undeniable that once an institution has become corrupted by certain people, that institution is more likely to corrupt other individuals, both internally and externally (and thus, institutions do corrupt individuals, not merely the other way around).  If bad ideas are engineered into our government’s structure, our laws, our norms, our capitalist market, or any other societal system, they can seemingly take off on their own, reinforced by the structure itself and the automated information processing that’s inherent to bureaucracies, the media, our social networks, and even inherent to us as individuals who are so often submerged in a barrage of information.

Relating Hoover to the power and influence of the media, Schwartz not only mentions the fact that PE is undoubtedly “conscientious of how media semiotics affect and control people,” but he also mentions a piece of dialogue that stood out to me as well, where FBI Director Hoover (Billy Crudup), having just come out of the Congressional Hearing Room after being chastised by Senator McKellar (Ed Bruce), says to his deputy, Clyde Tolson (Chandler Williams): “If we will not contest him in his committee room, we will fight him on the front page,” showing us a glimpse of the present day where news (whether “fake” news or not), and the overall sensationalism of a story, is shown to be incredibly powerful at manipulating the masses and profoundly altering our trajectory, one (believed) story at a time.  If you can get somebody to believe the story, whether based on truth or deception, the battle is half won already.  Even Dillinger himself, whose own persona is wrapped in a cloud of mythology, built up by Hollywood and the media’s portrayal of his life and image, shows us in a very concrete way just how far deception can get you.

For example, Schwartz reminds us of how Dillinger managed to escape through six doors of Crown Point jail with nothing other than a mock gun made of wood.  Well, nothing but a mock gun and a hell of a good performance, which is the key point here, since the gun was for all practical purposes real.  By the time Dillinger breaks into Warden Baker’s office to steal some Tommy guns before finishing his escape, Warden Baker (David Warshofsky) even says to him, “That wasn’t real, was it?”, which resonated with the idea of how powerful persuasion and illusion can be in our lives, and maybe indirectly shows us that what is real to us in the ways that matter most is defined by what’s salient to us in our raw experience and what we believe to be true, since that’s all that affects our behavior anyway.  The guards believed Dillinger’s mock gun was real, and so it was real, just as a false political campaign promise is real, or a bluffed winning-hand in poker, or any other force, whether operating under the pretenses of honesty or deception, pushing us individually and collectively in one direction or another.

The future is also a reference to Michael Mann’s 2015 cyberthriller Blackhat—a movie that Mann had been building up to, and a grossly underappreciated one at that, with various degrees of its foreshadowing in PE.  This progression and relation between PE and Blackhat is in fact central to Schwartz’s principal aim in OTM:

“My aim in this book is to explore Public Enemies as an extraordinary accomplishment against a backdrop of other digital films, its meditations on the form precipitating Blackhat, Mann’s stunning and widely ignored cyberthriller that converts the movie-house celluloid of its predecessors into a beguiling labyrinth of code that’s colonized the heretofore tangible firmament right under our noses.”

And what an extraordinary accomplishment it is; fortunately for us, we’re in a better position to appreciate it after reading Schwartz’s highly perceptive analysis of such a phenomenal artist.  In PE, the future is essentially projected into the past, which is interesting to me in its own right, since this is also the case with human memory, where we reconstruct our memories upon recall in ways that are augmented by, and amalgamated with, our present knowledge, despite not having such knowledge when the memory itself was first formed.  So too in PE, we see a number of questions posed especially as they relate to the existentialist movement, which hadn’t been well-developed or nearly as influential until some time after the 1930s (especially after WWII, with the advent of existential philosophers including Sartre, Camus, and Heidegger), and as they relate to the critical theory stemming from the Frankfurt School of social research (Marcuse, Adorno, Horkheimer, et al.), neither of which was nearly as pertinent or socially and politically relevant then as they are now in the present day.  And this is where Blackhat becomes decidedly apropos.

So what future world was foreshadowed in PE?  Schwartz describes the world in Blackhat as an utterly subliminal and cybernetic realm: “The world is pattern recognition and automatic information processing, the stuff of advertising.”  Though I would argue that our entire phenomenology is fundamentally based on pattern recognition where our perception, imagination, actions, and even desires are mediated by the models of the world’s causal structure that our brains create and infer through our experiences.  But this doesn’t mean that our general schema of pattern recognition hasn’t been influenced by modernity such that we’ve become noticeably more automated, and where many have seemingly lost their capacity for contemplative reflection and lost the historically less-hindered psychological channel between reason and emotion.  The Blackhat world Schwartz is describing here is one where the way information is being processed is relatively alien to the more traditional conceptions of what it means to be human.  And this cybernetic world is a world where cultural and technological evolution are accelerating far faster than our biological evolution ever could (though genetic engineering has the potential to re-synchronize the two if we dedicate ourselves to such an ambitious project), and this bio-cultural desynchronization has made us increasingly maladapted to the Digital Age we’ve now entered.  Furthermore, this new world has made us far more susceptible to subliminal, corporatocratic and sociopolitical influences, and it has driven us toward an increasing reliance on a cybernetically controlled way of life.

The social relevance of these conceptions makes Blackhat a much-needed lens for fully appreciating our current existential predicament, and as Schwartz says of Mann’s (perhaps unavoidably ironic) digital implementation of this somewhat polemical techno-thriller:

“Conversely, Mann’s embrace of the digital is a paradoxical realization of tactile historical and spatial phenomenology, lucidly picturing an end of identity, while leaping, as through faith, toward the possibility of individuation in nature, free from institutional conscriptions and the negative assignments of cybernetics.”

Schwartz illustrates that concomitant with identity, “… the film prompts us to ask where nature ends and the virtual begins,” though perhaps we could also infer that in Blackhat there’s somewhat of a dissolution of the boundary (or at least a feeling of arbitrariness regarding how the boundary is actually defined) between the real and the virtual, the natural and the artificial, the human and the transhuman.  And maybe each of these fuzzy boundaries implies how best to resolve the contradictions in Mann’s work that Schwartz describes in OTM, with this possible resolution coming about through a process of symbiotic fusion or some kind of memetic “speciation” transitionally connecting what seem to be distinct concepts through a brilliantly structured narrative.

And to take the speciation analogy even further, I think it can also be applied to the changes in filmmaking and culture (including the many changes that Schwartz covers in his tour de force), where many of the changes are happening so gradually that we simply fail to notice them, at least not until a threshold of change has occurred.  But there’s also a punctuated equilibrium form of speciation in filmmaking, where occasionally a filmmaker does something extraordinary in one fell swoop, setting the bar for a new genre of cinema, just as James Cameron arguably did with the heavily CGI-amalgamated world in Avatar (with his planet Pandora “doubling for the future of cinema [itself]…” as Schwartz mentions), and to a somewhat lesser technological extent, in Michael Mann’s PE, where even though the analog to digital leap had already been made by others, “Mann’s distinctly idiosyncratic use of HD cameras rattled viewers with its alien video-ness, explicating to viewers that they were perched on a separate filmic architecture that may require a new way of seeing.”

Similarly in Blackhat, Mann takes us through seamless transitions of multiple scales of both time and space, opening up a window that allows us to see how mechanized and deterministic our modern world is, from the highest cosmological scales, down to cities populated with an innumerable number of complex yet predictable humans, and finally down to the digital information processing schema at the micro and nanotechnological scales of transistors.  Within each perspective level, we fail to notice the causal influences operating at the levels above or below, and yet Mann seamlessly joins these together, showing us a kind of fractal recapitulation that we wouldn’t otherwise fully appreciate.  After reflecting on many of the references to freedom Schwartz posits in OTM, I’ve begun to more seriously ponder over the idea that human freedom is ultimately in the eye of the beholder, dependent on one’s awareness of what is being controlled by another, what is predictable and what isn’t, and one’s perception of what constitutes self-authored behavior or a causa sui formulation of free will.  Once we realize that, at the smallest scales, existence is governed by deterministic processes infused with quantum randomness, it is less surprising to see the same kind of organized, predictable causal structure at biological, sociological, and even cosmological scales.

Aside from Blackhat, Schwartz seamlessly ties together a number of Mann’s other films including Thief, Miami Vice, and my personal favorite, Heat.  There’s also a notable progression or at least an underlying and evolving conceptual structure connecting characters and ideas from one film to the next (above and beyond the transition from PE to Blackhat), as Schwartz eloquently points out, which I see as illustrating how various salient psychological and sociological forces and manifestations are so often reiterated in multiple contexts varying in time and space.  Clearly Mann is building off of his previous work, adapting previous ideas to new narratives, and doing so while continuously trying to use, as Schwartz puts it: “alchemic cinema tools to open a window and transform our perception,” thus giving us a new way to view the world and our own existential status.

An important dynamic that Schwartz mentions, not only as it pertains to Mann’s films, but of (especially well-crafted) films in general, is the notable interplay between the audience and the film or “the image.”  The image changes us, which in turn changes the image ad infinitum, establishing a feedback loop and a co-evolution between the viewer’s interpretation of the film as it relates to their own experiences, and the ongoing evolution of every new cinematic product.  There may be some indoctrinatory danger in this cycle if the movie you’re engaging with is a mass-produced product, since this product is, insofar as it’s mass-produced, deeply connected to “the system”, indeed the very same system trying to capture John Dillinger in PE.  And yet, even though the mass-produced product is a part of the system, and despite its being a cog in the wheel of what we might call a cybernetic infrastructure of control, Schwartz highlights an important potentiality in film viewing that is often taken for granted:

“…The staggering climax inside the Biograph beseeches us to aspire to the images conscientiously, reconciling the mass-produced product with our private histories and elevating the picture and our lives with the media.”

In other words, even in the case of mass-produced cinema, we as viewers stand to potentially gain a valuable experience, and possibly a spiritual or philosophical one at that, by forming a personal relationship with the film, synthesizing the external stimuli with our inner sense of self, coalescing with the characters and integrating them into our own relationships and to ourselves, thus expanding our set of perspectives by connecting to someone “off the map”.  On the other hand, Schwartz also mentions Herbert Marcuse’s views, which aptly describe the inherent problem of art insofar as it becomes a part of “the system”:

“…Herbert Marcuse writes that as long as art has become part of the system, it ceases in questioning it, and thus impedes social change.  Poetic language must transcend the “real” world of the society, and in order to transcend that world it must stand opposed to it, questioning it, quelling us out of it.”

But we must also realize that art will inevitably be influenced by the system, because there are simply too many subliminal or even explicit system-orchestrated ideas that the artist (and everyone else in society) has been instilled with, even if entirely unbeknownst to them.  It seems that the capacity for poetic language to transcend the real world of the society lies in its simply providing any new perspective at all, making use of allegory, metaphor, and the crossing of contextual boundaries, and it can do so even if this new perspective doesn’t necessarily or explicitly oppose some other (even mainstream) perspective.

This is definitely not meant to discount in any way Marcuse’s overarching point of how art’s connection to the system is a factor that limits its efficacy in enacting social change and its ability to positively feed the public’s collective stream of consciousness, but merely to point out that art’s propensity for transcending the status quo isn’t entirely inhibited by its unavoidable connection to the system.  And to once again bring us back to the scene in PE where Dillinger is fully absorbing (or being fully absorbed by) his viewing of Manhattan Melodrama in the Biograph theater, Schwartz seems to describe the artistic value of a film as something at least partially dependent on whether or not it facilitates a personal and transcendent experience in any of the viewers:

“Cinema is elevated to a ceremony of transubstantiation where fixed bodies are resurrected through the mercurial alchemy of speed and light, contradicting the consumption we saw earlier during the newsreel.  Dillinger’s connection to cinema is a meditative and private one…”

I think it’s fair to say that, if there’s anything that can elevate our own experience of Mann’s work in particular, and film viewing more generally, Schwartz’s Off the Map is a great philosophical and spiritual primer to help get us there.  Both comprehensive and brilliantly written, Schwartz’s contribution to film scholarship should become required reading for anybody interested in cinema and film viewing.

CGI, Movies and Truth…

Watching Rogue One: A Star Wars Story, which I liked, though not nearly as much as the original trilogy (Episodes IV, V, and VI), got me thinking more about something I hadn’t thought about since the most recent presidential election.  As I watched Grand Moff Tarkin and Princess Leia, both characters made possible in large part thanks to CGI (as well as the help of actors Guy Henry and Ingvild Deila), I realized that although this is still a long way away, it is inevitable that (barring a nuclear world war or some other catastrophe that kills us all or sets us back to a pre-industrialized age) the pace of this technology will eventually lead to CGI products that are completely indistinguishable from reality.

This means that eventually, the “fake news” issue that many have been making a lot of noise about as of late, will one day take a new and ugly turn for the worse.  Not only is video and graphic technology accelerating at a fairly rapid pace to exacerbate this problem, but similar concerns are also arising as a result of voice editing software.  By simply gathering several seconds of sample audio from a person of interest, various forms of software are getting better and better at synthesizing their speech in order to mimic them — putting whatever words into “their” mouths that one so desires.

The irony here is that despite the fact that we are going to continue becoming more and more globally interconnected and technologically advanced, and gain more global knowledge, it seems that we will eventually reach a point where each individual becomes less and less able to know what is true and what isn’t in all the places they’re electronically connected to.  One reason for this is that, as per the opening reference to Rogue One, it will become increasingly difficult to judge the veracity of videos that go viral on the internet and/or through news outlets.  We can imagine seeing a video (or a series of videos) released on the news and throughout the internet containing shocking events with real world leaders or other famous people, places, and so forth, events that could possibly start a civil or world war, alter one’s vote, or otherwise — but with the caveat that these events are entirely manufactured by some Machiavellian warmonger or power-seeking elite.

Pragmatically speaking, we must still live our lives trusting what we see in proportion to the evidence we have, thus believing ordinary claims with a higher degree of confidence than extraordinary ones.  We will still need to hold to the general rule of extraordinary claims requiring extraordinary evidence in order to meet their burden of proof.  But it will become more difficult to trust certain forms of evidence (including in a court of law), so we’ll have to take that into consideration, requiring a higher burden of proof for actions whose consequences would be more catastrophic if our assumptions or information turn out to be based on false evidence — once we are able to successfully pass some kind of graphical Turing Test.

This is by no means an endorsement of conspiracy theories generally, nor of any other anti-intellectual or dogmatic nonsense.  We don’t want people to start doubting everything they see, nor to start doubting everything they don’t WANT to see (which would be a proverbial buffet for our cognitive biases and the conspiracy theorists that make use of these epistemological flaws regularly), but we still need to take this dynamic technological factor into account to maintain a world view based on proper Bayesian reasoning.
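
To illustrate what proper Bayesian reasoning looks like here, consider a minimal sketch (with made-up probabilities, purely for illustration) of how cheap video fabrication should weaken the evidential force of a shocking viral video:

```python
# Illustrative Bayes update: how much should a shocking video shift
# our belief in the extraordinary event it depicts?

def posterior(prior: float, p_vid_if_true: float, p_vid_if_false: float) -> float:
    """P(event | video) via Bayes' theorem."""
    num = p_vid_if_true * prior
    return num / (num + p_vid_if_false * (1 - prior))

prior = 0.001  # extraordinary claims warrant low priors

# Before convincing CGI: fabricated videos of this quality are rare.
print(posterior(prior, p_vid_if_true=0.9, p_vid_if_false=0.001))  # ~0.47

# After a passed "graphical Turing Test": fabrication is cheap, so the
# very same video is far weaker evidence for the very same claim.
print(posterior(prior, p_vid_if_true=0.9, p_vid_if_false=0.1))    # ~0.009
```

The point is simply that the rational weight of video evidence depends on how easily such video can be faked, which is exactly the parameter that advancing CGI will change.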

On the brighter side of things, we are going to get to enjoy much of what the new CGI capabilities will bring to us, because movies and all visual entertainment are going to be revolutionized in many ways that will be worth celebrating, including our use of virtual reality generally (in the many forms that we do, and will continue to, consciously and rationally desire).  We just need to pay attention and exercise some careful moral deliberation as we develop these technologies.  Our brains simply didn’t evolve to easily avoid being duped by artificial realities like the ones we’re developing (we already get duped far too often within our actual reality), so we need to engineer our path forward in a way that will better safeguard us from our own cognitive biases so we can maximize our well being once this genie is out of the bottle.

Co-evolution of Humans & Artificial Intelligence

In my last post, I wrote a little bit about the concept of personal identity in terms of what some philosophers have emphasized and my take on it.  I wrote that post in response to an interesting blog post written by James DiGiovanna over at A Philosopher’s Take.  James has written another post related to the possible consequences of integrating artificial intelligence into our societal framework, but rather than discussing personal identity as it relates to artificial intelligence, he discussed how the advancements made in machine learning and so forth are leading to the future prospects of effective companion AI, or what he referred to as programmable friends.  The main point he raised in that post was the fact that programmable friends would likely have a very different relationship dynamic with us compared with our traditional (human) friends.  James also spoke about companion AI in terms of their also being laborers (as well as being friends), but for the purposes of this post I won’t discuss these laborer aspects of future companion AI (even if the labor aspect is what drives us to make companion AI in the first place).  I’ll be limiting my comments here to the friendship or social dynamic aspects only.  So what aspects of programmable AI should we start thinking about?

Well for one, we don’t currently have the ability to simply reprogram a friend to be exactly as we want them to be, in order to avoid conflicts entirely, to share every interest we have, etc., but rather there is a bit of a give-and-take relationship dynamic that we’re used to dealing with.  We learn new ways of behaving and looking at the world and even new ways of looking at ourselves when we have friendships with people that differ from us in certain ways.  Much of the expansion and beneficial evolution of our perspectives are the result of certain conflicts that arise between ourselves and our friends, where different viewpoints can clash against one another, often forcing a person to reevaluate their own position based on the value they place on the viewpoints of their friends.  If we could simply reprogram our friends, as in the case with some future AI companions, what would this do to our moral, psychological and intellectual growth?  There would be some positive effects I’m sure (from having less conflict in some cases and thus an increase in short term happiness), but we’d definitely be missing out on a host of interpersonal benefits that we gain from having the types of friendships that we’re used to having (and thus we’d likely have less overall happiness as a result).

We can also see where evolution ties into all of this: we evolved as a social species to interact with others who are more or less like us, so when we envision these possible future AI friendships, it should become obvious why certain problems would be inevitable, largely because of the incompatibility with our evolved social dynamic.  To be sure, some of these problems could be mitigated by accounting for them in the initial design of the companion AI.  In general, this could be done by making the AI more human-like in the first place, and this could be advertised as a kind of beneficial AI “social software package”, so that people looking to buy companion AI would be inclined to choose this version even if an entirely reprogrammable version were available.

Some features of a “social software package” could include a limit on the ways the AI could be reprogrammed, such that only very serious conflicts could be avoided through reprogramming, but not all conflicts.  The AI could also assign weights to certain opinions, just as we do, and be more assertive with regard to certain propositions.  Once the AI has learned its human counterpart's personality, values, and opinions, it could even be programmed to intentionally challenge that human by offering different points of view and by using the Socratic method (at least from time to time).  If people realized that they could gain wisdom, knowledge, tolerance, and various social aptitudes from their companion AI, I would think that would be a marked selling point.
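
None of this technology exists yet, of course, so purely to make the idea concrete, here's a toy sketch in Python of what reprogramming limits and weighted opinions might look like (every class name, threshold, and behavior here is invented for illustration, not a real product design):

```python
import random

class CompanionAI:
    """Toy sketch of a 'social software package': reprogramming is
    allowed only for severe conflicts, and opinions carry weights
    that make the AI push back rather than always yield."""

    REPROGRAM_THRESHOLD = 0.8  # only very serious conflicts qualify
    CHALLENGE_RATE = 0.25      # occasional Socratic prodding

    def __init__(self, opinions):
        # opinions: {topic: (position, weight)}, weight in [0, 1]
        self.opinions = opinions

    def reprogram(self, topic, new_position, conflict_severity):
        """Owner-requested change, permitted only above the threshold."""
        if conflict_severity < self.REPROGRAM_THRESHOLD:
            return f"Request denied; let's talk through {topic} instead."
        _, weight = self.opinions[topic]
        self.opinions[topic] = (new_position, weight)
        return f"Okay, I'll adopt a new stance on {topic}."

    def respond(self, topic, user_position):
        position, weight = self.opinions.get(topic, (None, 0.0))
        if position is not None and position != user_position and random.random() < weight:
            # Heavily weighted opinions are asserted, not abandoned.
            return f"I still hold '{position}' on {topic}, and here's why..."
        if random.random() < self.CHALLENGE_RATE:
            # The Socratic method, from time to time.
            return f"What would change your mind about {topic}?"
        return f"That's fair; tell me more about {topic}."

ai = CompanionAI({"politics": ("moderate", 0.9)})
print(ai.reprogram("politics", "agree with me", conflict_severity=0.3))
print(ai.respond("politics", "far to one side"))
```

The point of the sketch isn't the particular numbers; it's that the reprogramming interface itself would encode a value judgment about which conflicts are worth preserving.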

Another factor that I think will likely mitigate the possible social-dynamic clash between (programmable) companion AI and humans is that humans are also likely to become more and more integrated with AI technology themselves.  That is, as we continue to make advancements in AI technology, we are also likely to integrate a lot of that technology into ourselves, making humans more or less into cyborgs, i.e., cybernetic organisms.  If we look at the path we're already on, with all the smartphones, apps, and other gadgets and computer systems that have become extensions of ourselves, we can see that the next obvious step (which I've mentioned elsewhere, here and here) is to remove the external peripherals so that they are directly accessible via our consciousness, with no need to interface with external hardware.  If we can access “the cloud” with our minds (say, via Bluetooth or the like), then the apps and all the fancy features can become part of our minds, adding to the ways we are able to think and putting an internet's worth of knowledge at our cognitive disposal.  I could see this technology eventually allowing us to change our senses and perceptions, including the ability to add virtual objects amalgamated with the rest of the external reality we perceive (such as virtual friends that we see and interact with who aren't physically present outside of our minds, even though they appear to be).

So if we start to integrate these kinds of technologies into ourselves while we are also creating companion AI, then we may end up with the ability to reprogram ourselves alongside those programmable companions.  In effect, our own qualitatively human social dynamic may start to change markedly, becoming more and more compatible with that of future AI.  The way I think this will most likely play out is that we will try to make AI more like us as we try to make ourselves more like AI, co-evolving with one another, sharing advantages, and eventually becoming indistinguishable from one another.  Along this journey, however, we will also become very different from the way we are now, and after enough time passes, we'll likely change so much that we'd be unrecognizable to people living today.  My hope is that as we use AI to improve our intelligence and our knowledge of the world generally, we will also continue to improve our knowledge of what makes us happiest (as social creatures or otherwise), and thus of how to live the best and most morally fruitful lives that we can.  This will include improving our knowledge of social dynamics and the ways we can maximize all the interpersonal benefits therein.  Artificial intelligence may help us accomplish this, however paradoxical or counter-intuitive that may seem to us now.

Artificial Intelligence: A New Perspective on a Perceived Threat

There is a growing fear in many people of the future capabilities of artificial intelligence (AI), especially as the intelligence of these computing systems begins to approach that of human beings.  Since it is likely that AI will eventually surpass the intelligence of humans, some wonder if these advancements will be the beginning of the end of us.  Stephen Hawking, the eminent British physicist, was recently quoted by the BBC as saying “The development of full artificial intelligence could spell the end of the human race.”  BBC technology correspondent Rory Cellan-Jones said in a recent article “Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.”  Hawking then said “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Hawking isn't alone in this fear, and clearly the fear isn't ill-founded.  It doesn't take a rocket scientist to realize that human intelligence has allowed us to overcome just about every environmental barrier we've come across, driving us to the top of the food chain.  We've all seen the benefits of our high intelligence as a species, but we've also seen what can happen when that intelligence is imperfect, operates on incomplete or fallacious information, or lacks an adequate capability for accurately determining the long-term consequences of our actions.  Because we have such a remarkable ability to manipulate our environment, that manipulation can be extremely beneficial or extremely harmful, as we've seen with the numerous species we've threatened on this planet (some having gone extinct).  We've even threatened many fellow human beings in the process, whether intentionally or not.  Our intelligence, combined with elements of short-sightedness, selfishness, and aggression, has led to some pretty abhorrent products throughout human history, from the mass enslavement of others spanning back thousands of years to modern forms of extermination weaponry (e.g., bio-warfare and nuclear bombs).  If AI reaches and eventually surpasses our level of intelligence, it is reasonable to consider the possibility that we may find ourselves on a lower rung of the food chain (so to speak), potentially becoming enslaved or exterminated by this advanced intelligence.

AI: Friend or Foe?

So what exactly prompted Stephen Hawking to make these claims?  As the BBC article mentions, “His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI…The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.  Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.”
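
SwiftKey's production models are proprietary, but the core idea behind this kind of predictive text (learning which words a particular user tends to type after which) can be illustrated with a toy bigram model; this is a minimal sketch of the general technique, not their implementation:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the user's past text."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def suggest(follows, word, k=3):
    """Suggest the k words the user most often typed after `word`."""
    return [w for w, _ in follows[word.lower()].most_common(k)]

# Train on a tiny sample of the user's prior writing.
model = train_bigrams(
    "the universe is expanding and the universe is vast "
    "the theory is elegant and the theory is testable"
)
print(suggest(model, "the"))       # ['universe', 'theory']
print(suggest(model, "universe"))  # ['is']
```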

Reading this article suggests another possibility, or perspective, that I don't think a lot of people are considering with regard to AI technology.  What if AI simply replaces us gradually, by actually becoming the new “us”?  That is, as we further progress in cyborg (i.e., cybernetic organism) technologies, using advancements similar to Stephen Hawking's communication upgrade, we are ultimately becoming amalgams of biological and synthetic machines anyway.  Even the technology we currently operate through an external peripheral interface (like smartphones and all other computers) will likely become integrated into our bodies.  Google Glass, voice recognition, and other technologies like those used by Hawking are certainly taking us in that direction.  It's not difficult to imagine one day being able to think about a particular question or keyword, having an integrated Bluetooth implant in our brain recognize the mental/physiological command cue, and successfully retrieve the desired information wirelessly from an online cloud or internet database of some form.  Going further still, we will likely one day be able to take the sensory information that enters the neuronal network of our brain and, once again, send it wirelessly to supercomputers stored elsewhere that are able to process the information with billions of pattern recognition modules.  The 300 million or so pattern recognition modules currently in our brain's neocortex would be dwarfed by this new peripheral-free, wirelessly accessible technology.

For those who aren't very familiar with the function of the brain's neocortex: we use its 300 million or so pattern recognition modules to notice patterns in the environment around us (and meta-patterns of neuronal activity within the brain), which lets us recognize and memorize sensory data, and think.  Ultimately, we use this pattern recognition to accomplish goals, solve problems, and gain knowledge from past experience.  In short, these pattern recognition modules are the primary physiological means of our intelligence.  Thus, increasing the number of pattern recognition modules (as well as the number of hierarchies between different sets of them) should only increase our intelligence.  Regardless of whether we integrate computer chips into our brains to do some or all of this processing locally, or use wireless communication to an external supercomputer farm, we will likely continue to integrate our biology with synthetic analogs in order to increase our capabilities.
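
That hierarchical arrangement is doing a lot of the work in this picture.  As a loose illustration of “patterns of patterns” (a toy sketch of the idea only, not a model of the neocortex), consider how stacking simple recognizers lets a higher layer detect patterns built from the outputs of the layer below:

```python
# Toy illustration of hierarchical pattern recognition: each layer
# fires on patterns made from the outputs of the layer below it.
# (A loose sketch of the idea, not a model of the brain.)

LOW_LEVEL = {            # layer 1: recognize letters from stroke features
    ("|", "-"): "T",
    ("|", "|", "-"): "H",
    ("(", ")"): "O",
}

HIGH_LEVEL = {           # layer 2: recognize words from letter sequences
    ("T", "H", "E"): "the",
    ("H", "O", "T"): "hot",
}

def recognize(patterns, sequence):
    """Fire the recognizer whose stored pattern matches the input."""
    return patterns.get(tuple(sequence))

strokes = [["|", "|", "-"], ["(", ")"], ["|", "-"]]
letters = [recognize(LOW_LEVEL, s) for s in strokes]   # ['H', 'O', 'T']
word = recognize(HIGH_LEVEL, letters)                  # 'hot'
print(letters, "->", word)
```

Adding more modules widens each layer, and adding more layers lets the system recognize ever more abstract patterns, which is why the numbers matter here.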

When we realize that higher intelligence allows us to better predict the consequences of our actions, we can see that our increasing integration with AI will likely have incredible survival benefits.  This integration will also catalyze further technologies that could never have been accomplished with biological brains alone, because we simply aren't naturally intelligent enough to think beyond a certain level of complexity.  As Hawking said regarding AI that could surpass our intelligence, “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”  Yes, but if that AI becomes integrated into us, then really it is humans who are finding a way to circumvent slow biological evolution with a non-biological substrate that supersedes it.

At this point it is relevant to mention something else I've written about previously: the advancements being made in genetic engineering, which are taking us into our last and grandest evolutionary leap, a “conscious evolution”, whereby we circumvent our own slow biological evolution through intentionally engineered design.  As we gain more knowledge in the field of genetic engineering (combined with the increasing simulation and computing power afforded by AI), we will likely be able to catalyze our own biological evolution such that we evolve quickly even as we increase our cyborg integrations with AI.  So we will likely see genetic engineering capabilities developing in close parallel with AI advancements, with each field substantially contributing to the other and ultimately leading to our transhumanism.

Final Thoughts

It seems clear that advancements in AI are providing us with more tools to accomplish ever-more complex goals as a species.  As we continue to integrate AI into ourselves, what we now call “human” is simply going to change as we change.  This would happen regardless, as human biological evolution continues its course into another speciation event, similar to the one that led to humans in the first place.  In fact, if we wanted to maintain the way we are right now as a species, biologically speaking, it would actually require us to use genetic engineering to do so, because genetic differentiation mechanisms (e.g., imperfect DNA replication and mutations) are inherent in our biology.  Thus, those who argue against certain technologies out of a desire to preserve humanity and human attributes must realize that the very technologies they despise are in fact their only hope for doing so.  More importantly, the goal of maintaining what humans currently are goes against our natural evolution, and we should embrace change, even if we embrace it with caution.

If AI continues to become further integrated into ourselves, forming a cyborg amalgam of some kind, then as it advances we may one day choose to eliminate the biological substrate of that amalgam entirely, if it is both possible and advantageous to do so.  Even if we maintain some of our biology and merely hybridize with AI, Hawking was right to point out that “The development of full artificial intelligence could spell the end of the human race.”  Although, rather than a doomsday scenario like the one in The Terminator, with humans and machines at war with one another, the end of the human race may simply mean that we end up changing into a different species, just as we've done throughout our evolutionary history.  Only this time, it will be a transition from purely biological evolution to a cybernetic hybrid variation.  Furthermore, if it happens, it will be a transition that will likely increase our intelligence (and many other capabilities) to unfathomable levels, giving us an unprecedented ability to act with more knowledge of the consequences of our actions as we move forward.  We should be cautious indeed, but we should also embrace our ongoing evolution and eventual transhumanism.