“The Book”

Who am I? What am I worth?
Shall I consult the good book?
Grasping for likes, shares, and pokes
Torn asunder by algorithms
The new social realm is but a joke
Psyches disrupted, mental schisms

Why meet face-to-face
When I can live in abstraction?
Brave new worlds of Silicon Valley
Filled with scores of meaningless bullshit
Dopamine released as likes are tallied
Echo chambers robbing the human spirit

How many true friends lie in this realm?
‘Tis but a tiny fraction of that number
That number on display for all to see
Most couldn’t care less if I’m there or not
That number is but a social formality
As ones and zeroes mark the spot

Who am I? What am I worth?
Shall I consult the good book?
Brewing political and cultural troubles
Treating people as if a commodity
Trapped in its ideological bubbles
The digitization of our humanity

“Black Mirror” Reflections: U.S.S. Callister (S4, E1)

This Black Mirror reflection will explore season 4, episode 1, titled “U.S.S. Callister”.  You can click here to read my last Black Mirror reflection (season 3, episode 2: “Playtest”).  In U.S.S. Callister, we’re pulled into the life of Robert Daly (Jesse Plemons), the Chief Technical Officer at Callister Inc., a game development company that has produced a multiplayer simulated reality game called Infinity.  Within this game, users control a starship (an obvious homage to Star Trek), although Daly, the brilliant programmer behind this revolutionary game, has his own offline version of the game, modded to look like his favorite TV show Space Fleet, in which he is the Captain of the ship.

We quickly learn that most of the employees at Callister Inc. don’t treat Daly very kindly, including his company’s co-founder James Walton (Jimmi Simpson).  Daly appears to be an overly passive, shy introvert.  During one of Daly’s offline gaming sessions at home, we come to find out that the Space Fleet characters aboard the starship look just like his fellow employees, and as Captain of his simulated crew, he indulges in berating them all.  Because Daly is Captain and effectively controls the game, he is rendered nearly omnipotent, able to force his crew to perpetually bend to his will lest they suffer immensely.

It turns out that these characters in the game are actually conscious, created from Daly having surreptitiously acquired his co-workers’ DNA and somehow replicated their consciousness and memories and uploaded them into the game (and any biologists or neurologists out there, let’s just forget for the moment that DNA isn’t complex enough to store this kind of neurological information).  At some point, a wrench is thrown into Daly’s deviant exploits when a new co-worker, programmer Nanette Cole (Cristin Milioti), is added to his game and manages to turn the crew against him.  Once the crew finds a backdoor means of communicating with the world outside the game, Daly’s world is turned upside down, with a relatively satisfying ending chock-full of poetic justice: his digitized, enslaved crew members manage to escape while he becomes trapped inside his own game as it’s being shut down and destroyed.

[Image: Daly stuck in the game]

This episode is rife with moral issues that build on one another, all deeply coupled with the ability to engineer a simulated reality (perceptual augmentation).  Virtual worlds carry a level of freedom that just isn’t possible in the real world: one can behave in countless ways with little or no consequence, whether acting with beneficence or utter malice.  One can violate physical laws as well as prescriptive laws, opening up a new world of possibilities that are free to evolve without the feedback of social norms, legal statutes, and law enforcement.

People have long known about various ways of escaping social and legal norms through fiction and game playing, where one can imagine being somebody else, living in a different time and place, and behaving in ways they’d never even think of doing in the real world.

But what happens when they’re finished with the game and go back to the real world with all its consequences, social norms, and expectations?  Doesn’t it seem likely that at least some of the behaviors cultivated in the virtual world will begin to rear their ugly heads in the real world?  One can plausibly argue that violent game playing is simply a form of psychological sublimation, where we release many of our irrational and violent impulses in a way that’s more or less socially acceptable.  But there’s also bound to be a difference between playing a very abstract game involving violence or murder, such as the classic board game Clue, and playing a virtual reality game where your perceptions are as realistic as can be and you choose to murder some other character in cold blood.

Clearly in this episode, Daly was using the simulated reality as a means of releasing his anger and frustration, by taking it out on reproductions of his own co-workers.  And while a simulated experiential alternative could be healthy in some cases, in terms of its therapeutic benefit and better control over the consequences of the simulated interaction, we can see that Daly took advantage of his creative freedom, and wielded it to effectively fashion a parallel universe where he was free to become a psychopath.

It would already be troubling enough if Daly behaved as he did toward virtual characters that were not actually conscious, because it would still show Daly pretending that they are conscious, a man who wants them to be conscious.  But the fact that Daly knows they are conscious makes him that much more sadistic.  He is effectively in the position of a god, given his powers over the simulated world and every conscious being trapped within it, and he has used these powers to generate a living hell (thereby also illustrating the technology’s potential to, perhaps one day, generate a living heaven).  But unlike the hell we hear about in myths and religious fables, this is an actual hell, where a person can suffer the worst fates imaginable (limited, in fact, only by the programmer’s imagination), such as experiencing the feeling of suffocation, yet remaining unable to die and thus with no end in sight.  And since time is relative, in this case based on the ratio of real time to simulated time (or the ratio between one simulated time and another), a character consciously suffering in the game could feel as if they’ve been suffering for months, when the god-like player has only felt several seconds pass.  We’ve never had to morally evaluate these kinds of situations before, and we’re getting to a point where it’ll be imperative for us to do so.
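
To put a rough number on that disparity, here’s a minimal back-of-the-envelope sketch in Python; the million-to-one speed-up factor is a purely hypothetical assumption for illustration, not something the episode specifies:

    # Hypothetical illustration of simulated-time dilation: if a simulation runs
    # "speedup" times faster than real time, a conscious character inside it
    # experiences real_seconds * speedup seconds of subjective time.
    def subjective_seconds(real_seconds: float, speedup: float) -> float:
        return real_seconds * speedup

    SPEEDUP = 1_000_000   # assumed speed-up factor (purely hypothetical)
    REAL_ELAPSED = 10     # the player steps away for ten real seconds

    sim_days = subjective_seconds(REAL_ELAPSED, SPEEDUP) / 86_400
    print(f"{REAL_ELAPSED} real seconds is roughly {sim_days:.0f} simulated days")
    # prints: 10 real seconds is roughly 116 simulated days

Even at far more modest ratios, this asymmetry between the player’s perspective and the simulated character’s is part of what makes this kind of captivity so morally fraught.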

Someday, it’s very likely that we’ll be able to create an artificial form of intelligence that is conscious, and it’s up to us to initiate and maintain a public conversation that addresses how our ethical and moral systems will need to accommodate new forms of conscious moral agents.

[Image: Jude Law, Haley Joel Osment, Brendan Gleeson, and Brian Turk in A.I. Artificial Intelligence (2001)]

We’ll also need to figure out how to incorporate a potentially superhuman level of consciousness into our moral frameworks, since these frameworks often have an internal hierarchy that is largely based on the degree or level of consciousness that we ascribe to other individuals and to other animals.  If we give moral preference to a dog over a worm, and preference to a human over a dog (for example), then where would a being with superhuman consciousness fit within that framework?  Most people certainly wouldn’t want these beings to be treated as gods, but we shouldn’t want to treat them like slaves either.  If nothing else, they’ll need to be treated like people.

Technologies will almost always have that dual potential: they can be used to achieve truly admirable goals and enhance human well-being, or to dominate others and exacerbate human suffering.  So we need to ask ourselves, given a future world where we have the capacity to make any simulated reality we desire, what kind of world do we want?  What kind of world should we want?  And what kind of person do you want to be in that world?  Answering these questions should say a lot about our moral qualities, serving as a kind of window into the soul of each and every one of us.

“Black Mirror” Reflections: Playtest (S3, E2)

[Image: Cooper]

Black Mirror, the British science-fiction anthology series created by Charlie Brooker, does a pretty good job experimenting with a number of illuminating concepts that highlight how modern culture is becoming increasingly shaped by (and vulnerable to) various technological advances and changes.  I’ve been interested in a number of these concepts for many years now, not least because of the many important philosophical implications (including a number of moral issues) that they point to.  So I’ve decided to start a blog post series, which I’m calling “Black Mirror” Reflections, where I’ll highlight some of my own takeaways from some of my favorite episodes.

I’d like to begin with season 3, episode 2, “Playtest”.  I’m just going to give a brief summary here, as you can read the full episode summary in the link provided above.  In this episode, Cooper (Wyatt Russell) decides to travel around the world, presumably as a means of dealing with the recent death of his father, who died from early-onset Alzheimer’s.  After finding his way to London, his last destination before planning to return home to America, he runs into a problem with his credit card and bank account that leaves him unable to access the money he needs to buy his plane ticket.

While he waits for his bank to fix the problem with his account, Cooper decides to earn some cash using an “Oddjobs” app, which provides him with a number of short-term job listings in the area, eventually leading him to “SaitoGemu,” a video game company looking for game testers to try a new kind of personalized horror game involving a (seemingly) minimally invasive brain implant procedure.  He briefly hesitates but, desperate for money and reasonably confident in its safety, he eventually consents to the procedure whereby the implant is intended to wire itself into the gamer’s brain, resulting in a form of perceptual augmentation and a semi-illusory reality.

[Image: Cooper going mad]

The implant is designed to (among other things) scan your memories and learn what your worst fears are, in order to integrate these into the augmented perceptions, producing a truly individualized and maximally frightening horror game experience.  Needless to say, at some point Cooper begins to lose track of what’s real and what’s illusory, and due to a malfunction, he’s unable to exit the game; he ends up going mad and eventually dying as a result of the implant unpredictably overtaking (and effectively frying) his brain.

[Image: nanobots in the brain]

There are a lot of interesting conceptual threads in this story, and the idea of perceptual augmentation is a particularly interesting theme that finds its way into a number of other Black Mirror episodes.  While holographic and VR-headset gaming technologies can produce their own form of augmented reality, perceptual augmentation carried out on a neurological level isn’t even in the same ballpark, having qualitative features that are far more advanced and more akin to those found in the Wachowskis’ The Matrix trilogy or Paul Verhoeven’s Total Recall.  Once the user is unable to distinguish between the virtual world and the external reality, with the simulator having effectively passed a kind of graphical version of the Turing Test, one’s previous notion of reality is shattered.  To me, this technological idea is one of the most awe-inspiring (yet sobering) ideas within the Black Mirror series.

The inability to discriminate between the two worlds means that both worlds are, for all practical purposes, equally “real” to the person experiencing them.  And the fact that one can’t tell the difference between such worlds ought to give a person pause to re-evaluate what it even means for something to be real.  If you doubt this, then just imagine if you were to find out one day that your entire life has really been the result of a computer simulation, created and fabricated by some superior intelligence living in a layer of reality above your own (we might even think of this being as a “god”).  Would this realization suddenly make your remembered experiences imaginary and meaningless?  Or would your experiences remain just as “real” as they’ve always been, even if they now have to be reinterpreted within a context that grounds them in another layer of reality?

To answer this question honestly, we ought to first realize that we’re likely fully confident that what we’re experiencing right now is reality, is real, is authentic, and is meaningful (just as Cooper was at some point in his gaming “adventure”).  And this seems to be at least a partial basis for how we define what is real, and how we differentiate the most vivid and qualitatively rich experiences from those we might call imaginary, illusory, or superficial.  If what we call reality is really just a set of perceptions, encompassing every conscious experience from the merely quotidian to those we deem to be extraordinary, would we really be justified in dismissing all of these experiences and their value to us if we were to discover that there’s a higher layer of reality, residing above the only reality we’ve ever known?

For millennia, humans have pondered over whether or not the world is “really” the way we see it, with perhaps the most rigorous examination of this metaphysical question undertaken by the German philosopher Immanuel Kant, via his dichotomy of the phenomenon and the noumenon (i.e. the way we see the world or something in the world versus the way the world or thing “really is” in itself, independent of our perception).  Even if we assume the noumenon exists, we can never know anything about the thing in itself, since we are fundamentally limited by our own perceptual categories and the way we subjectively interpret the world.  Similarly, we can never know for certain whether or not we’re in a simulation.

Looking at the situation through this lens, we can then liken the question of how to (re)define reality within the context of a simulation to the question of how to (re)define reality within the context of a world as it really is in itself, independent of our perception.  Both the possibility of our being in a simulation and the possibility of our perceptions stemming from an unknowable noumenal world could be true (and would likely be unfalsifiable), and yet we still manage to use and maintain a relatively robust conception and understanding of reality.  This leads me to conclude that reality is ultimately defined by pragmatic considerations (mostly those pertaining to our ability to make successful predictions and achieve our goals), and thus the possibility of our one day learning about a new, higher level of reality should merely add to our existing conception of reality, rather than completely negating it, even if it turns out to be incomplete.

Another interesting concept in this episode involves the basic design of the individualized horror game itself, where a computer can read your thoughts and memories, and then surmise what your worst fears are.  This is a technological feat that is possible in principle, and one with far-reaching implications that concern our privacy, safety, and autonomy.  Just imagine if such a power were unleashed by corporations, or the governments owned by those corporations, to acquire whatever information they wanted from your mind, to find out how to most easily manipulate you in terms of what you buy, who you vote for, what you generally care about, or what you do any and every day of your life.  The Orwellian possibilities are endless.

Marketing firms (both corporatocratic and political) have already been making use of discoveries in psychology and neuroscience, finding new ways to more thoroughly exploit our cognitive biases to get us to believe and desire whatever will benefit them most.  Add to this the future ability to read our thoughts and manipulate our perceptions (even if this is first implemented as a seemingly innocuous video game), and we will have established a new means of mass surveillance, where we can each become a potential “camera” watching one another (a theme also highlighted in BM, S4E3: Crocodile), while simultaneously exposing our most private thoughts and transmitting them to some centralized database.  Once we reach these technological heights (it’s not a matter of if but when), depending on how they’re employed, we may find ourselves no longer having the freedom to lie or to keep a secret, nor any mental privacy whatsoever.

To be fair, we should realize that there are likely to be undeniable benefits in our acquiring these capacities (perceptual augmentation and mind-reading), such as making virtual paradises with minimal resources; finding new ways of treating phobias, PTSD, and other pathologies; giving us the power of telepathy and superhuman intelligence by connecting our brains to the cloud; and giving us the ability to design error-proof lie detectors and other vast enhancements in maximizing personal security and reducing crime.  But there are also likely to be enormous losses in personal autonomy, as our available “choices” are increasingly produced and constrained by automated algorithms; there are likely to be losses in privacy, and increasing difficulties in ascertaining what is true and what isn’t, since our minds will be vulnerable to artificially generated perceptions created by entities and institutions that want to deceive us.

Although we’ll constantly need to be on the lookout for these kinds of potential dangers as they arise, in the end, we may find ourselves inadvertently allowing these technologies to creep into our lives, one consumer product at a time.

Technology, Mass-Culture, and the Prospects of Human Liberation

Cultural evolution is arguably just as fascinating as biological evolution (if not more so), with new ideas and behaviors stemming from the same kinds of natural selective pressures that lead to new species along with their novel morphologies and capacities.  And as with biological evolution where it, in a sense, takes off on its own unbeknownst to the new organisms it produces and independent of the intentions they may have (with our species being the notable exception given our awareness of evolutionary history and our ever-growing control over genetics), so too cultural evolution takes off on its own, where cultural changes are made manifest through a number of causal influences that we’re largely unaware of, despite our having some conscious influence over this vastly transformative process.

Alongside these cultural changes, human civilizations have striven to find new means of manipulating nature and of better predicting the causal structure that makes up our reality.  One unfortunate consequence of this is that, as history has shown us, within any particular culture’s time and place, people have a decidedly biased overconfidence in the perceived level of truth or justification for the status quo and their present worldview (both on an individual and a collective level).  Undoubtedly, the “group-think” or “herd mentality” that precipitates from our simply having social groups often reinforces this overconfidence, and this is so in spite of the fact that what actually influences a mass of people to believe certain things or to behave as they do is highly contingent, unstable, and amenable to irrational forms of persuasion, including emotive, sensationalist propaganda that preys on our cognitive biases.

While we as a society have an unprecedented amount of control over the world around us, this type of control is perhaps best described as a system of bureaucratic organization and automated information processing that grants less and less individual autonomy, liberty, and basic freedom as it further expands its reach.  How much control do we as individuals really have over the information we have access to, and over the implied picture of reality that comes along with this information in the way it’s presented to us?  How much control do we have over the number of life trajectories and occupations made available to us, or over the educational and socioeconomic resources we have access to, given the particular family, culture, and geographical location we’re born and raised in?

As more layers of control have been added to our way of life and as certain criteria for organizational efficiency are continually implemented, our lives have become externally defined by increasing layers of abstraction, and our modes of existence are further separated cognitively and emotionally from an aesthetically and otherwise psychologically valuable sense of meaning and purpose.

While the Enlightenment slowly dragged our species, kicking and screaming, out of the theocratic, anti-intellectual epistemologies of the Medieval period of human history, the same forces that unearthed a long overdue appreciation for (and development of) rationality and technological progress unknowingly engendered a vulnerability to our misusing this newfound power.  There was an overcompensation of rationality when it was deployed to (justifiably) respond to the authoritarian dogmatism of Christianity and to the demonstrably unreliable nature of superstitious beliefs and of many of our intuitions.

This overcompensatory effect was in many ways accounted for, or anticipated, within the dialectical theory of historical development as delineated by the German philosopher Georg Hegel, and within some relevant reformulations of this dialectical process as theorized by Karl Marx (among others).  Throughout history, we’ve had an endless clash of ideas whereby the prevailing worldviews are shown to be inadequate in some way, failing to account for some notable aspect of our perceived reality, or shown to be insufficient for meeting our basic psychological or socioeconomic needs.  With respect to any problem we’ve encountered, we search for a solution (or wait for one to present itself to us), and then we become overconfident in the efficacy of the solution.  Eventually we end up overgeneralizing its applicability, and then the pendulum swings too far the other way, thereby creating new problems in need of a solution, with this process seemingly repeating itself ad infinitum.

Despite the various woes of modernity, as explicated by the modern existentialist movement, it does seem that history, from a long-term perspective at least, has been moving in the right direction, not only with respect to our heightened capacity for improving our standard of living, but also in terms of the evolution of our social contracts and our conceptions of basic and universal human rights.  And we should be able to plausibly reconcile this generally positive historical trend with the Hegelian view of historical development, and the conflicts that arise in human history, by noting that we often seem to take one step backward followed by two steps forward in terms of our moral and epistemological progress.

Regardless of the progress we’ve made, we seem to be at a crucial point in our history where the same freedom-limiting authoritarian reach that plagued humanity (especially during the Middle Ages) has undergone a kind of morphogenesis, having been reinstantiated albeit in a different form.  The elements of authoritarianism have become built into the very structure of mass-culture, with an anti-individualistic corporatocracy largely mediating the flow of information throughout this mass-culture, and also mediating its evolution over time as it becomes more globalized, interconnected, and cybernetically integrated into our day-to-day lives.

Coming back to the kinds of parallels in biology that I opened up with, we can see human autonomy and our culture (ideas and behaviors) as having evolved in ways that are strikingly similar to the biological jump that life made long ago, where single-celled organisms eventually joined forces with one another to become multi-cellular.  This biological jump is analogous to the jump we made during the early onset of civilization, where we employed an increasingly complex distribution of labor and occupational specialization, allowing us to survive many more environmental hurdles than ever before.  Once civilization began, the spread of culture became much more effective for transmitting ideas both laterally within a culture and longitudinally from generation to generation, with this process heavily enhanced by our having adopted various forms of written language, allowing us to store and transmit information in much more robust ways, similar to genetic information storage and transfer via DNA, RNA, and proteins.

Although the single-celled bacterium or amoeba (for example) may be thought of as having more “autonomy” than a cell that is forcefully interconnected within a multi-cellular organism, we can see how the range of capacities available to single cells was far more limited before making the symbiotic jump, just as humans living before the onset of civilization had more “freedom” (at least of a certain type) and yet the number of possible life trajectories and experiences was minuscule when compared to a human living in a post-cultural world.  But once multi-cellular organisms began to form a nervous system and eventually a brain, the entire collection of cells making up an organism became ultimately subservient to a centralized form of executive power — just as humans have become subservient to the executive authority of the state or government (along with various social pressures of conformity).

And just as the fate of each cell in a multi-cellular organism became predetermined and predictable by its particular set of available resources and the specific information it received from neighboring cells, similarly our own lives are becoming increasingly predetermined and predictable by the socioeconomic resources made available to us and the information we’re given, which constitutes our mass-culture.  We are slowly morphing from individual brains into something akin to individual neurons within a global brain of mass-consciousness and mass-culture, having our critical thinking skills and creative aspirations exchanged for rehearsed responses and docile expectations that maintain the status quo and continually transfer our autonomy to an oligarchic power structure.

We might wonder if this shift has been inevitable, possibly being yet another example of a “fractal pattern” recapitulated in sociological form out of the very same freely floating rationales that biological evolution has been making use of for eons.  In any case, it’s critically important that we become aware of this change, so we can try to actively achieve and effectively maintain the liberties and level of individual autonomy that we so highly cherish.  We ought to be thinking about the ways we can remain cognizant of, and critical of, our culture and its products; how we can reconcile or transform technological rationality and progress with a future world composed of truly liberated individuals; and how to transform our corporatocratic capitalist society into one that is based on a mixed economy with a social safety net that even the wealthiest citizens would be content living under, so as to maximize the actual creative freedom people have once their basic existential needs have been met.

Will unchecked capitalism, social media, mass media, and the false needs and epistemological bubbles they’re forming lead to our undoing and destruction?  Or will we find a way to rise above this technologically-induced setback, and take advantage of the opportunities it has afforded us, to make the world and our technology truly compatible with our human psychology?  Whatever the future holds for us, it is undoubtedly going to depend on how many of us begin to critically think about how we can seriously restructure our educational system and how we disseminate information, how we can re-prioritize and better reflect on what our personal goals ought to be, and also how we ought to identify ourselves as free and unique individuals.

Book Review: Niles Schwartz’s “Off the Map: Freedom, Control, and the Future in Michael Mann’s Public Enemies”

Elijah Davidson begins his foreword to Niles Schwartz’s book Off the Map: Freedom, Control, and the Future in Michael Mann’s Public Enemies (OTM) with a reference to Herman Melville’s Moby Dick, where he mentions how that book was, among other things, about God.  While it wasn’t a spiritual text in any traditional sense of the term, it nevertheless pointed to the finitude of human beings and to our heavy reliance on one another, and highlighted the fact that the natural world we live in is entirely indifferent to our needs and desires at best, if not outright threatening to our imperative of self-preservation.  Davidson also points out another theme present in Moby Dick, the idea that people corrupt institutions rather than the other way around—a theme that we’ll soon see as relevant to Michael Mann’s Public Enemies (PE).  But beyond this human attribute of fallibility, and in some ways what reinforces its instantiation, is our incessant desire to find satisfaction in something greater than ourselves, often taken to be some kind of conception of God.  It is in this way that Davidson refers to Melville’s classic as “a spiritual treatise par excellence.”

Mann’s films, which are often a cinematic dichotomous interplay of “cops and robbers” or “predator and prey,” are, on a much more basic level, about “freedom and control.”  We also see his filmography as colored with a dejection of, or feeling of malaise with respect to, the modern world.  And here, Mann’s films can also be seen as spiritual in the sense that they make manifest a form of perspectivism centered around a denunciation of modernity, or at least a disdain for the many depersonalizing or externally imposed meta-narratives that it’s generated.  Schwartz explores this spiritual aspect of Mann’s work, most especially as it relates to PE, but also connects it to the (more explicitly spiritual) works of directors like Terrence Malick (The Thin Red Line, The New World, The Tree of Life, to name a few).  Schwartz proceeds to give us his own understanding of what Mann accomplished in PE, despite there being a problematic, irreconcilable set of interpretations, as he himself admits: “Interpreting Public Enemies is troubling because it has theological and philosophical precepts that are rife with contradictions.”  This should be entirely expected, however, when PE is considered within the broader context of Michael Mann’s work generally, for Mann’s entire milieu is formulated on paradox:

“Michael Mann is a director of contradictions: aesthetic and didactive, abstract and concrete, phantasmagorical and brutally tactile, expressionistic and anthropological, heralding the individual and demanding social responsibility, bold experimenter and studio genre administrator, romantic and futurist, Dillinger freedom seeker and Hoover control freak, outsider and insider.”

Regardless of this paradoxical nature that is ubiquitous throughout Mann’s filmography, Schwartz provides a light to take us through a poetic journey of Mann’s work and his vision, most especially through a meticulous view of PE, and all within a rich and vividly structured context centered on the significance of, and relation between, freedom, control, and of course, the future.  His reference to “the future” in OTM is multi-dimensional to say the least, and the one I personally find the most interesting, cohesive, and salient.  Among other things, it’s referring to John Dillinger’s hyper-focused projection toward the future, where his identity is much more defined by where he wants to be (an abstract, utopian future that is “off the map,” or free from the world of control and surveillance) rather than defined by where he’s been (fueled by the obvious fact that he’s always on the run), and this is so even if he also seems to be stuck in the present, with Dillinger’s phenomenology as well as the film’s structure often traversing from one fleeting moment to the next.  But I think we can take Dillinger at his own word as he tells his true love, Billie Frechette, after whisking her away to dine in a high-class restaurant: “That’s ‘cuz they’re all about where people come from.  The only thing that’s important, is where somebody’s going.” 

This conception of “the future” is also referencing the transcendent quality of being human, where our identity is likewise defined in large part by the future, our life projects, and our striving to become a future version of ourselves, however idealized or mythologized that future-self conception may be (I think it’s fair to say Dillinger’s was, to a considerable degree).  The future is a reference to where our society is heading, how our society is becoming increasingly automated, taken over by a celeritously expanding cybernetic infrastructure of control, evolving and expanding in parallel with technology and our internet-catalyzed global interconnectedness.  Our lives are being built upon increasing layers of abstraction as our form of organized life continues to trudge along its chaotic arc of cultural evolution, and we’re losing more and more of our personal freedom as a result of evermore external influences, operating on a number of different levels (socially, politically, technologically), known and unknown, consciously and unconsciously.  In PE, the expanding influences were best exemplified by the media, the mass-surveillance, and of course Hoover’s Bureau and administration, along with the arms race taking place between Dillinger’s crew and the FBI (where the former gave the latter a run for their money).

As Schwartz explains about J. Edgar’s overreach of power: “Hoover’s Bureau is increasingly amoral as it reaches for a kind of Hobbesian, sovereign super control.”  Here of course we get our first glimpse of a noteworthy dualism, namely freedom and control, and the myriad ways that people corrupt institutions (as Davidson explored in his foreword), though contrary to Davidson’s claim, it seems undeniable that once an institution has become corrupted by certain people, that institution is more likely to corrupt other individuals, both internally and externally (and thus institutions do corrupt individuals, not merely the other way around).  If bad ideas are engineered into our government’s structure, our laws, our norms, our capitalist market, or any other societal system, they can seemingly take off on their own, reinforced by the structure itself and the automated information processing that’s inherent to bureaucracies, the media, our social networks, and even to us as individuals who are so often submerged in a barrage of information.

Relating Hoover to the power and influence of the media, Schwartz not only mentions that PE is undoubtedly “conscientious of how media semiotics affect and control people,” but he also mentions a piece of dialogue that stood out to me as well: FBI Director Hoover (Billy Crudup), having just left the Congressional Hearing Room after being chastised by Senator McKellar (Ed Bruce), says to his deputy, Clyde Tolson (Chandler Williams): “If we will not contest him in his committee room, we will fight him on the front page.”  This gives us a glimpse of the present day, where news (whether “fake” news or not) and the overall sensationalism of a story are shown to be incredibly powerful at manipulating the masses and profoundly altering our trajectory, one (believed) story at a time.  If you can get somebody to believe the story, whether based on truth or deception, the battle is half won already.  Even Dillinger himself, whose own persona is wrapped in a cloud of mythology, built up by Hollywood and the media’s portrayal of his life and image, shows us in a very concrete way just how far deception can get you.

For example, Schwartz reminds us of how Dillinger managed to escape through six doors of Crown Point jail with nothing other than a mock gun made of wood.  Well, nothing but a mock gun and a hell of a good performance, which is the key point here, since the gun was for all practical purposes real.  By the time Dillinger breaks into Warden Baker’s office to steal some Tommy guns before finishing his escape, Warden Baker (David Warshofsky) even says to him, “That wasn’t real, was it?”  This resonated with the idea of how powerful persuasion and illusion can be in our lives, and maybe indirectly shows us that what is real to us, in the ways that matter most, is defined by what’s salient to us in our raw experience and what we believe to be true, since that’s all that affects our behavior anyway.  The guards believed Dillinger’s mock gun was real, and so it was real, just as a false political campaign promise is real, or a bluffed winning hand in poker, or any other force, whether operating under the pretenses of honesty or deception, pushing us individually and collectively in one direction or another.

The future is also a reference to Michael Mann’s 2015 cyberthriller Blackhat—a movie that Mann had been building up to, and a grossly underappreciated one at that, which PE foreshadows in various ways.  This progression and relation between PE and Blackhat is in fact central to Schwartz’s principal aim in OTM:

“My aim in this book is to explore Public Enemies as an extraordinary accomplishment against a backdrop of other digital films, its meditations on the form precipitating Blackhat, Mann’s stunning and widely ignored cyberthriller that converts the movie-house celluloid of its predecessors into a beguiling labyrinth of code that’s colonized the heretofore tangible firmament right under our noses.”

And what an extraordinary accomplishment it is; fortunately for us, we’re in a better position to appreciate it after reading Schwartz’s highly perceptive analysis of such a phenomenal artist.  In PE, the future is essentially projected into the past, which is interesting to me in its own right, since this is also the case with human memory, where we reconstruct our memories upon recall in ways that are augmented by, and amalgamated with, our present knowledge, despite not having had such knowledge when the memory itself was first formed.  So too in PE, we see a number of questions posed, especially as they relate to the existentialist movement, which hadn’t been well-developed or nearly as influential until some time after the 1930s (especially after WWII, with the rising influence of existential philosophers including Sartre, Camus, and Heidegger), and as they relate to the critical theory stemming from the Frankfurt School of social research (Marcuse, Adorno, Horkheimer, et al.), neither of which was nearly as pertinent or socially and politically relevant then as it is now in the present day.  And this is where Blackhat becomes decidedly apropos.

So what future world was foreshadowed in PE?  Schwartz describes the world in Blackhat as an utterly subliminal and cybernetic realm: “The world is pattern recognition and automatic information processing, the stuff of advertising.”  I would argue, though, that our entire phenomenology is fundamentally based on pattern recognition, where our perception, imagination, actions, and even desires are mediated by the models of the world’s causal structure that our brains create and infer through our experiences.  But this doesn’t mean that our general schema of pattern recognition hasn’t been influenced by modernity, such that we’ve become noticeably more automated and many have seemingly lost their capacity for contemplative reflection and lost the historically less-hindered psychological channel between reason and emotion.  The Blackhat world Schwartz is describing here is one where the way information is being processed is relatively alien to the more traditional conceptions of what it means to be human.  And this cybernetic world is a world where cultural and technological evolution are accelerating far faster than our biological evolution ever could (though genetic engineering has the potential to re-synchronize the two if we dedicate ourselves to such an ambitious project), and this bio-cultural desynchronization has made us increasingly maladapted to the Digital Age we’ve now entered.  Furthermore, this new world has made us far more susceptible to subliminal, corporatocratic, and sociopolitical influences, and it has driven us toward an increasing reliance on a cybernetically controlled way of life.

The social relevance of these conceptions makes Blackhat a much-needed lens for fully appreciating our current existential predicament, and as Schwartz says of Mann’s (perhaps unavoidably ironic) digital implementation of this somewhat polemical techno-thriller:

“Conversely, Mann’s embrace of the digital is a paradoxical realization of tactile historical and spatial phenomenology, lucidly picturing an end of identity, while leaping, as through faith, toward the possibility of individuation in nature, free from institutional conscriptions and the negative assignments of cybernetics.”

Schwartz illustrates that concomitant with identity, “… the film prompts us to ask where nature ends and the virtual begins,” though perhaps we could also infer that in Blackhat there’s somewhat of a dissolution of the boundary (or at least a feeling of arbitrariness regarding how the boundary is actually defined) between the real and the virtual, the natural and the artificial, the human and the transhuman.  And maybe each of these fuzzy boundaries implies how best to resolve the contradictions in Mann’s work that Schwartz describes in OTM, with this possible resolution coming about through a process of symbiotic fusion or some kind of memetic “speciation” transitionally connecting what seem to be distinct concepts through a brilliantly structured narrative.

And to take the speciation analogy even further, I think it can also be applied to the changes in filmmaking and culture (including the many changes that Schwartz covers in his tour de force), where many of the changes are happening so gradually that we simply fail to notice them, at least not until a threshold of change has occurred.  But there’s also a punctuated equilibrium form of speciation in filmmaking, where occasionally a filmmaker does something extraordinary in one fell swoop, setting the bar for a new genre of cinema, just as James Cameron arguably did with the heavily CGI-amalgamated world in Avatar (with his planet Pandora “doubling for the future of cinema [itself]…” as Schwartz mentions), and to a somewhat lesser technological extent, in Michael Mann’s PE, where even though the analog to digital leap had already been made by others, “Mann’s distinctly idiosyncratic use of HD cameras rattled viewers with its alien video-ness, explicating to viewers that they were perched on a separate filmic architecture that may require a new way of seeing.”

Similarly in Blackhat, Mann takes us through seamless transitions of multiple scales of both time and space, opening up a window that allows us to see how mechanized and deterministic our modern world is, from the highest cosmological scales, down to cities populated with countless complex yet predictable humans, and finally down to the digital information processing schema at the micro and nanotechnological scales of transistors.  Within each perspective level, we fail to notice the causal influences operating at the levels above or below, and yet Mann seamlessly joins these together, showing us a kind of fractal recapitulation that we wouldn’t otherwise fully appreciate.  After reflecting on many of the references to freedom Schwartz posits in OTM, I’ve begun to more seriously ponder over the idea that human freedom is ultimately in the eye of the beholder, dependent on one’s awareness of what is being controlled by another, what is predictable and what isn’t, and one’s perception of what constitutes self-authored behavior or a causa sui formulation of free will.  Once we realize that, at the smallest scales, existence is governed by deterministic processes infused with quantum randomness, it is less surprising to see the same kind of organized, predictable causal structure at biological, sociological, and even cosmological scales.

Aside from Blackhat, Schwartz seamlessly ties together a number of Mann’s other films including Thief, Miami Vice, and my personal favorite, Heat.  There’s also a notable progression or at least an underlying and evolving conceptual structure connecting characters and ideas from one film to the next (above and beyond the transition from PE to Blackhat), as Schwartz eloquently points out, which I see as illustrating how various salient psychological and sociological forces and manifestations are so often reiterated in multiple contexts varying in time and space.  Clearly Mann is building off of his previous work, adapting previous ideas to new narratives, and doing so while continuously trying to use, as Schwartz puts it: “alchemic cinema tools to open a window and transform our perception,” thus giving us a new way to view the world and our own existential status.

An important dynamic that Schwartz mentions, not only as it pertains to Mann’s films, but of (especially well-crafted) films in general, is the notable interplay between the audience and the film or “the image.”  The image changes us, which in turn changes the image ad infinitum, establishing a feedback loop and a co-evolution between the viewer’s interpretation of the film as it relates to their own experiences, and the ongoing evolution of every new cinematic product.  There may be some indoctrinatory danger in this cycle if the movie you’re engaging with is a mass-produced product, since this product is, insofar as it’s mass-produced, deeply connected to “the system”, indeed the very same system trying to capture John Dillinger in PE.  And yet, even though the mass-produced product is a part of the system, and despite its being a cog in the wheel of what we might call a cybernetic infrastructure of control, Schwartz highlights an important potentiality in film viewing that is often taken for granted:

“…The staggering climax inside the Biograph beseeches us to aspire to the images conscientiously, reconciling the mass-produced product with our private histories and elevating the picture and our lives with the media.”

In other words, even in the case of mass-produced cinema, we as viewers stand to gain a valuable experience, and possibly a spiritual or philosophical one at that, by forming a personal relationship with the film, synthesizing the external stimuli with our inner sense of self, coalescing with the characters and integrating them into our own relationships and into ourselves, thus expanding our set of perspectives by connecting to someone “off the map”.  On the other hand, Schwartz also mentions Herbert Marcuse’s views, which aptly describe the inherent problem of art insofar as it becomes a part of “the system”:

“…Herbert Marcuse writes that as long as art has become part of the system, it ceases in questioning it, and thus impedes social change.  Poetic language must transcend the “real” world of the society, and in order to transcend that world it must stand opposed to it, questioning it, quelling us out of it.”

But we also need to realize that art will inevitably be influenced by the system, because there are simply too many subliminal or even explicit system-orchestrated ideas that the artist (and everyone else in society) has been instilled with, even if entirely unbeknownst to them.  It seems that the capacity for poetic language to transcend the real world of the society lies in its simply providing any new perspective at all, making use of allegory, metaphor, and the crossing of contextual boundaries, and it can do so even if this new perspective doesn’t necessarily or explicitly oppose some other (even mainstream) perspective.

This is not in any way to discount Marcuse’s overarching point that art’s connection to the system limits its efficacy in enacting social change and its ability to positively feed the public’s collective stream of consciousness, but merely to point out that art’s propensity for transcending the status quo isn’t entirely inhibited by its unavoidable connection to the system.  And to once again bring us back to the scene in PE where Dillinger is fully absorbing (or being fully absorbed by) his viewing of Manhattan Melodrama in the Biograph theater, Schwartz seems to describe the artistic value of a film as something at least partially dependent on whether or not it facilitates a personal and transcendent experience in any of the viewers:

“Cinema is elevated to a ceremony of transubstantiation where fixed bodies are resurrected through the mercurial alchemy of speed and light, contradicting the consumption we saw earlier during the newsreel.  Dillinger’s connection to cinema is a meditative and private one…”

I think it’s fair to say that, if there’s anything that can elevate our own experience of Mann’s work in particular, and film viewing more generally, Schwartz’s Off the Map is a great philosophical and spiritual primer to help get us there.  Both comprehensive and brilliantly written, Schwartz’s contribution to film scholarship should become required reading for anybody interested in cinema and film viewing.

Irrational Man: An Analysis (Part 4, Chapter 11: The Place of the Furies)

In my last post in this series on William Barrett’s Irrational Man, I examined some of the work of existential philosopher Jean-Paul Sartre, which concluded part 3 of Barrett’s book.  The final chapter, Ch. 11: The Place of the Furies, will be briefly examined here, and this will conclude my eleven-part post series.  I’ve enjoyed this very much, but it’s time to move on to new areas of interest, so let’s begin.

1. The Crystal Palace Unmanned

“The fact is that a good dose of intellectualism-genuine intellectualism-would be a very helpful thing in American life.  But the essence of the existential protest is that rationalism can pervade a whole civilization, to the point where the individuals in that civilization do less and less thinking, and perhaps wind up doing none at all.  It can bring this about by dictating the fundamental ways and routines by which life itself moves.  Technology is one material incarnation of rationalism, since it derives from science; bureaucracy is another, since it aims at the rational control and ordering of social life; and the two-technology and bureaucracy-have come more and more to rule our lives.”

Regarding the importance and need for more intellectualism in our society, I think this can be better described as the need for more critical thinking skills and the need for people to be able to discern fact from fiction, to recognize their own cognitive biases, and to respect the adage that extraordinary claims require extraordinary evidence.  At the same time, in order to appreciate the existentialist’s concerns, we ought to recognize that there are aspects of human psychology, including certain psychological needs, that are inherently irrational, including with respect to how we structure our lives, how we express our emotions and creativity, how we maintain a balance in our psyche, etc.  But since technology is not only a material incarnation of rationalism but also an outlet for our creativity, there has to be a compromise here: we shouldn’t abandon technology, but simply keep it in check such that it doesn’t impede our psychological health and our ability to live a fulfilling life.

“But it is not so much rationalism as abstractness that is the existentialists’ target; and the abstractness of life in this technological and bureaucratic age is now indeed something to reckon with.  The last gigantic step forward in the spread of technologism has been the development of mass art and mass media of communication: the machine no longer fabricates only material products; it also makes minds. (stereotypes, etc.).”

Sure enough, we’re living in a world where many of our occupations are but one of many layers of abstraction constituting our modern “machine of civilization”.  And the military-industrial complex that has taken over the modern world has certainly gone beyond the mass production of physical stuff to be consumed by the populace, and now includes the mass production and viral dissemination of memes as well.  Ideas can spread like viruses, and in our current globally interconnected world (which Barrett hadn’t yet seen to the same degree when writing this book), the spread of these ideas is much faster and more influential on culture than ever before.  The degree of indoctrination, and the perpetuated cycles of co-dependence between citizens and the corporatocratic, sociopolitical forces ruling our lives from above, have resulted in making our way of life and our thinking much more collective and less personal than at any other time in human history.

“Kierkegaard condemned the abstractness of his time, calling it an Age of Reflection, but what he seems chiefly to have had in mind was the abstractness of the professorial intellectual, seeing not real life but the reflection of it in his own mind.”

Aside from the increasingly abstract nature of modern living then, there’s also the abstractness that pervades our thinking about life, which detracts from our ability to actually experience life.  Kierkegaard had a legitimate point here: theory cannot be a complete substitute for practice, and thought cannot be a complete substitute for action.  We certainly don’t want the reverse either, since action without sufficient forethought leads to foolishness and bad consequences.

I think the main point here is that we don’t want to miss out on living life by thinking about it too much.  While living a philosophically examined life is beneficial, it remains an open question exactly what balance is best for any particular individual to live the most fulfilling life.  In the meantime, we ought to simply recognize that there is the potential for an imbalance, and try our best to avoid it.

“To be rational is not the same as to be reasonable.  In my time I have heard the most hair-raising and crazy things from very rational men, advanced in a perfectly rational way; no insight or feelings had been used to check the reasoning at any point.”

If you ignore our biologically-grounded psychological traits, or ignore the fact that there’s a finite range of sociological conditions for achieving human psychological health and well-being, then you can’t develop any theory that’s supposed to apply to humans and expect it to be reasonable or tenable.  I would argue that ignoring this subjective part of ourselves when making theories that are supposed to guide our behavior in any way is irrational, at least within the greater context of aiming to have not only logically valid arguments but logically sound arguments as well.  But, if we’re going to exclude the necessity for logical soundness in our conception of rationality, then the point is well taken.  Rationality is a key asset in responsible decision making but it should be used in a way that relies on or seeks to rely on true premises, taking our psychology and sociology into account.

“The incident (making hydrogen bombs without questioning why) makes us suspect that, despite the increase in the rational ordering of life in modern times, men have not become the least bit more reasonable in the human sense of the word.  A perfect rationality might not even be incompatible with psychosis; it might, in fact, even lead to the latter.”

Again, I disagree with Barrett’s use or conception of rationality here, but semantics aside, his main point still stands.  I think that we’ve been primarily using our intelligence as a species to continue to amplify our own power and maximize our ability to manipulate the environment in any way we see fit.  But as our technological capacity advances, our cultural evolution is getting increasingly out of sync with our biological evolution, and we haven’t come anywhere close to sufficiently taking our psychological needs and limitations into account as we continue to engineer the world of tomorrow.

What we need is rationality that relies on true or at least probable premises, since that combination is what should actually lead a person to what is reasonable.  I have no doubt that without a virtue such as reasonableness, rationality can become dangerous and destructive, but this is the case with every tool we use, whether rationality or any other capacity, mental or material; tools can always be harmful when misused.

“If, as the Existentialists hold, an authentic life is not handed to us on a platter but involves our own act of self-determination (self-finitization) within our time and place, then we have got to know and face up to that time, both in its (unique) threats and its promises.”

And rationality, exercised at the individual level, should help us reach this goal of self-finitization, so that we can balance the benefits of the collective with the freedom of each individual that makes up that collective.

“I for one am personally convinced that man will not take his next great step forward until he has drained to the lees the bitter cup of his own powerlessness.”

And when a certain kind of change is perceived as something to be avoided, the only way to go through with it is to perceive the status quo as something that needs to be avoided even more, so that a foray out of our comfort zone registers as an actual improvement to our way of life.  But as long as we see ourselves as already all-powerful and masters of our domain, we won’t take any major leap in a new direction of individual and collective improvement.  We need to come to terms with our current position in the modern world so that we can truly see what needs repair.  Whether or not we succumb to one or both of the two major existential threats facing our species, climate change and nuclear war, is contingent on whether we set our eyes on a new prize.

“Sartre recounts a conversation he had with an American while visiting in this country.  The American insisted that all international problems could be solved if men would just get together and be rational; Sartre disagreed and after a while discussion between them became impossible.  “I believe in the existence of evil,” says Sartre, “and he does not.” What the American has not yet become aware of is the shadow that surrounds all human Enlightenment.”

Once again, if rationality is accompanied by true premises that take our psychology into account, then international problems could be solved (or many of them at least); but Sartre is also right insofar as bad ideas exist, along with people who have built their lives around them.  It’s not enough to have people thinking logically, nor is some kind of rational idealism up to the task of dealing with human emotion, cognitive biases, psychopathy, and the other complications of human behavior.

The crux of the matter is that some ideas hurt us and other ideas help us, with our perception of these effects being the main arbiter driving our conception of what is good and evil; but there’s also a disparity between what people think is good or bad for them and what is actually good or bad for them.  I think that one could very plausibly argue that if people really knew what was good or bad for them, then applying rationality (with true or plausible premises) would likely work to solve a number of issues plaguing the world at large.

“…echoing the Enlightenment’s optimistic assumption that, since man is a rational animal, the only obstacles to his fulfillment must be objective and social ones.”

And here’s an assumption that’s certainly difficult to ground, since it’s based on a false premise, namely that humans are inherently rational.  We are unique in the animal kingdom in the sense that we are the only animal (or one of only a few) with the capacity for rational thought, foresight, and the complex level of organization made possible by their use.  I also think that the obstacles to fulfillment are objective, since they can be described as facts pertaining to our psychology, sociology, and biology, even if our psychology (for example) is instantiated in a subjective way.  In other words, our subjectivity and conscious experiences are grounded in, or describable in, objective terms relating to how our particular brains function, how humans as a social species interact with one another, and so on.  But fulfillment can never be reached, let alone maximized, without taking our psychological traits and idiosyncrasies into account, for these are the ultimate constraints on what can make us happy, satisfied, and fulfilled.

“Behind the problem of politics, in the present age, lies the problem of man, and this is what makes all thinking about contemporary problems so thorny and difficult…anyone who wishes to meddle in politics today had better come to some prior conclusions as to what man is and what, in the end, human life is all about…The speeches of our politicians show no recognition of this; and yet in the hands of these men, on both sides of the Atlantic, lies the catastrophic power of atomic energy.”

And as of 2018, we’ve seen the Doomsday Clock reach two minutes to midnight, having inched one minute closer to our own destruction since 2017.  The dominance hierarchy led and reinforced by the corporatocratic plutocracy is locked into a narrow form of tunnel vision, hell-bent on maintaining if not exacerbating the wealth and power disparities that plague our country and the world as a whole, despite the fact that this is neither sustainable in the long run nor best for the fulfillment of those promoting it.

We the people do share a common goal of trying to live a good, fulfilling life; to have our basic needs met, and to have no fewer rights than anybody else in society.  You’d hardly know that this common ground exists when looking at the state of our political sphere, which is likely as polarized now in the U.S. (if not more so) as it was even during the Civil War.  Clearly, we have a lot of work to do to reevaluate what our goals and priorities ought to be, and we need a realistic and informed view of what it means to be human before any of these goals can be realized.

“Existentialism is the counter-Enlightenment come at last to philosophic expression; and it demonstrates beyond anything else that the ideology of the Enlightenment is thin, abstract, and therefore dangerous.”

Yes, but the Enlightenment has also been one of the main driving forces leading us out of theocracy and scientific illiteracy and toward an appreciation of reason and evidence (something the U.S., at least, is in short supply of these days), and thus it has been crucial in giving us the means for increasing our standard of living and solving many of our problems.  While the technological advancements derived from the Enlightenment have also contributed to many of our problems, the current existential threats we face, including climate change and nuclear war, are more likely to be solved by new technologies than by an abolition of technology or of the Enlightenment brand of thinking that led to technological progress.  We simply need to better inform our technological goals with the actual needs and constraints of human beings, our psychology, and so on.

“The finitude of man, as established by Heidegger, is perhaps the death blow to the ideology of the Enlightenment, for to recognize this finitude is to acknowledge that man will always exist in untruth as well as truth.  Utopians who still look forward to a future when all shadows will be dispersed and mankind will dwell in a resplendent Crystal Palace will find this recognition disheartening.  But on second thought, it may not be such a bad thing to free ourselves once and for all from the worship of the idol of progress; for utopianism-whether the brand of Marx or Nietzsche-by locating the meaning of man in the future leaves human beings here and now, as well as all mankind up to this point, without their own meaning.  If man is to be given meaning, the Existentialists have shown us, it must be here and now; and to think this insight through is to recast the whole tradition of Western thought.”

And we ought to take our cue from Heidegger, at the very least, and admit that we are finite, that our knowledge is limited, and that it always will be.  We will not be able to solve every problem, and we would do ourselves and the world a lot of good by admitting our own limitations.  But to avoid being overly cynical and simply damning progress altogether, we need to look for new ways of solving our existential problems.  Part of the solution I see for humanity moving forward is a combination of advancements in a few different fields.

By making better use of genetic engineering, we’ll one day have the ability to change ourselves in remarkable ways in order to become better adapted to our current world.  We will be able to re-sync our biological evolution with our cultural evolution so we no longer feel uprooted, like a fish out of water.  Continuing research in neuroscience will allow us to learn more about how our brains function and how to optimize that functioning.  Finally, the strides we make in computing and artificial intelligence should allow us to vastly improve our simulation power and arm us with greater intelligence for solving all the problems that we face.

Overall, I don’t see progress as the enemy, but rather that we have an alignment problem between our current measures of progress, and what will actually lead to maximally fulfilling lives.

“The realization that all human truth must not only shine against an enveloping darkness, but that such truth is even shot through with its own darkness may be depressing, and not only to utopians…But it has the virtue of restoring to man his sense of the primal mystery surrounding all things, a sense of mystery from which the glittering world of his technology estranges him, but without which he is not truly human.”

And if we actually use technology to change who we are as human beings, by altering the course of natural selection and our ongoing evolution (which is bound to happen with or without our influence, for better or worse), then it’s difficult to say what the future really holds for us.  There are cultural and technological forces that are leading to transhumanism, and this may mean that one day “human beings” (or whatever name is chosen for the new species that replaces us) will be inherently rational, or whatever we’d like our future species to be.  We’ve stumbled upon the power to change our very nature, and so it’s far too naive, simplistic, unimaginative, and short-sighted to say that humans will “always” or “never” be one way or another.  Even if this were true, it wouldn’t negate the fact that one day modern humans will be replaced by a superseding species which has different qualities than what we have now.

2. The Furies

“…Existentialism, as we have seen, seeks to bring the whole man-the concrete individual in the whole context of his everyday life, and in his total mystery and questionableness-into philosophy.”

This is certainly an admirable goal to combat simplistic abstractions of what it means to be human.  We shouldn’t expect to be able to abstract a certain capacity of human beings (such as rationality), consider it in isolation (no consideration of context), formulate a theory around that capacity, and then expect to get a result that is applicable to human beings as they actually exist in the world.  Furthermore, all of the uncertainties and complexities in our human experiences, no matter how difficult they may be to define or describe, should be given their due consideration in any comprehensive philosophical system.

“In modern philosophy particularly (philosophy since Descartes), man has figured almost exclusively as an epistemological subject-as an intellect that registers sense-data, makes propositions, reasons, and seeks the certainty of intellectual knowledge, but not as the man underneath all this, who is born, suffers, and dies…But the whole man is not whole without such unpleasant things as death, anxiety, guilt, fear and trembling, and despair, even though journalists and the populace have shown what they think of these things by labeling any philosophy that looks at such aspects of human life as “gloomy” or “merely a mood of despair.”  We are still so rooted in the Enlightenment-or uprooted in it-that these unpleasant aspects of life are like Furies for us: hostile forces from which we would escape (the easiest way is to deny that the Furies exist).”

My take on all of this is simply that multiple descriptions of human existence are needed to account for all of our experiences, thoughts, values, and behavior.  And it is what we value given our particular subjectivity that needs to be primary in these descriptions, and primary with respect to how we choose to engineer the world we live in.  Who we are as a species is a complex mixture of good and bad, lightness and darkness, and stability and chaos; and we shouldn’t deny any of these attributes nor repress them simply because they make us uncomfortable.  Instead, we would do much better to face who we are head on, and then try to make our lives better while taking our limitations into account.

“We are the children of an enlightenment, one which we would like to preserve; but we can do so only by making a pact with the old goddesses.  The centuries-long evolution of human reason is one of man’s greatest triumphs, but it is still in process, still incomplete, still to be.  Contrary to the rationalist tradition, we now know that it is not his reason that makes man man, but rather that reason is a consequence of that which really makes him man.  For it is man’s existence as a self-transcending self that has forged and formed reason as one of its projects.”

This is well said, although I’d prefer to couch it in different terms: it is ultimately our capacity to imagine that makes us human and allows us to transcend our present selves by pointing toward a future self and a future world.  We do this in part by updating our models of the world, simulating new worlds, and trying to better understand the world we live in by engaging with it, re-shaping it, and finding ways of better predicting its causal structure.  Reason and rationality have been integral in updating our models of the world, but they’ve also been hijacked to some degree by a kind of supernormal stimulus, reinforced by technology and by our living in a world that is entirely alien to our evolutionary niche, and these forces have largely dominated our lives in a way that overlooks our individualism, emotions, and feelings.

Abandoning reason and rationality is certainly not the answer to this existential problem; rather, we just need to ensure that they’re being applied in a way that aligns with our moral imperatives and that they’re ultimately grounded on our subjective human psychology.

Irrational Man: An Analysis (Part 3, Chapter 8: Nietzsche)

In the last post in this series on William Barrett’s Irrational Man, we began exploring Part 3: The Existentialists, beginning with chapter 7 on Kierkegaard.  In this post, we’ll be taking a look at Nietzsche’s philosophy in more detail.

Ch. 8 – Nietzsche

Nietzsche shared many of the same concerns as Kierkegaard, given the new challenges of modernity brought about by the Enlightenment; but each of these great thinkers approached this new chapter of our history from perspectives that were, in many ways, diametrically opposed.  Kierkegaard had a grand goal of sharing with others what he thought it meant to be a Christian, and he tried to revive Christianity within a society that he found to be increasingly secularized.  He also saw that what Christianity did exist was largely organized and depersonalized, and he felt the need to stress the importance of a much more personal form of the religion.  Nietzsche, on the other hand, an atheist, rejected Christianity and stressed the importance of re-establishing our values given that modernity now found itself in a godless world: a culturally Christianized world, but one now without a Christ.

Despite their differences, both philosophers felt that discovering the true meaning of our lives was paramount to living an authentic life, and this new quest for meaning was largely related to the fact that our view of ourselves had changed significantly:

“By the middle of the nineteenth century, as we have seen, the problem of man had begun to dawn on certain minds in a new and more radical form: Man, it was seen, is a stranger to himself and must discover, or rediscover, who he is and what his meaning is.”

Nietzsche also understood and vividly illustrated the crucial differences between mankind and the rest of nature, most especially our self-awareness:

“It was Nietzsche who showed in its fullest sense how thoroughly problematical is the nature of man: he can never be understood as an animal species within the zoological order of nature, because he has broken free of nature and has thereby posed the question of his own meaning-and with it the meaning of nature as well-as his destiny.”

And for Nietzsche, the meaning one was to find for one’s life included first abandoning what he saw as an oppressive and disempowering Christian morality, which he believed severely inhibited our potential as human beings.  He was much more sympathetic to Greek culture and morality, and even though (perhaps ironically) Christianity itself was derived from a syncretism between Judaism and Greek Hellenism, its moral and psychological differences were too substantial to escape Nietzsche’s criticism.  He thought that a return to a more archaic form of Greek culture and mentality was the solution we needed to restore, or at least re-validate, our instincts:

“Dionysus reborn, Nietzsche thought, might become a savior-god for the whole race, which seemed everywhere to show symptoms of fatigue and decline.”

Nietzsche learned about the god Dionysus while he was studying Greek tragedy, and he gravitated toward the Dionysian symbolism, for it contained a kind of reconciliation between high culture and our primal instincts.  Dionysus was after all the patron god of the Greek tragic festivals and so was associated with beautiful works of art; but he was also the god of wine and ritual madness, bringing people together through the grape harvest and the joy of intoxication.  As Barrett describes it:

“This god thus united miraculously in himself the height of culture with the depth of instinct, bringing together the warring opposites that divided Nietzsche himself.”

It was this window into the hidden portions of our psyche that Nietzsche tried to look through, and it became an integral basis for much of his philosophy.  But the chaotic nature of our species, combined with our immense power to manipulate the environment, also concerned him greatly; he thought that modernity’s repression of many of our instincts needed to be circumvented, lest we continue to build up this tension in our unconscious only for it to release itself in an explosive eruption of violence:

“It is no mere matter of psychological curiosity but a question of life and death for man in our time to place himself again in contact with the archaic life of his unconscious.  Without such contact he may become the Titan who slays himself.  Man, this most dangerous of the animals, as Nietzsche called him, now holds in his hands the dangerous power of blowing himself and his planet to bits; and it is not yet even clear that this problematic and complex being is really sane.”

Indeed, our species has been removed from its natural evolutionary habitat, and we’ve built our modern lives around socio-cultural and technological constructs that have, in many ways at least, inhibited the open channel between our unconscious and conscious mind.  One could even say that the prefrontal cortex, important as it is for conscious planning and decision making, has been effectively hijacked by human culture (memetic selection), and now it’s constantly running on overtime, over-suppressing the emotional centers of the brain (such as the limbic system).

While we’ve come to realize that emotional suppression is sometimes necessary in order to function well as a cooperative social species that depends on complex long-term social relationships, we also need to maintain a healthy dose of emotional expression as well.  If we can’t release this more primal form of “psychological energy” (for lack of a better term), then it shouldn’t be surprising to see it eventually manifest into some kind of destructive behavior.

We ought not forget that we’re just like other animals in that our brains are best adapted to a specific balance of behaviors and cognitive functions; and if this balance is disrupted through hyperactive or hypoactive use of one region or another, doesn’t it stand to reason that we may wind up with impaired brain function or less healthy behavior?  If we’re not making sufficient use of our emotional capacities, shouldn’t we expect them to diminish or atrophy in some way?  And if some neurological analogue of the competitive exclusion principle exists, it would further reinforce the idea that suppressing emotion with other capacities may cause those other capacities to become permanently dominant.  We can only imagine the kinds of consequences that ensue when this psychological impairment falls upon a species with our level of power.

1.  Ecce Homo

“‘In the end one experiences only oneself,’ Nietzsche observes in his Zarathustra, and elsewhere he remarks, in the same vein, that all the systems of the philosophers are just so many forms of personal confession, if we but had eyes to see it.”

Here Nietzsche expresses one of his central ideas: we can’t separate ourselves from our thoughts, and therefore the ideas put forward by any philosopher betray some set of values they hold, whether unconsciously or explicitly, as well as their individual perspective; an unavoidable perspective stemming from subjective experience, which colors our interpretation of those experiences.  Perspectivism, the view that reality and our ideas about what’s true and what we value can be mapped onto many different possible conceptual schemes, was a big part of Nietzsche’s overall philosophy, and it was interwoven into his ideas on meaning, epistemology, and ontology.

As enlightening as Nietzsche was in giving us a much needed glimpse into some of the important psychological forces that give our lives the shape they have (for better or worse), his written works show little if any psychoanalytical insight applied to himself, in terms of the darker and less pleasant side of his own psyche.

“Nietzsche’s systematic shielding of himself from the other side is relevant to his explanation of the death of God: Man killed God, he says, because he could not bear to have anyone looking at his ugliest side.”

But even if he didn’t turn his psychoanalytical eye inward toward his own unconscious, he was still very effective in shining a light on the collective unconscious of humanity as a whole.  It’s not enough that we marvel over the better angels of our nature, important as it is to acknowledge and understand our capacities and our potential for bettering ourselves and the world we live in; we also have to acknowledge our flaws and shortcomings.  And while we may try to overcome these weaknesses in one way or another, some of them are simply a part of our nature, and we should accept them as such even if we don’t like them.

One of the prevailing theories stemming from Western Rationalism (mentioned earlier in Barrett’s book) was that human beings are inherently rational and perfectible, and that realizing this potential lay within the realm of possibility; but this kind of wishful thinking relied on a fundamental denial of an essential part of the kind of animal we really are.  And Nietzsche did a good job of dragging this fact into the limelight and explaining how we couldn’t truly realize our potential until we came to terms with this fairly ugly side of ourselves.

Barrett distinguishes between the attitudes stemming from various forms of atheism, including Nietzsche’s:

“The urbane atheism of Bertrand Russell, for example, presupposes the existence of believers against whom he can score points in an argument and get off some of his best quips.  The atheism of Sartre is a more somber affair, and indeed borrows some of its color from Nietzsche: Sartre relentlessly works out the atheistic conclusion that in a universe without God man is absurd, unjustified, and without reason, as Being itself is.”

In fact, Nietzsche was more concerned with the atheists who hadn’t yet accepted the full ramifications of a godless world than with believers themselves.  He felt that those who believed in God still had a more coherent belief system, since many of their moral values and the meaning they attached to their lives were largely derived from their religious beliefs (even if those beliefs were unwarranted or harmful).  Many atheists, by contrast, hadn’t yet taken the time to build a new foundation for their previously held values and the meaning in their lives; and if they couldn’t rebuild this foundation, then they’d have to change those values, and what they find meaningful, to reflect that shift.

Overall, Nietzsche even changed the game a little in terms of how the conception of God is to be understood in philosophy:

“If God is taken as a metaphysical object whose existence has to be proved, then the position held by scientifically minded philosophers like Russell must inevitably be valid: the existence of such an object can never be empirically proved.  Therefore, God must be a superstition held by primitive and childish minds.  But both these alternative views are abstract, whereas the reality of God is concrete, a thoroughly autonomous presence that takes hold of men but of which, of course, some men are more conscious than others.  Nietzsche’s atheism reveals the true meaning of God-and does so, we might add, more effectively than a good many official forms of theism.”

Even though God may not actually exist as a conscious being, the idea of God, and the feelings associated with what one labels an experience of God, are as real as any other idea or experience, and so they should still be taken seriously even in a world where no gods exist.  As an atheist myself (formerly a born-again Protestant Christian), I’ve come to appreciate the real breadth of possible meaning attached to the term “God”.  Even though the God or gods described in various forms of theism do not track onto objective reality, as was pointed out by Russell, this doesn’t negate the subjective reality underlying theism.  People ascribe the label “God” to a set of feelings and forces that they feel control their lives, their values, and their overall vision of the world.  Nietzsche had his own vision of the world as well, and one could label this his “god”, despite the confusion this may cause when discussing God in metaphysical terms.

2. What Happens In “Zarathustra”; Nietzsche as Moralist

One of the greatest benefits of producing art is our ability to tap into multiple perspectives that we might otherwise never explore.  And by doing this, we stand a better chance of creating a window into our own unconscious:

“[Thus Spoke Zarathustra] was Nietzsche’s poetic work and because of this he could allow the unconscious to take over in it, to break through the restraints imposed elsewhere by the philosophic intellect. [in poetry] One loses all perception of what is imagery and simile-that is to say, the symbol itself supersedes thought, because it is richer in meaning.”

By avoiding the common philosophical trap of submitting oneself to a preconceived structure and template for processing information, poetry and other works of art can inspire new ideas and novel perspectives through the use of metaphor and allegory, emotion, and the seemingly infinite powers of our imagination.  This is true not only for the artist but also for the spectator, who stands in a unique position to interpret what they see and to make sense of it as best they can.  And in the rarest of cases, one may experience something in a work of art that fundamentally changes them in the process.  Either way, when someone is presented with an unusual and ambiguous stimulus that they are left to interpret, they can hope to gain at least some access to their unconscious and to inspire an expression of their creativity.

In his Thus Spoke Zarathustra, Nietzsche was focusing on how human beings could ever hope to regain a feeling of completeness; for modernity had fractured the individual and placed too much emphasis on collective specialization:

“For man, says Schiller, the problem is one of forming individuals.  Modern life has departmentalized, specialized, and thereby fragmented the being of man.  We now face the problem of putting the fragments together into a whole.  In the course of his exposition, Schiller even referred back, as did Nietzsche, to the example of the Greeks, who produced real individuals and not mere learned abstract men like those of the modern age.”

A part of this reassembly process would necessarily involve reincorporating what Nietzsche thought of as the various attributes of our human nature, which many of us are in denial of (to go back to the point raised earlier in this post).  For Nietzsche, this meant taking on some characteristics that may seem inherently immoral:

“Man must incorporate his devil or, as he put it, man must become better and more evil; the tree that would grow taller must send its roots down deeper.”

Nietzsche seems to be saying that if there are aspects of our psychology that are normal albeit seemingly dark or evil, then we ought to structure our moral values along the same lines rather than in opposition to these more primal parts of our psyche:

“(Nietzsche) The whole of traditional morality, he believed, had no grasp of psychological reality and was therefore dangerously one-sided and false.”

And there is some truth to this, though I wouldn’t go as far as Nietzsche does, as I believe there are many basic, cross-cultural moral prescriptions that do in fact take many of our psychological needs into account.  Most cultures have some place for virtues such as compassion, honesty, generosity, forgiveness, commitment, and integrity, among many others; and these virtues have been shown within the fields of moral psychology and sociology to help us flourish and lead psychologically fulfilling lives.  We evolved as a social species that operates within some degree of a dominance hierarchy, and sure enough, our psychological traits are what we’d expect given our evolutionary history.

However, it’s also true that we ought to avoid slipping into some form of the naturalistic fallacy; we don’t want to simply assume that all the behaviors that were more common prior to modernity, let alone prior to civilization, were beneficial to us and our psychology.  For example, raping and stealing were far more common per capita before humans established any kind of civil state or society.  And a number of other behaviors that we’ve discovered to be good or bad for our physical or psychological health have been the result of a long process of trial and error, cultural evolution, and the cultural transmission of newfound ideas from one generation to the next.  In any case, we don’t want to assume that just because something has been a tradition, or even a common instinctual behavior, for many thousands or even hundreds of thousands of years, it is therefore moral.  Quite the contrary; we need to look closely at which behaviors lead to more fulfilling lives and overall maximal satisfaction, and then aim to maximize those behaviors over the ones that detract from this goal.

Nietzsche did raise a good point however, and one that’s supported by modern moral psychology; namely, the fact that what we think we want or need isn’t always in agreement with our actual wants and needs, once we dive below the surface.  Psychology has been an indispensable tool in discovering many of our unconscious desires and the best moral theories are going to be those that take both our conscious and unconscious motivations and traits into account.  Only then can we have a moral theory that is truly going to be sufficiently motivating to follow in the long term as well as most beneficial to us psychologically.  If one is basing their moral theory on wishful thinking rather than facts about their psychology, then they aren’t likely to develop a viable moral theory.  Nietzsche realized this and promoted it in his writings, and this was an important contribution as it bridged human psychology with a number of important topics in philosophy:

“On this point Nietzsche has a perfectly sober and straightforward case against all those idealists, from Plato onward, who have set universal ideas over and above the individual’s psychological needs.  Morality itself is blind to the tangle of its own psychological motives, as Nietzsche showed in one of his most powerful books, The Genealogy of Morals, which traces the source of morality back to the drives of power and resentment.”

I’m sure that drives of power and resentment have played some role in certain moral prescriptions and moral systems, but I doubt this is true generally speaking; and even in cases where a moral prescription or system arose out of one of these unconscious drives, such as resentment, that doesn’t negate its validity, nor does it demonstrate whether it’s warranted or psychologically beneficial.  The source of morality isn’t as important as whether the moral system itself works, though the sources of morality can be both psychologically relevant and revealing.

Our ego certainly has a lot to do with our moral behavior and whether or not we choose to face our inner demons:

“Precisely what is hardest for us to take is the devil as the personification of the pettiest, paltriest, meanest part of our personality…the one that most cruelly deflates one’s egotism.”

If one is able to accept both the brighter and darker sides of themselves, they’ll also be more likely to willingly accept Nietzsche’s idea of the Eternal Return:

“The idea of the Eternal Return thus expresses, as Unamuno has pointed out, Nietzsche’s own aspirations toward eternal and immortal life…For Nietzsche the idea of the Eternal Return becomes the supreme test of courage: If Nietzsche the man must return to life again and again, with the same burden of ill health and suffering, would it not require the greatest affirmation and love of life to say Yes to this absolutely hopeless prospect?”

Although I wouldn’t expect most people to embrace this idea, because it goes against our teleological intuitions of time and causation leading to some final state or goal, it does provide a valuable means of establishing whether someone is really able to accept this life for what it is, with all of its absurdities, pains, and pleasures.  If someone willingly accepts the idea, then by extension they likely accept themselves, their lives, and the rest of human history (and even the history of life on this planet, no less).  This doesn’t mean that by accepting the idea of the Eternal Return people have to put every action, event, or behavior on equal footing, for example by condoning slavery or the death of millions in countless wars just because these events are to be repeated for eternity.  But one is still accepting that all of these events were necessary in some sense, that they couldn’t have been any other way, and this acceptance should translate into an attitude toward life marked by some kind of overarching contentment.

I think that the more common reaction to this kind of idea is slipping into some kind of nihilism, where a person no longer cares about anything, and no longer finds any meaning or purpose in their lives.  And this reaction is perfectly understandable given our teleological intuitions; most people don’t or wouldn’t want to invest time and effort into some goal if they knew that the goal would never be accomplished or knew that it would eventually be undone.  But, oddly enough, even if the Eternal Return isn’t an actual fact of our universe, we still run into the same basic situation within our own lifetimes whereby we know that there are a number of goals that will never be achieved because of our own inevitable death.

There’s also the fact that even if we did achieve a number of goals, once we die, we can no longer reap any benefits from them.  Other people may benefit from our past accomplishments, but eventually they will die as well.  Eventually all life in the universe will expire, and this is bound to happen long before the inevitable heat death of the universe; and once it happens, there won’t be anybody experiencing anything at all, let alone reaping the benefits of a goal from someone’s distant past.  Once this is taken into account, Nietzsche’s idea doesn’t sound all that bad, does it?  If we knew that there would always be time and some amount of conscious experience for eternity, then there would always be some form of immortality for consciousness; and if it’s eternally recurring, then we end up becoming immortal in some sense.

Perhaps the idea of the Eternal Return (or Eternal Recurrence), which predates Nietzsche by many hundreds if not a few thousand years, was adopted by him in part out of his own fear of death.  Even though he thought of the idea as horrifying and paralyzing (not least because of all the undesirable parts of our own personal existence), since he didn’t believe in a traditional afterlife, the fear of death may have motivated him to adopt it, even as he also saw it as a physically plausible consequence of probability and the laws of physics.  Either way, beyond being a far more plausible idea than that of an eternal paradise after death, it’s also a very different one.  One’s memory and identity aren’t preserved, so one never actually experiences immortality when the “temporal loop” defining one’s lifetime starts all over again (the “movie” of one’s life is simply played over and over, unbeknownst to the person in the movie); and the idea involves accepting the world the way it is for all eternity, rather than revolving around the way we wish it were.  The Eternal Return fully engages with the existentialist reality of human life, with all its absurdities, contingencies, and the rest.

3. Power and Nihilism

Nietzsche, like Kierkegaard before him, was considered by many to be an unsystematic thinker, though Barrett points out that this view is mistaken in Nietzsche’s case; it’s a common impression resulting from the largely aphoristic form his writings took.  Nietzsche himself even said that he was viewing science and philosophy through the eyes of art, but his work nevertheless eventually revealed a systematic structure:

“As thinking gradually took over the whole person, everything else in his life being starved out, it was inevitable that this thought should tend to close itself off in a system.”

This systematization of his philosophy was revealed in the notes he was making for The Will to Power, a work he never finished but one that was more or less assembled and published posthumously by his sister Elisabeth.  The systematization centered on the idea that a will to power is the underlying force that ultimately guides our behavior, a striving to achieve the highest possible position in life, as opposed to being fundamentally driven by some biological imperative to survive.  But it’s worth pointing out that Nietzsche didn’t seem to be talking merely about a driving force behind human behavior; he seemed to be referencing an even more fundamental force that in some sense drives all of reality, a more inclusive view analogous to Schopenhauer’s will to live.

It’s interesting to consider this perspective through the lens of biology: if we try to reconcile it with a Darwinian account of evolution in which survival is key, we could surmise that a will to power is but a mechanism for survival.  If we grant Nietzsche’s position, and that of the various anti-Darwinian thinkers who influenced him, that a will to survive (or survival more generally) is somehow secondary to a will to power, then we run into a problem: if survival is a secondary drive or goal, then we’d expect survival to be less important than power; but without survival you can’t have power, and thus it makes far more evolutionary sense that a will to survive is more basic than a will to power.

However, I am willing to grant that as soon as organisms began evolving brains along with the rest of their nervous systems, the will to power (or something analogous to it) became increasingly important; and by the time humans came on the scene with their highly complex brains, the will to power may have become manifest in our neurological drive to better predict our environment over time.  I wrote about this idea in a previous post exploring Nietzsche’s Beyond Good and Evil, where I explained what I saw as a correlation between Nietzsche’s will to power and the idea of the brain as a predictive processor:

“…I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, but with the added caveat that this autonomy serves to fulfill a desire for maximal causal power by harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try and change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try and change ourselves and our view of the world (up to and including changing our desires if we find them to be misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and then we could conceivably regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that I think that our view of the world as well as our actions and desires can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could…equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires). Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if they subjugate the other’s will to power by their own.  And one can also increase the success of their predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing our desires and actions as well.”

On the other hand, if we treat the brain’s predictive activity as a kind of mechanistic description of how the brain works rather than a type of drive per se, and if we treat our underlying drives as something that merely falls under this umbrella of description, then we might want to consider whether or not any of the drives that are typically posited in psychological theory (such as power, sex, or a will to live) are actually more basic than any other or typically dominant over the others.  Barrett suggests an alternative, saying that we ought to look at these elements more holistically:

“What if the human psyche cannot be carved up into compartments and one compartment wedged in under another as being more basic?  What if such dichotomizing really overlooks the organic unity of the human psyche, which is such that a single impulse can be just as much an impulse toward love on the one hand as it is toward power on the other?”

I agree with Barrett at least in the sense that these drives seem to be operating together much of the time, and when they aren’t in unison, they often seem to be operating on the same level at least.  Another way to put this could be to say that as a social species, we’ve evolved a number of different modular behavioral strategies that are typically best suited for particular social circumstances, though some circumstances may be such that multiple strategies will work and thus multiple strategies may be used simultaneously without working against each other.

But I also think that our conscious attention plays a role as well where the primary drive(s) used to produce the subsequent behavior may be affected by thinking about certain things or in a certain way, such as how much you love someone, what you think you can gain from them, etc.  And this would mean that the various underlying drives for our behavior are individually selected or prioritized based on one’s current experiences and where their attention is directed at any particular moment.  It may still be the case that some drives are typically dominant over others, such as a will to power, even if certain circumstances can lead to another drive temporarily taking over, however short-lived that may be.

The will to power, even if it’s not primary, would still help to explain some of the enormous advancements we’ve made in our scientific and technological progress, while also explaining (at least to some degree) the apparent disparity between our standard of living and overall physio-psychological health on the one hand, and our immense power to manipulate the environment on the other:

“Technology in the twentieth century has taken such enormous strides beyond that of the nineteenth that it now bulks larger as an instrument of naked power than as an instrument for human well-being.”

We could fully grasp Barrett’s point here by thinking about the opening scene in Stanley Kubrick’s 2001: A Space Odyssey, where “The Dawn of Man” was essentially made manifest with the discovery of tools, specifically weapons.  Once this seemingly insignificant discovery was made, it completely changed the game for our species.  While it gave us a new means of defending ourselves and an ability to hunt other animals, thus providing us with the requisite surplus of protein and calories needed for brain development and enhanced intelligence, it also catalyzed an arms race.  And with the advent of human civilization, our historical trajectory around the globe became largely dominated by our differential means of destruction; whoever had the biggest stick, the largest army, the best projectiles, and ultimately the most concentrated form of energy, would likely win the battle, forever changing the fate of the parties involved.

Indeed, war has been such a core part of human history that it’s no wonder Nietzsche stumbled upon such an idea; and even if he hadn’t, someone else likely would have, though perhaps not with the same level of creativity and intellectual rigor.  If our history has been so colored with war, which is a physical instantiation of the will to power in order to dominate another group, then we should expect many to see it as a fundamental part of who we are.  Personally, I don’t think we’re fundamentally war-driven creatures; rather, war results from our ability to engage in it combined with a perceived lack of, and desire for, resources, freedom, and stability in our lives.  But many of these goods, freedom in particular, imply a particular level of power over oneself and one’s environment, so even a fight for freedom is in one way or another a kind of will to power as well.

And what are we to make of our quest for power in terms of the big picture?  Theoretically, there is an upper limit to the amount of power any individual or group can possibly attain, and if one gets to a point where they are seeking power for power’s sake, and using their newly acquired power to harness even more power, then what will happen when we eventually hit that ceiling?  If our values become centered around power, and we lose any anchor we once had to provide meaning for our lives, then it seems we would be on a direct path toward nihilism:

“For Nietzsche, the problem of nihilism arose out of the discovery that “God is dead.” “God” here means the historical God of the Christian faith.  But in a wider philosophical sense it means also the whole realm of supersensible reality-Platonic Ideas, the Absolute, or what not-that philosophy has traditionally posited beyond the sensible realm, and in which it has located man’s highest values.  Now that this other, higher, eternal realm is gone, Nietzsche declared, man’s highest values lose their value…The only value Nietzsche can set up to take the place of these highest values that have lost their value for contemporary man is: Power.”

As Barrett explains, Nietzsche seemed to think that power was the only thing that could replace the eternal realm that we valued so much; but of course this does nothing to alleviate the problem of nihilism in the long term since the infinite void still awaits those at the end of the finite road to maximal power.

“If this moment in Western history is but the fateful outcome of the fundamental ways of thought that lie at the very basis of our civilization-and particularly of that way of thought that sunders man from nature, sees nature as a realm of objects to be mastered and conquered, and can therefore end only with the exaltation of the will to power-then we have to find out how this one-sided and ultimately nihilistic emphasis upon the power over things may be corrected.”

I think the answer to this problem lies in a combination of strategies, but with the overarching goal of maximizing life fulfillment.  We need to reprogram our general attitude toward power such that it is truly instrumental rather than perceived as intrinsically valuable; and this means that our basic goal becomes accumulating power such that we can maximize our personal satisfaction and life fulfillment, and nothing more.  Among other things, the power we acquire should be centered around a power over ourselves and our own psychology; finding ways of living and thinking which are likely to include the fostering of a more respectful relationship with the nature that created us in the first place.

And now that we’re knee deep in the Information Age, we will be able to harness the potential of genetic engineering, artificial intelligence, and artificial reality; and within these avenues of research we will be able to change our own biology such that we can quickly adapt our psychology to the ever-changing cultural environment of the modern world, and we’ll be able to free up our time to create any world we want.  We just need to keep the ultimate goal in mind, fulfillment and contentment, and direct our increasing power towards this goal in particular instead of merely chipping away at it in the background as some secondary priority.  If we don’t prioritize it, then we may simply perpetuate and amplify the meaningless sources of distraction in our lives, eventually getting lost in the chaos, and forgetting about our ultimate potential and the imperatives needed to realize it.

I’ll be posting a link here to part 9 of this post-series, exploring Heidegger’s philosophy, once it has been completed.

Looking Beyond Good & Evil

Nietzsche’s Beyond Good and Evil serves as a thorough overview of his philosophy and is definitely one of his better works (although I think Thus Spoke Zarathustra is much more fun to read).  It’s worth reading (if you haven’t already) to put a fresh perspective on many widely held philosophical assumptions that are often taken for granted.  While I’m not interested in all of the claims he makes (and he makes many, in nearly 300 aphorisms spread over nine chapters), there are at least several ideas that I think are worth analyzing.

One theme presented throughout the book is Nietzsche’s disdain for classical conceptions of truth and any form of absolutism or dogmatism.  With regard to truth or his study on the nature of truth (what we could call his alethiology), he subscribes to a view that he coined as perspectivism.  For Nietzsche, there are no such things as absolute truths but rather there are only different perspectives about reality and our understanding of it.  So he insists that we shouldn’t get stuck in the mud of dogmatism, and should instead try to view what is or isn’t true with an open mind and from as many points of view as possible.

Nietzsche’s view here is in part fueled by his belief that the universe is in a state of constant change, as well as his belief in the fixity of language.  Since the universe is in a state of constant change, language generally fails to capture this dynamic essence.  And since philosophy is inherently connected to the use of language, false inferences and dogmatic conclusions will often manifest from it.  Wittgenstein belonged to a similar school of thought (likely building off of Nietzsche’s contributions) where he claimed that most of the problems in philosophy had to do with the limitations of language, and therefore, that those philosophical problems could only be solved (if at all) through an in-depth evaluation of the properties of language and how they relate to its use.

This school of thought certainly has merit, given that language has a relatively fixed syntactic structure yet a dynamic, probabilistic semantic structure.  We need language to communicate our thoughts to one another, and this requires some kind of consistency or stability in its syntactic structure.  But the meaning behind the words we use is established through use, through context, and ultimately through associations between probabilistic conceptual structures and relatively stable or fixed visual and audible symbols (written or spoken words).  Since the semantic structure of language has fuzzy boundaries, yet is attached to relatively fixed words and grammar, it produces the illusion of a reality that is essentially unchanging.  And this results in the kinds of philosophical problems and dogmatic thinking that Nietzsche warns us of.
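To make this asymmetry between fixed symbols and fluid meaning a bit more concrete, here’s a minimal toy sketch in Python (my own illustration, not anything from Nietzsche, Wittgenstein, or Barrett; the tiny corpus, the `context_profile` helper, and the window size are all made up for the example).  In the spirit of usage-based or distributional accounts of meaning, it treats a word’s “meaning” crudely as a profile of the company it keeps: the written symbol “bank” never changes, but its profile shifts entirely with the contexts of its use.

```python
# Toy sketch: fixed symbols, usage-dependent meaning (illustrative only).
from collections import Counter

sentences = [
    "the river bank was muddy after the rain",
    "she walked along the bank of the river",
    "the bank approved the loan yesterday",
    "he deposited money at the bank downtown",
]

def context_profile(word, corpus, window=3):
    """Count which words co-occur with `word` within a small window."""
    profile = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            if token == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                profile.update(t for t in tokens[lo:hi] if t != word)
    return profile

# The same fixed symbol "bank" acquires two very different profiles
# depending on how it is used.
river_profile = context_profile("bank", sentences[:2])
money_profile = context_profile("bank", sentences[2:])

print("riverside uses:", river_profile.most_common(4))
print("financial uses:", money_profile.most_common(4))
```

The point of the sketch is only that the stable written form gives no hint of this underlying drift; the “meaning” is a moving statistical target, which is roughly the illusion of fixity being described above.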

It’s hard to disagree with Nietzsche’s view that dogmatism is to be avoided at all costs, and that absolute truths are generally specious at best (let alone dangerous), and philosophy owes a lot to Nietzsche for pointing out the need to reject this kind of thinking.  Nietzsche’s rejection of absolutism and dogmatism is made especially clear in his views on the common conceptions of God and morality.  He points out how these concepts have changed a lot over the centuries (where, for example, the meaning of good has undergone a complete reversal throughout its history), and this is despite the fact that throughout that time, the proponents of those particular views of God or morality believe that these concepts have never changed and will never change.

Nietzsche believes that all change is ultimately driven by a will to power, where this will is a sort of instinct for autonomy that also consists of a desire to impose one’s will onto others.  So the meanings of these concepts (such as God or morality), and countless others, have only changed because they’ve been re-appropriated by different or conflicting wills to power.  As such, he thinks that the meaning and interpretation of a concept illustrate the attributes of the particular will making use of it, rather than some absolute truth about reality.  I think this idea makes a lot of sense if we regard the will to power as encompassing not only the desires belonging to any individual (most especially their desire for autonomy), but also the inferences they’ve made about reality, its apparent causal relations, and so on, which provide the content for those desires.  So any will to power is effectively colored by the world view held by the individual, and this world view, or set of inferred causal relations, includes one’s inferences pertaining to language and any meaning ascribed to the words and concepts that one uses.

Even more interesting to me is the distinction Nietzsche makes between an unrefined version of the will to power and a refined one.  The unrefined form takes the desire for autonomy and directs it outward (perhaps more instinctually) in order to dominate the will of others.  The refined version, on the other hand, takes this desire for autonomy and directs it inward toward oneself, manifesting as a kind of cruelty that drives a person to constantly struggle to make themselves stronger and more independent, and to develop a deeper character and perhaps even a deeper understanding of themselves and their view of the world.  Both manifestations seem to aim at maximizing one’s power over as much as possible, but Nietzsche regards the refined version as superior and ultimately a more effective form of power.

We can better illustrate this view by considering a person who tries to dominate others to gain power in part because they lack the ability to gain power over their own autonomy, and then compare this to a person who gains control over their own desires and autonomy and therefore doesn’t need to compensate for any inadequacy by dominating others.  A person who feels a need to dominate others is in effect dependent on those subordinates (and dependence implies a certain lack of power), but a person who increases their power over themselves gains more independence and thus a form of freedom that is otherwise not possible.

I like this internal/external distinction that Nietzsche makes, but I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, with the added caveat that this autonomy serves a desire for maximal causal power: harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try to change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try to change ourselves and our view of the world (up to and including changing our desires if we find them misdirecting us away from our greatest goal).  We may even change our desires so that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and thereby regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that our view of the world, as well as our actions and desires, can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, we could say that our will to power depends on our predictions of the world and its many apparent causal relations.  We could equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires).  Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if one subjugates the other’s will to power to one’s own.  And one can also increase the success of one’s predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing one’s desires and actions as well.
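To make the idea of “maximal autonomy as maximal predictive success” a bit more concrete, here is a minimal toy sketch in Python (my own illustration under assumed numbers, not anything drawn from the Predictive Processing literature or from this essay): an agent improves its predictive success simply by nudging its prediction toward what it observes, shrinking its prediction error over time.

```python
# Toy illustration (hypothetical values): an agent reduces its prediction error
# by stepping its prediction toward what it actually observes.

def update_prediction(prediction, observation, learning_rate=0.1):
    """Shrink the prediction error by moving the prediction toward the observation."""
    error = observation - prediction
    return prediction + learning_rate * error

true_value = 5.0   # some recurring feature of the environment (assumed)
prediction = 0.0   # the agent's initial, poorly informed prediction

for _ in range(50):
    prediction = update_prediction(prediction, true_value)

print(round(prediction, 2))  # ~4.97: the prediction has converged toward the world
```

The only point of the sketch is that “increasing predictive success” can be cashed out as an error-reduction process, which is roughly the sense in which a more adaptable set of predictions amounts to a more potent will to power.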

Another thing to consider about a desire for autonomy is that it is better formulated if it includes whatever is required for its own sustainability.  Dominating other wills to power will often serve to promote a sustainable autonomy for the dominator, because those other wills are then less likely to become dominators themselves and reverse the direction of dominance, and this preserves the autonomy of the dominating will to power.  This shows how this particular external expression of a will to power could be naturally selected for (under certain circumstances at least), something Nietzsche himself argued (though in an anti-Darwinian form, since the unit of selection here is not genes but behaviors).  This type of behavioral selection would explain its prevalence in the animal kingdom, including in a number of primate species aside from human beings.  I do think, however, that we’ve found many ways of overcoming the need or impulse to dominate, and this has a lot to do with our having adopted social contract theory, which in my view provides a way of maximizing the average will to power for all parties involved.

Coming back to Nietzsche’s take on language, truth, and dogmatism, we can also see that an increasingly potent will to power is more easily achievable if it is able to formulate and test new tentative predictions about the world, rather than being locked into some fixed set of predictions (which is dogmatism at its core).  Being able to adapt one’s predictions is equivalent to considering and adopting a new point of view, a capability Nietzsche described as inherent in any free spirit.  It also amounts to being able to more easily free ourselves from the shackles of language, just as Nietzsche advocated, since new points of view affect the meaning we ascribe to words and concepts.  I would add that new points of view can also increase our chances of making the more successful predictions that constitute our understanding of the world (and ourselves), because we can test them against our previous world view and see whether they lead to more or less error, better parsimony, and so on.

Nietzsche’s hope was that one day all philosophy would be flexible enough to overcome its dogmatic prejudices, its claims of absolute truths, including those revolving around morality and concepts like good and evil.  He sees these concepts as nothing more than superficial expressions of one particular will to power or another, and thus he wants philosophy to eventually move itself beyond good and evil.  Personally, I am a proponent of an egoistic goal theory of morality, which grounds all morality on what maximizes the satisfaction and life fulfillment of the individual (which includes cultivating virtues such as compassion, honesty, and reasonableness), and so I believe that good and evil, when properly defined, are more than simply superficial expressions.

But I agree with Nietzsche in part, because I think these concepts have been poorly defined and misused such that they appear to have no inherent meaning.  And I also agree with Nietzsche in rejecting moral absolutism and its associated dogma (as found in religion most especially), because I believe morality to be dynamic in various ways, contingent on the specific situations we find ourselves in and on the ever-changing idiosyncrasies of our human psychology.  Human beings often find themselves in completely novel situations, and cultural evolution is making this happen more and more frequently; our species is also changing through biological evolution.  So even though I agree that moral facts exist (with an objective foundation), and that a subset of these facts are likely to be universal due to the overlap between our biology and psychology, I do not believe that any of these moral facts are absolute, because there are far too many dynamic variables for an absolute morality to work even in principle.  Nietzsche was definitely on the right track here.

Putting this all together, I’d like to think that our will to power has an underlying impetus, namely a drive for maximal satisfaction and life fulfillment.  If this is true, then our drive for maximal autonomy and control over our experience and understanding of the world serves whatever we believe will maximize that overall satisfaction and fulfillment.  However, when people are irrational, dogmatic, and/or ill-informed on the relevant facts (and this happens a lot!), they are more likely to develop an unrefined will to power that is less conducive to achieving that goal, where dominating the wills of others and any number of other immoral behaviors overtake the character of the individual.  Our best chance of finding a fulfilling path in life (while remaining intellectually honest) will require an open mind, a willingness to criticize our own beliefs and assumptions, and a concerted effort to overcome our own limitations.  Nietzsche’s philosophy (much of it at least) serves as a powerful reminder of this admirable goal.

CGI, Movies and Truth…

Watching Rogue One: A Star Wars Story, which I liked, though not nearly as much as the original trilogy (Episodes IV, V, and VI), got me thinking again about something I hadn’t thought about since the most recent presidential election.  As I watched Grand Moff Tarkin and Princess Leia, both characters made possible in large part by CGI (as well as the help of actors Ingvild Deila and Guy Henry), I realized that although this is still a long way off, it is inevitable (barring a nuclear world war or some other catastrophe that kills us all or sets us back to a pre-industrial age) that the pace of this technology will eventually lead to CGI products that are completely indistinguishable from reality.

This means that the “fake news” issue that many have been making a lot of noise about lately will one day take a new and ugly turn for the worse.  Not only is video and graphics technology advancing rapidly enough to exacerbate this problem, but similar concerns are arising from voice-editing software.  By gathering just a few seconds of sample audio from a person of interest, various forms of software are getting better and better at synthesizing that person’s speech in order to mimic them, putting whatever words into “their” mouths that one desires.

The irony is that even as we continue to become more globally interconnected, more technologically advanced, and more knowledgeable as a species, each individual may become less and less able to know what is true and what isn’t across all the places they are electronically connected to.  One reason for this, as per the opening reference to Rogue One, is that it will become increasingly difficult to judge the veracity of videos that go viral on the internet and/or through news outlets.  We can imagine a video (or a series of videos) released on the news and throughout the internet depicting shocking events involving real world leaders or other famous people and places, events that could start a civil or world war, alter one’s vote, and so on, with the caveat that these events were entirely manufactured by some Machiavellian warmonger or power-seeking elite.

Pragmatically speaking, we must still live our lives trusting what we see in proportion to the evidence we have, thus believing ordinary claims with a higher degree of confidence than extraordinary ones.  We will still need to hold to the general rule that extraordinary claims require extraordinary evidence in order to meet their burden of proof.  But it will become more difficult to trust certain forms of evidence (including in a court of law), so we’ll have to take that into consideration: once CGI can pass some kind of graphics Turing Test, actions whose consequences would be catastrophic if our assumptions or information turned out to be based on false evidence should require a higher burden of proof.

This is by no means an endorsement of conspiracy theories generally, nor of any other anti-intellectual or dogmatic nonsense.  While we don’t want people to start doubting everything they see, nor doubting everything they don’t WANT to see (which would be a proverbial buffet for our cognitive biases and for the conspiracy theorists who exploit these epistemological flaws regularly), we still need to take this evolving technological factor into account in order to maintain a world view based on proper Bayesian reasoning.
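To give a rough sense of what such Bayesian reasoning might look like in practice, here is a toy calculation (the specific numbers are purely hypothetical assumptions of mine): how confident should we be that a shocking viral video is authentic, given a prior estimate of how often such videos are fabricated and an estimate of how convincingly forgeries can mimic real footage?

```python
# Toy Bayesian update (hypothetical numbers): how much does a convincing-looking
# video raise our confidence that the depicted event actually happened?

def posterior_authentic(prior_authentic, p_convincing_if_authentic, p_convincing_if_fake):
    """Bayes' rule: P(authentic | the video looks convincing)."""
    prior_fake = 1.0 - prior_authentic
    joint_authentic = p_convincing_if_authentic * prior_authentic
    evidence = joint_authentic + p_convincing_if_fake * prior_fake
    return joint_authentic / evidence

# Before CGI passes a "graphics Turing Test": forgeries rarely look flawless.
print(posterior_authentic(0.5, 0.99, 0.05))  # ~0.95: the footage is strong evidence

# After: forgeries look exactly as convincing as authentic footage.
print(posterior_authentic(0.5, 0.99, 0.99))  # 0.5: the footage tells us almost nothing
```

The takeaway is that once fabricated footage becomes indistinguishable from the real thing, the footage itself stops doing much evidential work, and our conclusions have to lean far more heavily on prior plausibility and independent corroboration.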

On the brighter side, we are going to enjoy much of what these new CGI capabilities will bring us, because movies and visual entertainment generally are going to be changed forever, in many ways worth celebrating, including our use of virtual reality in the many forms we do, and will continue to, consciously and rationally desire.  We just need to pay attention and exercise careful moral deliberation as we develop these technologies.  Our brains simply didn’t evolve to easily avoid being duped by artificial realities like the ones we’re developing (we already get duped far too often within our actual reality), so we need to engineer our path forward in a way that better safeguards us from our own cognitive biases, so that we can maximize our well-being once this genie is out of the bottle.

Virtual Reality & Its Moral Implications

There’s a lot to be said about virtual reality (VR) in terms of our current technological capabilities, our likely prospects for future advancements, and the vast amount of utility that we gain from it.  But, as it is with all other innovations, with great power comes great responsibility.

While there are several types of VR interfaces on the market, used for video gaming or various forms of life simulation, they do have at least one commonality, namely the explicit goal of attempting to convince the user that what they are experiencing is in fact real in at least some sense.  This raises a number of ethical concerns.  While we can’t deny the fact that even reading books and watching movies influences our behavior to some degree, VR is bound to influence our behavior much more readily because of the sheer richness of the qualia and the brain’s inability to distinguish significant differences between a virtual reality and our natural one.  Since the behavioral conditioning schema that our brain employs has evolved to be well adapted to our natural reality, any virtual variety that increasingly approximates it is bound to increasingly affect our behavior.  So we need to be concerned with VR in terms of how it can affect our beliefs, our biases, and our moral inclinations and other behaviors.

One concern with VR is the desensitization to, or normalization of, violence and other undesirable or immoral behaviors.  Many video games have been criticized over the years for this very reason, with the claim that they promote similar behaviors in their users (most especially younger users with more impressionable minds).  These claims have received significant support from the American Psychological Association and the American Academy of Pediatrics, both of which have taken firm stances against children and teens playing violent video games, citing accumulated research and meta-analyses linking violent video gaming to increased aggression and anti-social behavior and to decreases in moral engagement and empathy.

Thus, the increasingly realistic nature of VR, and the steady increase in the capacities one has at their disposal within such a virtual space, are bound to exacerbate these types of problems.  If people are able to simulate rape or pedophilia, among other morally reprehensible actions and social taboos, will they become more susceptible to actually engaging in those behaviors once they leave the virtual space and re-enter the real world?  Even if they don’t become more likely to perform those behaviors, what does such a virtual escapade do to that person’s moral character?  Are they more likely to condone those behaviors (even if they don’t participate in them directly), or to condone other behaviors that have some kind of moral relevance or cognitive overlap with them?

On the flip side, what if it were possible to use VR as a therapeutic tool to help treat pedophilia or other behavioral problems?  What if someone were able to simulate rape, pedophilia, or the like in order to reduce their chances of performing those acts in the real world?  Hardly anyone would argue that a virtual rape or molestation is anywhere near as abhorrent or consequential as a real instance of such a crime, a horrific act committed against a real human being.  While this may only apply to a small number of people, it is at least plausible that such a therapeutic use would make the world a better place if it prevented an actual rape or other crime from taking place.  If certain people have hard-wired impulses that would normally ruin their lives or the lives of others if left unchecked, then it would be prudent, if not morally obligatory, to do what we can to prevent such harms.  So even though this technology could lead an otherwise healthy user toward bad behaviors, it could also serve as an outlet of expression for those already afflicted with such impulses.  Just as VR has been used to help treat anxiety disorders, phobias, PTSD, and other pathologies, by exposing people to various stimuli that help them overcome their ills, so too might it provide treatment for other types of mental illness and aggressive predispositions, such as those related to murder or sexual assault.

Whether VR is used as an outlet for certain behaviors to prevent them from happening in the real world, or as a means of curing a person of those immoral inclinations (where the long-term goal is to eventually no longer need any VR treatment at all), there are a few paths that could show promising results in decreasing crime and the like.  But beyond therapeutic uses, we need to be careful about how these technologies are used generally and how that usage will increasingly affect our moral inclinations.

If society chose to implement some kind of prohibition limiting what people can do in these virtual spaces, that might be of some use, but beyond the fact that such a prohibition would likely be difficult to enforce, it would also be a selective prohibition that may not be justified.  If one chose to prohibit simulated rape and pedophilia (for example), but not murder or other forms of assault and violence, what would justify such a selective prohibition?  We can’t simply rely on an intuition that the former simulated behaviors are somehow more repugnant than the latter (and besides, many would say that murder is just as bad if not worse).  It seems instead that we need to better assess the consequences of each type of simulated behavior on our behavioral conditioning, to see whether at least some simulated activities should be prohibited while others are allowed to persist unregulated.  On the other hand, if prohibiting this kind of activity is not practical, or if it can only be implemented by infringing on certain liberties that we have good reasons to protect, then we need to think about counter-strategies: better informing people about these kinds of dangers and/or making other VR products that help encourage the right kinds of behaviors.

I can’t think of a more valuable use for VR than helping us cultivate moral virtues and other behaviors conducive to our well-being and to living a more fulfilled life.  This could range from reducing our prejudices and biases through exposure to various simulated “out-groups”, to modifying our moral character in more profound ways through artificial realities that encourage the user to help others in need and to develop habits and inclinations that are morally praiseworthy.  We can even use this technology (and already have, to some degree) to work out various moral dilemmas and our psychological responses to them without anybody actually dying or getting physically hurt.  Overall, VR certainly holds a lot of promise, but it also poses a lot of psychological danger, thus making it incumbent upon us to talk more about these technologies as they continue to develop.