“Meta-phorin”

This is a poem I wrote about how the brain structures its own neural connectivity in order to produce metaphors, poetry, analogies, allegory, and the like, including through its use of semaphorin guidance molecules (which inspired the title). So one can think of it as a type of meta-poetry, I suppose.

“Meta-phorin”

Branches born from distant gardens
Fed by the fruits of senses streamed
Spreading out, a vibrant pattern
Crawling along those ancient trees
Toward the scents, hypnotic dance
Winding paths until they meet

Their tips begin to touch at last
Caressing as they’re intertwined
Hebbian journey, webs of gnosis
Embodied frames are now sublime

Synaptic waters flowing faster
Emotions growing, bearing passion
Creative means no longer foreign
By the meta-semaphorin


“The Bounds of Subjectivity”

The world as it really is
Unknowable, despite conviction
We hope for objectivity
And yet we see prediction

Perception is the mind’s best guess, to make some sense of all the mess
Expectations frame the lens, ontology, each one depends

Controlled hallucination
To see what we want to see
Perception is not a mirror
But the bounds of subjectivity

Our minds are but a model, sensations flowing, open throttle
A story written, a narrative, to which high credence we will give

Grounded on abstraction
Toward emotion reason shouts
Embodied filters blinding us
Wishful thinking wins the bout

Behold the power of intuition, as certainty comes to fruition
Housed in our unconscious mind, yet fallible we ought to find

Beliefs entangled with desire
Can I hold this view of mine?
Search for reasons to confirm
True or not, we all assign

Not all views have equal merit, unable to know, unless we share it
Reducing that complexity, except when over-vexed we’ll be

Worlds are shaped by what we want
In how we act and how we view
Rigid ways take hold of us
The old interprets all the new

Emotions are the reason’s master, ignoring this will bring disaster
Hume was right about the passions, finding reasons is our fashion

When evidence begins to mount
Against a highly prized belief
Minds can change, a last resort
From dissonance, we seek relief

Ignore the proof, for it can’t be! Or change your views for harmony
Highlight all coincidence, though it lacks significance

Must I believe what’s likely true?
Not if I can find a way!
A means to cover my own eyes
Truth be damned, emotions stray

Coincidence we seem to find, memories, tricks of the mind
‘Tis the frequency illusion, we’re falling prey to this delusion

The path of least resistance
Always tempting to the end
Sacrificing truth for self
How far the mind can bend

A marvel of our evolution, the ego fights its dissolution
Fallacies run far and wide, despite the logic by our side

Remember what you must
Your world is up to you
Conveniently forget the rest
And false becomes the true

Few will try to face the truth
Combat the bias, critique the “I”
Only the bravest make attempts
By far the most would rather die

By far the most would rather lie
To themselves, to everyone
Confirmation biases
From human nature, we try to run

You can run but you can’t hide!
Our biases remain
But evidence has verified
There’s knowledge we can gain

“Black Mirror” Reflections: U.S.S. Callister (S4, E1)

This Black Mirror reflection will explore season 4, episode 1, which is titled “U.S.S. Callister”.  You can click here to read my last Black Mirror reflection (season 3, episode 2: “Playtest”).  In U.S.S. Callister, we’re pulled into the life of Robert Daly (Jesse Plemons), the Chief Technical Officer at Callister Inc., a game development company that has produced a multiplayer simulated-reality game called Infinity.  Within this game, users control a starship (an obvious homage to Star Trek), although Daly, the brilliant programmer behind this revolutionary game, has his own offline version of the game, modded to look like his favorite TV show, Space Fleet, in which Daly is the Captain of the ship.

We quickly learn that most of the employees at Callister Inc. don’t treat Daly very kindly, including his company’s co-founder James Walton (Jimmi Simpson).  Daly appears to be an overly passive, shy introvert.  During one of Daly’s offline gaming sessions at home, we come to find out that the Space Fleet characters aboard the starship look just like his fellow employees, and as Captain of his simulated crew, he indulges in berating them all.  Because Daly is Captain and effectively controls the game, he is rendered nearly omnipotent, able to force his crew to perpetually bend to his will, lest they suffer immensely.

It turns out that these characters in the game are actually conscious, created from Daly having surreptitiously acquired his co-workers’ DNA and somehow replicated their consciousness and memories and uploaded them into the game (and any biologists or neurologists out there, let’s just forget for the moment that DNA isn’t complex enough to store this kind of neurological information).  At some point, a wrench is thrown into Daly’s deviant exploits when a new co-worker, programmer Nanette Cole (Cristin Milioti), is added to his game and manages to turn the crew against him.  Once the crew finds a backdoor means of communicating with the world outside the game, Daly’s world is turned upside down, with a relatively satisfying ending chock-full of poetic justice: his digitized, enslaved crew members manage to escape while he becomes trapped inside his own game as it’s being shut down and destroyed.


This episode is rife with moral issues that build on one another, all deeply coupled with the ability to engineer a simulated reality (perceptual augmentation).  Virtual worlds carry a level of freedom that just isn’t possible in the real world: one can behave in countless ways with little or no consequence, whether acting with beneficence or utter malice.  One can violate physical laws as well as prescriptive laws, opening up a new world of possibilities that are free to evolve without the feedback of social norms, legal statutes, and law enforcement.

People have long known about various ways of escaping social and legal norms through fiction and game playing, where one can imagine they are somebody else, living in a different time and place, and behave in ways they’d never even think of doing in the real world.

But what happens when they’re finished with the game and go back to the real world with all its consequences, social norms, and expectations?  Doesn’t it seem likely that at least some of the behaviors cultivated in the virtual world will begin to rear their ugly heads in the real world?  One can plausibly argue that violent game playing is simply a form of psychological sublimation, where we release many of our irrational and violent impulses in a way that’s more or less socially acceptable.  But there’s also bound to be a difference between playing a very abstract game involving violence or murder, such as the classic board game Clue, and playing a virtual reality game where your perceptions are as realistic as can be and you choose to murder some other character in cold blood.

Clearly in this episode, Daly was using the simulated reality as a means of releasing his anger and frustration, by taking it out on reproductions of his own co-workers.  And while a simulated experiential alternative could be healthy in some cases, in terms of its therapeutic benefit and better control over the consequences of the simulated interaction, we can see that Daly took advantage of his creative freedom, and wielded it to effectively fashion a parallel universe where he was free to become a psychopath.

It would already be troubling enough if Daly behaved as he did toward virtual characters that were not actually conscious, because it would still show Daly pretending that they are conscious: a man who wants them to be conscious.  But the fact that Daly knows they are conscious makes him that much more sadistic.  He is effectively in the position of a god, given his powers over the simulated world and every conscious being trapped within it, and he has used these powers to generate a living hell (thereby also illustrating the technology’s potential to, perhaps one day, generate a living heaven).  But unlike the hell we hear about in myths and religious fables, this is an actual hell, where a person can suffer the worst fates imaginable (limited, in fact, only by the programmer’s imagination), such as experiencing the feeling of suffocation, yet being unable to die and thus with no end in sight.  And since time is relative, in this case based on the ratio of real time to simulated time (or the ratio between one simulated time and another), a character consciously suffering in the game could feel as if they’ve been suffering for months, when the god-like player has only felt several seconds pass.  We’ve never had to morally evaluate these kinds of situations before, and we’re getting to a point where it’ll be imperative for us to do so.
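
To make the time-ratio point concrete, here’s a minimal sketch in Python; the million-to-one speed-up is an arbitrary assumption of mine (the episode never specifies a ratio):

```python
# Toy illustration of subjective vs. real time in a simulation.
# The ratio below is an arbitrary assumption, not taken from the episode.
real_seconds = 5              # time felt by the god-like player outside the game
time_ratio = 1_000_000        # assumed simulated seconds elapsing per real second

subjective_seconds = real_seconds * time_ratio
subjective_months = subjective_seconds / (60 * 60 * 24 * 30)  # 30-day months

print(f"{real_seconds} real seconds -> {subjective_months:.1f} subjective months")
# Output: 5 real seconds -> 1.9 subjective months
```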

Someday, it’s very likely that we’ll be able to create an artificial form of intelligence that is conscious, and it’s up to us to initiate and maintain a public conversation that addresses how our ethical and moral systems will need to accommodate new forms of conscious moral agents.

Image: Jude Law, Haley Joel Osment, Brendan Gleeson, and Brian Turk in Artificial Intelligence: AI (2001)

We’ll also need to figure out how to incorporate a potentially superhuman level of consciousness into our moral frameworks, since these frameworks often have an internal hierarchy that is largely based on the degree or level of consciousness that we ascribe to other individuals and to other animals.  If we give moral preference to a dog over a worm, and preference to a human over a dog (for example), then where would a being with superhuman consciousness fit within that framework?  Most people certainly wouldn’t want these beings to be treated as gods, but we shouldn’t want to treat them like slaves either.  If nothing else, they’ll need to be treated like people.

Technologies will almost always have that dual potential, where they can be used to achieve truly admirable goals and enhance human well-being, or to dominate others and exacerbate human suffering.  So we need to ask ourselves, given a future world where we have the capacity to make any simulated reality we desire, what kind of world do we want?  What kind of world should we want?  And what kind of person do you want to be in that world?  Answering these questions should say a lot about our moral qualities, serving as a kind of window into the soul of each and every one of us.

“Black Mirror” Reflections: Playtest (S3, E2)


Black Mirror, the British science-fiction anthology series created by Charlie Brooker, does a pretty good job experimenting with a number of illuminating concepts that highlight how modern culture is becoming increasingly shaped by (and vulnerable to) various technological advances and changes.  I’ve been interested in a number of these concepts for many years now, not least because of the many important philosophical implications (including a number of moral issues) that they point to.  I’ve decided to start a blog post series that will explore the contents of these episodes.  My intention with this series, which I’m calling “Black Mirror” Reflections, will be to highlight some of my own takeaways from some of my favorite episodes.

I’d like to begin with season 3, episode 2, “Playtest”.  I’m just going to give a brief summary here, as you can read the full episode summary in the link provided above.  In this episode, Cooper (Wyatt Russell) decides to travel around the world, presumably as a means of dealing with the recent death of his father, who died from early-onset Alzheimer’s.  After finding his way to London, his last destination before planning to return home to America, he runs into a problem with his credit card and bank account, where he can’t access the money he needs to buy his plane ticket.

While he waits for his bank to fix the problem with his account, Cooper decides to earn some cash using an “Oddjobs” app, which provides him with a number of short-term job listings in the area, eventually leading him to “SaitoGemu,” a video game company looking for game testers to try a new kind of personalized horror game involving a (seemingly) minimally invasive brain implant procedure.  He briefly hesitates but, desperate for money and reasonably confident in its safety, he eventually consents to the procedure whereby the implant is intended to wire itself into the gamer’s brain, resulting in a form of perceptual augmentation and a semi-illusory reality.


The implant is designed to (among other things) scan your memories and learn what your worst fears are, in order to integrate these into the augmented perceptions, producing a truly individualized and maximally frightening horror game experience.  Needless to say, at some point Cooper begins to lose track of what’s real and what’s illusory, and due to a malfunction, he’s unable to exit the game; he ends up going mad and eventually dying as a result of the implant unpredictably overtaking (and effectively frying) his brain.


There are a lot of interesting conceptual threads in this story, and the idea of perceptual augmentation is a particularly interesting theme that finds its way into a number of other Black Mirror episodes.  While holographic and VR-headset gaming technologies can produce their own form of augmented reality, perceptual augmentation carried out on a neurological level isn’t even in the same ballpark, having qualitative features that are far more advanced and more akin to those found in the Wachowskis’ The Matrix trilogy or Paul Verhoeven’s Total Recall.  Once the user is unable to distinguish between the virtual world and the external reality, with the simulator having effectively passed a kind of graphical version of the Turing Test, then one’s previous notion of reality is effectively shattered.  To me, this technological idea is one of the most awe-inspiring (yet sobering) ideas within the Black Mirror series.

The inability to discriminate between the two worlds means that both worlds are, for all practical purposes, equally “real” to the person experiencing them.  And the fact that one can’t tell the difference between such worlds ought to give a person pause to re-evaluate what it even means for something to be real.  If you doubt this, then just imagine if you were to find out one day that your entire life has really been the result of a computer simulation, created and fabricated by some superior intelligence living in a layer of reality above your own (we might even think of this being as a “god”).  Would this realization suddenly make your remembered experiences imaginary and meaningless?  Or would your experiences remain just as “real” as they’ve always been, even if they now have to be reinterpreted within a context that grounds them in another layer of reality?

To answer this question honestly, we ought to first realize that we’re likely fully confident that what we’re experiencing right now is reality, is real, is authentic, and is meaningful (just as Cooper was at some point in his gaming “adventure”).  And this seems to be at least a partial basis for how we define what is real, and how we differentiate the most vivid and qualitatively rich experiences from those we might call imaginary, illusory, or superficial.  If what we call reality is really just a set of perceptions, encompassing every conscious experience from the merely quotidian to those we deem to be extraordinary, would we really be justified in dismissing all of these experiences and their value to us if we were to discover that there’s a higher layer of reality, residing above the only reality we’ve ever known?

For millennia, humans have pondered over whether or not the world is “really” the way we see it, with perhaps the most rigorous examination of this metaphysical question undertaken by the German philosopher Immanuel Kant, with his dichotomy of the phenomenon and the noumenon (i.e. the way we see the world or something in the world versus the way the world or thing “really is” in itself, independent of our perception).  Even if we assume the noumenon exists, we can never know anything about the thing in itself, by our being fundamentally limited by our own perceptual categories and the way we subjectively interpret the world.  Similarly, we can never know for certain whether or not we’re in a simulation.

Looking at the situation through this lens, we can then liken the question of how to (re)define reality within the context of a simulation with the question of how to (re)define reality within the context of a world as it really is in itself, independent of our perception.  Both the possibility of our being in a simulation and the possibility of our perceptions stemming from an unknowable noumenal world could be true (and would likely be unfalsifiable), and yet we still manage to use and maintain a relatively robust conception and understanding of reality.  This leads me to conclude that reality is ultimately defined by pragmatic considerations (mostly those pertaining to our ability to make successful predictions and achieve our goals), and thus the possibility of our one day learning about a new, higher level of reality should merely add to our existing conception of reality, rather than completely negating it, even if it turns out to be incomplete.

Another interesting concept in this episode involves the basic design of the individualized horror game itself, where a computer can read your thoughts and memories, and then surmise what your worst fears are.  This is a technological feat that is possible in principle, and one with far-reaching implications that concern our privacy, safety, and autonomy.  Just imagine if such a power were unleashed by corporations, or the governments owned by those corporations, to acquire whatever information they wanted from your mind, to find out how to most easily manipulate you in terms of what you buy, who you vote for, what you generally care about, or what you do any and every day of your life.  The Orwellian possibilities are endless.

Marketing firms (both corporatocratic and political) have already been making use of discoveries in psychology and neuroscience, finding new ways to more thoroughly exploit our cognitive biases to get us to believe and desire whatever will benefit them most.  Add to this the future ability to read our thoughts and manipulate our perceptions (even if it’s first implemented as a seemingly innocuous video game), and we’ll have established a new means of mass surveillance, where we can each become a potential “camera” watching one another (a theme also highlighted in BM, S4E3: Crocodile), while simultaneously exposing our most private thoughts and transmitting them to some centralized database.  Once we reach these technological heights (it’s not a matter of if but when), depending on how they’re employed, we may find ourselves no longer having the freedom to lie or to keep a secret, nor the freedom of having any mental privacy whatsoever.

To be fair, we should realize that there are likely to be undeniable benefits in our acquiring these capacities (perceptual augmentation and mind-reading), such as making virtual paradises with minimal resources; finding new ways of treating phobias, PTSD, and other pathologies; giving us the power of telepathy and superhuman intelligence by connecting our brains to the cloud; and giving us the ability to design error-proof lie detectors and other vast enhancements in maximizing personal security and reducing crime.  But there are also likely to be enormous losses in personal autonomy, as our available “choices” are increasingly produced and constrained by automated algorithms; there are likely to be losses in privacy, and increasing difficulties in ascertaining what is true and what isn’t, since our minds will be vulnerable to artificially generated perceptions created by entities and institutions that want to deceive us.

Although we’ll constantly need to be on the lookout for these kinds of potential dangers as they arise, in the end, we may find ourselves inadvertently allowing these technologies to creep into our lives, one consumer product at a time.

Technology, Mass-Culture, and the Prospects of Human Liberation

Cultural evolution is arguably just as fascinating as biological evolution (if not more so), with new ideas and behaviors stemming from the same kinds of natural selective pressures that lead to new species along with their novel morphologies and capacities.  And just as biological evolution, in a sense, takes off on its own, unbeknownst to the new organisms it produces and independent of the intentions they may have (with our species being the notable exception, given our awareness of evolutionary history and our ever-growing control over genetics), so too does cultural evolution take off on its own, with cultural changes made manifest through a number of causal influences that we’re largely unaware of, despite our having some conscious influence over this vastly transformative process.

Alongside these cultural changes, human civilizations have striven to find new means of manipulating nature and of better predicting the causal structure that makes up our reality.  One unfortunate consequence of this is that, as history has shown us, within any particular culture’s time and place, people have a decidedly biased overconfidence in the perceived level of truth or justification for the status quo and their present world view (both on an individual and a collective level).  Undoubtedly, the “group-think” or “herd mentality” that precipitates from our simply having social groups often reinforces this overconfidence, and this is so in spite of the fact that what actually influences a mass of people to believe certain things or to behave as they do is highly contingent, unstable, and amenable to irrational forms of persuasion, including emotive, sensationalist propaganda that preys on our cognitive biases.

While we as a society have an unprecedented amount of control over the world around us, this type of control is perhaps best described as a system of bureaucratic organization and automated information processing that affords less and less individual autonomy, liberty, and basic freedom as it further expands its reach.  How much control do we as individuals really have in terms of the information we have access to, and the implied picture of reality that comes along with the way it’s presented to us?  How much control do we have in terms of the number of life trajectories and occupations made available to us, or the educational and socioeconomic resources we have access to, given the particular family, culture, and geographical location we’re born and raised in?

As more layers of control have been added to our way of life and as certain criteria for organizational efficiency are continually implemented, our lives have become externally defined by increasing layers of abstraction, and our modes of existence are further separated cognitively and emotionally from an aesthetically and otherwise psychologically valuable sense of meaning and purpose.

While the Enlightenment slowly dragged our species, kicking and screaming, out of the theocratic, anti-intellectual epistemologies of the Medieval period of human history, the same forces that unearthed a long overdue appreciation for (and development of) rationality and technological progress unknowingly engendered a vulnerability to our misusing this newfound power.  There was an overcompensation of rationality when it was deployed to (justifiably) respond to the authoritarian dogmatism of Christianity and to the demonstrably unreliable nature of superstitious beliefs and of many of our intuitions.

This overcompensatory effect was in many ways accounted for, or anticipated, within the dialectical theory of historical development as delineated by the German philosopher Georg Hegel, and within some relevant reformulations of this dialectical process as theorized by the German philosopher Karl Marx (among others).  Throughout history, we’ve had an endless clash of ideas whereby the prevailing worldviews are shown to be inadequate in some way, failing to account for some notable aspect of our perceived reality, or shown to be insufficient for meeting our basic psychological or socioeconomic needs.  With respect to any problem we’ve encountered, we search for a solution (or wait for one to present itself to us), and then we become overconfident in the efficacy of that solution.  Eventually we end up overgeneralizing its applicability, and then the pendulum swings too far the other way, thereby creating new problems in need of a solution, with this process seemingly repeating itself ad infinitum.

Despite the various woes of modernity, as explicated by the modern existentialist movement, it does seem that history, from a long-term perspective at least, has been moving in the right direction, not only with respect to our heightened capacity for improving our standard of living, but also in terms of the evolution of our social contracts and our conceptions of basic and universal human rights.  And we should be able to plausibly reconcile this generally positive historical trend with the Hegelian view of historical development, and the conflicts that arise in human history, by noting that we often seem to take one step backward followed by two steps forward in terms of our moral and epistemological progress.

Regardless of the progress we’ve made, we seem to be at a crucial point in our history where the same freedom-limiting authoritarian reach that plagued humanity (especially during the Middle Ages) has undergone a kind of morphogenesis, having been reinstantiated albeit in a different form.  The elements of authoritarianism have become built into the very structure of mass-culture, with an anti-individualistic corporatocracy largely mediating the flow of information throughout this mass-culture, and also mediating its evolution over time as it becomes more globalized, interconnected, and cybernetically integrated into our day-to-day lives.

Coming back to the kinds of parallels in biology that I opened up with, we can see human autonomy and our culture (ideas and behaviors) as having evolved in ways that are strikingly similar to the biological jump that life made long ago, where single-celled organisms eventually joined forces with one another to become multi-cellular.  This biological jump is analogous to the jump we made during the early onset of civilization, where we employed an increasingly complex distribution of labor and occupational specialization, allowing us to survive many more environmental hurdles than ever before.  Once civilization began, the spread of culture became much more effective for transmitting ideas both laterally within a culture and longitudinally from generation to generation, with this process heavily enhanced by our having adopted various forms of written language, allowing us to store and transmit information in much more robust ways, similar to genetic information storage and transfer via DNA, RNA, and proteins.

Although the single-celled bacterium or amoeba (for example) may be thought of as having more “autonomy” than a cell that is forcefully interconnected within a multi-cellular organism, we can see how the range of capacities available to single cells was far more limited before making the symbiotic jump, just as humans living before the onset of civilization had more “freedom” (at least of a certain type) and yet the number of possible life trajectories and experiences was minuscule when compared to a human living in a post-cultural world.  But once multi-cellular organisms began to form a nervous system and eventually a brain, the entire collection of cells making up an organism became ultimately subservient to a centralized form of executive power — just as humans have become subservient to the executive authority of the state or government (along with various social pressures of conformity).

And just as the fates of each cell in a multi-cellular organism became predetermined and predictable by its particular set of available resources and the specific information it received from neighboring cells, similarly our own lives are becoming increasingly predetermined and predictable by the socioeconomic resources made available to us and the information we’re given which constitutes our mass-culture.  We are slowly morphing from individual brains into something akin to individual neurons within a global brain of mass-consciousness and mass-culture, having our critical thinking skills and creative aspirations exchanged for rehearsed responses and docile expectations that maintain the status quo and continually transfer our autonomy to an oligarchic power structure.

We might wonder if this shift has been inevitable, possibly being yet another example of a “fractal pattern” recapitulated in sociological form out of the very same freely floating rationales that biological evolution has been making use of for eons.  In any case, it’s critically important that we become aware of this change, so we can try to actively achieve and effectively maintain the liberties and level of individual autonomy that we so highly cherish.  We ought to be thinking about what kinds of ways we can remain cognizant of, and critical of, our culture and its products; how we can reconcile or transform technological rationality and progress with a future world composed of truly liberated individuals; and how to transform our corporatocratic capitalist society into one that is based on a mixed economy with a social safety net that even the wealthiest citizens would be content with living under, so as to maximize the actual creative freedom people have once their basic existential needs have been met.

Will unchecked capitalism, social-media, mass-media, and the false needs and epistemological bubbles they’re forming lead to our undoing and destruction?  Or will we find a way to rise above this technologically-induced setback, and take advantage of the opportunities it has afforded us, to make the world and our technology truly compatible with our human psychology?  Whatever the future holds for us, it is undoubtedly going to depend on how many of us begin to critically think about how we can seriously restructure our educational system and how we disseminate information, how we can re-prioritize and better reflect on what our personal goals ought to be, and also how we ought to identify ourselves as free and unique individuals.

Irrational Man: An Analysis (Part 3, Chapter 8: Nietzsche)

In the last post in this series on William Barrett’s Irrational Man, we began exploring Part 3: The Existentialists, beginning with chapter 7 on Kierkegaard.  In this post, we’ll be taking a look at Nietzsche’s philosophy in more detail.

Ch. 8 – Nietzsche

Nietzsche shared many of the same concerns as Kierkegaard, given the new challenges of modernity brought about by the Enlightenment; but each of these great thinkers approached this new chapter of our history from perspectives that were in many ways diametrically opposed.  Kierkegaard had a grand goal of sharing with others what he thought it meant to be a Christian, and he tried to revive Christianity within a society that he found to be increasingly secularized.  He also saw that what Christianity did exist was largely organized and depersonalized, and he felt the need to stress the importance of a much more personal form of the religion.  Nietzsche, on the other hand, an atheist, rejected Christianity and stressed the importance of re-establishing our values given the fact that modernity was now living in a godless world, albeit a culturally Christianized world, but one that was now without a Christ.

Despite their differences, both philosophers felt that discovering the true meaning of our lives was paramount to living an authentic life, and this new quest for meaning was largely related to the fact that our view of ourselves had changed significantly:

“By the middle of the nineteenth century, as we have seen, the problem of man had begun to dawn on certain minds in a new and more radical form: Man, it was seen, is a stranger to himself and must discover, or rediscover, who he is and what his meaning is.”

Nietzsche also understood and vividly illustrated the crucial differences between mankind and the rest of nature, most especially our self-awareness:

“It was Nietzsche who showed in its fullest sense how thoroughly problematical is the nature of man: he can never be understood as an animal species within the zoological order of nature, because he has broken free of nature and has thereby posed the question of his own meaning - and with it the meaning of nature as well - as his destiny.”

And for Nietzsche, the meaning one was to find for their life included first abandoning what he saw as an oppressive and disempowering Christian morality, which he believed severely inhibited our potential as human beings.  He was much more sympathetic to Greek culture and morality, and even though (perhaps ironically) Christianity itself was derived from a syncretism between Judaism and Greek Hellenism, its moral and psychologically relevant differences were too substantial to avoid Nietzsche’s criticism.  He thought that a return to a more archaic form of Greek culture and mentality was the solution we needed to restore or at least re-validate our instincts:

“Dionysus reborn, Nietzsche thought, might become a savior-god for the whole race, which seemed everywhere to show symptoms of fatigue and decline.”

Nietzsche learned about the god Dionysus while he was studying Greek tragedy, and he gravitated toward the Dionysian symbolism, for it contained a kind of reconciliation between high culture and our primal instincts.  Dionysus was after all the patron god of the Greek tragic festivals and so was associated with beautiful works of art; but he was also the god of wine and ritual madness, bringing people together through the grape harvest and the joy of intoxication.  As Barrett describes it:

“This god thus united miraculously in himself the height of culture with the depth of instinct, bringing together the warring opposites that divided Nietzsche himself.”

It was this window into the hidden portions of our psyche that Nietzsche tried to look through and which became an integral basis for much of his philosophy.  But the chaotic nature of our species combined with our immense power to manipulate the environment also concerned him greatly; and he thought that the way modernity was repressing many of our instincts needed to be circumvented, lest we continue to build up this tension in our unconscious only to have it released in an explosive eruption of violence:

“It is no mere matter of psychological curiosity but a question of life and death for man in our time to place himself again in contact with the archaic life of his unconscious.  Without such contact he may become the Titan who slays himself.  Man, this most dangerous of the animals, as Nietzsche called him, now holds in his hands the dangerous power of blowing himself and his planet to bits; and it is not yet even clear that this problematic and complex being is really sane.”

Indeed our species has been removed from our natural evolutionary habitat; and we’ve built our modern lives around socio-cultural and technological constructs that have, in many ways at least, inhibited the open channel between our unconscious and conscious mind.  One could even say that the prefrontal cortex region of our brain, important as it is for conscious planning and decision making, has been effectively hijacked by human culture (memetic selection) and is now constantly running overtime, over-suppressing the emotional centers of the brain (such as the limbic system).

While we’ve come to realize that emotional suppression is sometimes necessary in order to function well as a cooperative social species that depends on complex long-term social relationships, we also need to maintain a healthy dose of emotional expression as well.  If we can’t release this more primal form of “psychological energy” (for lack of a better term), then it shouldn’t be surprising to see it eventually manifest into some kind of destructive behavior.

We ought not forget that we’re just like other animals in terms of our brain being best adapted to a specific balance of behaviors and cognitive functions; and if this balance is disrupted through hyperactive or hypoactive use of one region or another, doesn’t it stand to reason that we may wind up with either an impairment in brain function or less healthy behavior?  If we’re not making a sufficient use of our emotional capacities, shouldn’t we expect them to diminish or atrophy in some way or other?  And if some neurological version of the competitive exclusion principle exists, then this would further reinforce the idea that our suppression of emotion by other capacities may cause those other capacities to become permanently dominant.  We can only imagine the kinds of consequences that ensue when this psychological impairment falls upon a species with our level of power.
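
For what it’s worth, ecology’s standard formalization of the competitive exclusion principle is the Lotka-Volterra competition model, and even a toy simulation of it shows the winner-take-all dynamic alluded to above.  To be clear, treating two brain capacities as competing “species” is purely my illustrative analogy (not something from Barrett), and every parameter value below is arbitrary:

```python
# Lotka-Volterra competition, integrated with a simple Euler scheme.
# The two "species" stand in (purely by analogy) for two competing
# capacities, e.g. suppressive executive control vs. emotional expression.

def simulate(n1=1.0, n2=1.0, r=0.1, k=100.0, a12=1.5, a21=0.6,
             steps=5000, dt=0.1):
    for _ in range(steps):
        dn1 = r * n1 * (1 - (n1 + a12 * n2) / k)  # capacity 1, strongly inhibited by 2
        dn2 = r * n2 * (1 - (n2 + a21 * n1) / k)  # capacity 2, weakly inhibited by 1
        n1, n2 = n1 + dn1 * dt, n2 + dn2 * dt
    return n1, n2

n1, n2 = simulate()
print(f"capacity 1: {n1:.2f}, capacity 2: {n2:.2f}")
# Capacity 1 collapses toward 0 while capacity 2 approaches its carrying
# capacity: exclusion occurs here because a12 > 1 while a21 < 1.
```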

1.  Ecce Homo

“‘In the end one experiences only oneself,’ Nietzsche observes in his Zarathustra, and elsewhere he remarks, in the same vein, that all the systems of the philosophers are just so many forms of personal confession, if we but had eyes to see it.”

Here Nietzsche is expressing one of his central ideas, pointing out that we can’t separate ourselves from our thoughts, and therefore the ideas put forward by any philosopher really betray some set of values they hold, whether unconsciously or explicitly; they also betray the philosopher’s individual perspective, an unavoidable perspective stemming from subjective experience, which colors the interpretation of those experiences.  Perspectivism, or the view that reality and our ideas about what’s true and what we value can be mapped onto many different possible conceptual schemes, was a big part of Nietzsche’s overall philosophy, and it was interwoven into his ideas on meaning, epistemology, and ontology.

As enlightening as Nietzsche was in giving us a much needed glimpse into some of the important psychological forces that give our lives the shape they have (for better or worse), his written works show little if any psychoanalytical insight applied to himself, in terms of the darker and less pleasant side of his own psyche.

“Nietzsche’s systematic shielding of himself from the other side is relevant to his explanation of the death of God: Man killed God, he says, because he could not bear to have anyone looking at his ugliest side.”

But even if he didn’t turn his psychoanalytical eye inward toward his own unconscious, he was still very effective in shining a light on the collective unconscious of humanity as a whole.  It’s not enough that we marvel over the better angels of our nature, even though it’s important to acknowledge and understand our capacities and our potential for bettering ourselves and the world we live in; we also have to acknowledge our flaws and shortcomings, and while we may try to overcome these weaknesses in one way or another, some of them are simply a part of our nature, and we should accept them as such even if we may not like them.

One of the prevailing theories stemming from Western Rationalism (mentioned earlier in Barrett’s book) was that human beings were inherently rational and perfectible, and that it was within the realm of possibility to realize that potential; but this kind of wishful thinking relied on a fundamental denial of an essential part of the kind of animal we really are.  And Nietzsche did a good job of putting this fact out in the limelight, and explaining how we couldn’t truly realize our potential until we came to terms with this fairly ugly side of ourselves.

Barrett distinguishes between the attitudes stemming from various forms of atheism, including Nietzsche’s:

“The urbane atheism of Bertrand Russell, for example, presupposes the existence of believers against whom he can score points in an argument and get off some of his best quips.  The atheism of Sartre is a more somber affair, and indeed borrows some of its color from Nietzsche: Sartre relentlessly works out the atheistic conclusion that in a universe without God man is absurd, unjustified, and without reason, as Being itself is.”

In fact Nietzsche was more concerned with the atheists that hadn’t yet accepted the full ramifications of a Godless world, rather than believers themselves.  He felt that those who believed in God still had a more coherent belief system given many of their moral values and the meaning they attached to their lives, since they were largely derived from their religious beliefs (even if the religious beliefs were unwarranted or harmful).  Those that were atheists, many of them at least, hadn’t yet taken the time to build a new foundation for their previously held values and the meaning in their lives; and if they couldn’t rebuild this foundation then they’d have to change those values and what they find to be meaningful to reflect that shift in foundation.

Overall, Nietzsche even changes the game a little in terms of how the conception of God was to be understood in philosophy:

“If God is taken as a metaphysical object whose existence has to be proved, then the position held by scientifically minded philosophers like Russell must inevitably be valid: the existence of such an object can never be empirically proved.  Therefore, God must be a superstition held by primitive and childish minds.  But both these alternative views are abstract, whereas the reality of God is concrete, a thoroughly autonomous presence that takes hold of men but of which, of course, some men are more conscious than others.  Nietzsche’s atheism reveals the true meaning of God-and does so, we might add, more effectively than a good many official forms of theism.”

Even though God may not actually exist as a conscious being, the idea of God, and the feelings associated with what one labels an experience of God, are as real as any other idea or experience, and so they should still be taken seriously even in a world where no gods exist.  As an atheist myself (formerly a born-again Protestant Christian), I’ve come to appreciate the real breadth of possible meaning attached to the term “God”.  Even though the God or gods described in various forms of theism do not track onto objective reality, as was pointed out by Russell, this doesn’t negate the subjective reality underlying theism.  People ascribe the label “God” to a set of certain feelings and forces that they feel are controlling their lives, their values, and their overall vision of the world.  Nietzsche had his own vision of the world as well, and so one could label this as his “god”, despite the confusion that this may cause when discussing God in metaphysical terms.

2. What Happens In “Zarathustra”; Nietzsche as Moralist

One of the greatest benefits of producing art is our ability to tap into multiple perspectives that we might otherwise never explore.  And by doing this, we stand a better chance of creating a window into our own unconscious:

“[Thus Spoke Zarathustra] was Nietzsche’s poetic work and because of this he could allow the unconscious to take over in it, to break through the restraints imposed elsewhere by the philosophic intellect. [In poetry] One loses all perception of what is imagery and simile - that is to say, the symbol itself supersedes thought, because it is richer in meaning.”

By avoiding the common philosophical trap of submitting oneself to a preconceived structure and template for processing information, poetry and other works of art can inspire new ideas and novel perspectives through the use of metaphor and allegory, emotion, and the seemingly infinite powers of our own imagination.  Not only is this true for the artist herself, but also for the spectator, who stands in a unique position to interpret what they see and to make sense of it the best they can.  And in the rarest of cases, one may be able to experience something in a work of art that fundamentally changes them in the process.  Either way, when someone is presented with an unusual and ambiguous stimulus that they are left to interpret, they can hope to gain at least some access to their unconscious and to inspire an expression of their creativity.

In his Thus Spoke Zarathustra, Nietzsche was focusing on how human beings could ever hope to regain a feeling of completeness; for modernity had fractured the individual and placed too much emphasis on collective specialization:

“For man, says Schiller, the problem is one of forming individuals.  Modern life has departmentalized, specialized, and thereby fragmented the being of man.  We now face the problem of putting the fragments together into a whole.  In the course of his exposition, Schiller even referred back, as did Nietzsche, to the example of the Greeks, who produced real individuals and not mere learned abstract men like those of the modern age.”

A part of this reassembly process would necessarily involve reincorporating what Nietzsche thought of as the various attributes of our human nature, which many of us are in denial of (to go back to the point raised earlier in this post).  For Nietzsche, this meant taking on some characteristics that may seem inherently immoral:

“Man must incorporate his devil or, as he put it, man must become better and more evil; the tree that would grow taller must send its roots down deeper.”

Nietzsche seems to be saying that if there are aspects of our psychology that are normal albeit seemingly dark or evil, then we ought to structure our moral values along the same lines rather than in opposition to these more primal parts of our psyche:

“The whole of traditional morality, [Nietzsche] believed, had no grasp of psychological reality and was therefore dangerously one-sided and false.”

And there is some truth to this, though I wouldn’t go as far as Nietzsche does, as I believe that there are many basic and cross-cultural moral prescriptions that do in fact take many of our psychological needs into account.  Most cultures have some place for various virtues such as compassion, honesty, generosity, forgiveness, commitment, integrity, and many others; and these virtues have been shown within the fields of moral psychology and sociology to help us flourish and lead psychologically fulfilling lives.  We evolved as a social species that operates within some degree of a dominance hierarchy, and sure enough our psychological traits are what we’d expect given our evolutionary history.

However, it’s also true that we ought to avoid slipping into some form of a naturalistic fallacy; for we don’t want to simply assume that all the behaviors that were more common prior to modernity, let alone prior to civilization, were beneficial to us and our psychology.  For example, raping and stealing were far more common per capita prior to humans establishing some kind of civil state or society.  And a number of other behaviors that we’ve discovered to be good or bad for our physical or psychological health have been the result of a long process of trial and error, cultural evolution, and the cultural transmission of newfound ideas from one generation to the next.  In any case, we don’t want to assume that just because something has been a tradition or even a common instinctual behavior for many thousands or even hundreds of thousands of years, that it is therefore a moral behavior.  Quite the contrary; instead we need to look closely at which behaviors lead to more fulfilling lives and overall maximal satisfaction, and then aim to maximize those behaviors over those that detract from this goal.

Nietzsche did raise a good point, however, and one that’s supported by modern moral psychology; namely, that what we think we want or need isn’t always in agreement with our actual wants and needs, once we dive below the surface.  Psychology has been an indispensable tool in discovering many of our unconscious desires, and the best moral theories are going to be those that take both our conscious and unconscious motivations and traits into account.  Only then can we have a moral theory that is truly going to be sufficiently motivating to follow in the long term, as well as most beneficial to us psychologically.  If one is basing their moral theory on wishful thinking rather than facts about their psychology, then they aren’t likely to develop a viable moral theory.  Nietzsche realized this and promoted it in his writings, and this was an important contribution, as it bridged human psychology with a number of important topics in philosophy:

“On this point Nietzsche has a perfectly sober and straightforward case against all those idealists, from Plato onward, who have set universal ideas over and above the individual’s psychological needs.  Morality itself is blind to the tangle of its own psychological motives, as Nietzsche showed in one of his most powerful books, The Genealogy of Morals, which traces the source of morality back to the drives of power and resentment.”

I’m sure that drives of power and resentment have played some role in certain moral prescriptions and moral systems, but I doubt that this is likely to be true generally speaking; and even in cases where a moral prescription or system arose out of one of these unconscious drives, such as resentment, this doesn’t negate its validity nor demonstrate whether or not it’s warranted or psychologically beneficial.  The source of morality isn’t as important as whether or not the moral system itself works, though the sources of morality can be both psychologically relevant and revealing.

Our ego certainly has a lot to do with our moral behavior and whether or not we choose to face our inner demons:

“Precisely what is hardest for us to take is the devil as the personification of the pettiest, paltriest, meanest part of our personality…the one that most cruelly deflates one’s egotism.”

If one is able to accept both the brighter and darker sides of themselves, they’ll also be more likely to willingly accept Nietzsche’s idea of the Eternal Return:

“The idea of the Eternal Return thus expresses, as Unamuno has pointed out, Nietzsche’s own aspirations toward eternal and immortal life…For Nietzsche the idea of the Eternal Return becomes the supreme test of courage: If Nietzsche the man must return to life again and again, with the same burden of ill health and suffering, would it not require the greatest affirmation and love of life to say Yes to this absolutely hopeless prospect?”

Although I wouldn’t expect most people to embrace this idea, because it goes against our teleological intuitions of time and causation leading to some final state or goal, it does provide a valuable means of establishing whether or not someone is really able to accept this life for what it is, with all of its absurdities, pains, and pleasures.  If someone willingly accepts the idea, then by extension they likely accept themselves, their lives, and the rest of human history (and even the history of life on this planet, no less).  This doesn’t mean that by accepting the idea of the Eternal Return, people have to put every action, event, or behavior on equal footing, for example, by condoning slavery or the death of millions as a result of countless wars, just because these events are to be repeated for eternity.  But one is still accepting that all of these events were necessary in some sense, that they couldn’t have been any other way, and this acceptance should be reflected in one’s attitude toward life as a kind of overarching contentment.

I think that the more common reaction to this kind of idea is slipping into some kind of nihilism, where a person no longer cares about anything, and no longer finds any meaning or purpose in their lives.  And this reaction is perfectly understandable given our teleological intuitions; most people don’t or wouldn’t want to invest time and effort into some goal if they knew that the goal would never be accomplished or knew that it would eventually be undone.  But, oddly enough, even if the Eternal Return isn’t an actual fact of our universe, we still run into the same basic situation within our own lifetimes whereby we know that there are a number of goals that will never be achieved because of our own inevitable death.

There’s also the fact that even if we did achieve a number of goals, once we die, we can no longer reap any benefits from them.  There may be other people that can benefit from our own past accomplishments, but eventually they will die as well.  Eventually all life in the universe will expire, and this is bound to happen long before the inevitable heat death of the universe transpires; and once it happens, there won’t be anybody experiencing anything at all let alone reaping the benefits of a goal from someone’s distant past.  Once this is taken into account, Nietzsche’s idea doesn’t sound all that bad, does it?  If we knew that there would always be time and some amount of conscious experience for eternity, then there will always be some form of immortality for consciousness; and if it’s eternally recurring, then we end up becoming immortal in some sense.

Perhaps the idea of the Eternal Return (or Recurrence), even though it predates Nietzsche by many hundreds if not a few thousand years, was motivated in his case by his own fear of death.  Even though he thought of the idea as horrifying and paralyzing (not least because of all the undesirable parts of our own personal existence), since he didn’t believe in a traditional afterlife, the fear of death may have motivated him to adopt such an idea, even though he also saw it as a physically plausible consequence of probability and the laws of physics.  Either way, beyond being far more plausible than the idea of an eternal paradise after death, it’s also a very different idea: not only because one’s memory or identity isn’t preserved, such that one never actually experiences immortality when the “temporal loop” defining one’s lifetime starts all over again (i.e. the “movie” of one’s life is just played over and over again, unbeknownst to the person in the movie), but also because it involves accepting the world the way it is for all eternity, rather than revolving around the way we wish it was.  The Eternal Return fully engages with the existentialist reality of human life, with all its absurdities, contingencies, and the rest.

3. Power and Nihilism

Nietzsche, like Kierkegaard before him, was considered to be an unsystematic thinker by many, though Barrett points out that this view is mistaken when it comes to Nietzsche: it’s a common view resulting from the largely aphoristic form his writings took.  Nietzsche himself even said that he was viewing science and philosophy through the eyes of art, but nevertheless his work eventually revealed a systematic structure:

“As thinking gradually took over the whole person, and everything else in his life being starved out, it was inevitable that this thought should tend to close itself off in a system.”

This systematization of his philosophy was revealed in the notes he was making for The Will to Power, a work he never finished but one that was more or less assembled and published posthumously by his sister Elisabeth.  The systematization was centered on the idea that a will to power was the underlying force that ultimately guided our behavior, striving to achieve the highest possible position in life as opposed to being fundamentally driven by some biological imperative to survive.  But it’s worth pointing out here that Nietzsche didn’t seem to be merely talking about a driving force behind human behavior, but rather he seemed to be referencing an even more fundamental driving force that was in some sense driving all of reality; and this more inclusive view would be analogous to Schopenhauer’s will to live.

It’s interesting to consider this perspective through the lens of biology, where if we try to reconcile it with a Darwinian account of evolution in which survival is key, we could surmise that a will to power is but a mechanism for survival.  If we grant Nietzsche’s position and the position of the various anti-Darwinian thinkers that influenced him, that is, that a will to survive (or survival more generally) is somehow secondary to a will to power, then we run into a problem: if survival is a secondary drive or goal, then we’d expect survival to be less important than power; but without survival you can’t have power, and thus it makes far more evolutionary sense that a will to survive is more basic than a will to power.

However, I am willing to grant that as soon as organisms began evolving brains along with the rest of their nervous system, the will to power (or something analogous to it) became increasingly important; and by the time humans came on the scene with their highly complex brains, the will to power may have become manifest in our neurological drive to better predict our environment over time.  I wrote about this idea in a previous post where I had explored Nietzsche’s Beyond Good and Evil, where I explained what I saw as a correlation between Nietzsche’s will to power and the idea of the brain as a predictive processor:

“…I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, but with the added caveat that this autonomy serves to fulfill a desire for maximal causal power by harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try and change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try and change ourselves and our view of the world (up to and including changing our desires if we find them to be misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and then we could conceivably regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that I think that our view of the world as well as our actions and desires can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could…equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires). Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if they subjugate the other’s will to power by their own.  And one can also increase the success of their predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing our desires and actions as well.”
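As an illustrative aside, the two complementary strategies described in that excerpt (changing the world versus changing ourselves) can be pictured as a toy prediction-error loop.  The sketch below is purely my own minimal illustration of the general predictive-processing idea, with made-up names and numbers; it isn’t a model taken from that post or from the literature:

```python
# Toy sketch: an agent can shrink its prediction error either by updating
# its model of the world (perception) or by acting on the world (action).

def update_model(prediction, observation, learning_rate=0.5):
    """Change ourselves: move the prediction toward what we observe."""
    return prediction + learning_rate * (observation - prediction)

def act_on_world(world_state, prediction, effort=0.5):
    """Change the world: move the world toward what we predict/desire."""
    return world_state + effort * (prediction - world_state)

world, prediction = 10.0, 0.0
for step in range(8):
    error = abs(prediction - world)
    print(f"step {step}: world={world:.2f} prediction={prediction:.2f} error={error:.2f}")
    # Both strategies together drive the prediction error toward zero.
    prediction = update_model(prediction, world)
    world = act_on_world(world, prediction)
```

Either update rule alone would eventually shrink the error; using both together simply mirrors the claim above that action and perception are complementary ways of increasing one’s predictive success.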

On the other hand, if we treat the brain’s predictive activity as a kind of mechanistic description of how the brain works rather than a type of drive per se, and if we treat our underlying drives as something that merely falls under this umbrella of description, then we might want to consider whether or not any of the drives that are typically posited in psychological theory (such as power, sex, or a will to live) are actually more basic than any other or typically dominant over the others.  Barrett suggests an alternative, saying that we ought to look at these elements more holistically:

“What if the human psyche cannot be carved up into compartments and one compartment wedged in under another as being more basic?  What if such dichotomizing really overlooks the organic unity of the human psyche, which is such that a single impulse can be just as much an impulse toward love on the one hand as it is toward power on the other?”

I agree with Barrett at least in the sense that these drives seem to be operating together much of the time, and when they aren’t in unison, they often seem to be operating on the same level at least.  Another way to put this could be to say that as a social species, we’ve evolved a number of different modular behavioral strategies that are typically best suited for particular social circumstances, though some circumstances may be such that multiple strategies will work and thus multiple strategies may be used simultaneously without working against each other.

But I also think that our conscious attention plays a role as well, where the primary drive(s) used to produce the subsequent behavior may be affected by thinking about certain things or in a certain way, such as how much you love someone, what you think you can gain from them, etc.  And this would mean that the various underlying drives for our behavior are individually selected or prioritized based on one’s current experiences and where one’s attention is directed at any particular moment.  It may still be the case that some drives are typically dominant over others, such as a will to power, even if certain circumstances can lead to another drive temporarily taking over, however short-lived that may be.

The will to power, even if it’s not primary, would still help to explain some of the enormous advancements we’ve made in our scientific and technological progress, while also explaining (at least to some degree) the apparent disparity between our standard of living and overall physio-psychological health on the one hand, and our immense power to manipulate the environment on the other:

“Technology in the twentieth century has taken such enormous strides beyond that of the nineteenth that it now bulks larger as an instrument of naked power than as an instrument for human well-being.”

We could fully grasp Barrett’s point here by thinking about the opening scene in Stanley Kubrick’s 2001: A Space Odyssey, where “The Dawn of Man” was essentially made manifest with the discovery of tools, specifically weapons.  Once this seemingly insignificant discovery was made, it completely changed the game for our species.  While it gave us a new means of defending ourselves and an ability to hunt other animals, thus providing us with the requisite surplus of protein and calories needed for brain development and enhanced intelligence, it also catalyzed an arms race.  And with the advent of human civilization, our historical trajectory around the globe became largely dominated by our differential means of destruction; whoever had the biggest stick, the largest army, the best projectiles, and ultimately the most concentrated form of energy, would likely win the battle, forever changing the fate of the parties involved.

Indeed, war has been such a core part of human history that it’s no wonder Nietzsche stumbled upon such an idea; and even if he hadn’t, someone else likely would have, even if not with the same level of creativity and intellectual rigor.  If our history has been so colored with war, which is a physical instantiation of a will to power in order to dominate another group, then we should expect many to see it as a fundamental part of who we are.  Personally, I don’t think we’re fundamentally war-driven creatures; rather, war results from our ability to engage in it combined with a perceived lack of, and desire for, resources, freedom, and stability in our lives.  But many of these goods, freedom in particular, imply a particular level of power over oneself and one’s environment, so even a fight for freedom is in one way or another a kind of will to power as well.

And what are we to make of our quest for power in terms of the big picture?  Theoretically, there is an upper limit to the amount of power any individual or group can possibly attain, and if one gets to a point where they are seeking power for power’s sake, and using their newly acquired power to harness even more power, then what will happen when we eventually hit that ceiling?  If our values become centered around power, and we lose any anchor we once had to provide meaning for our lives, then it seems we would be on a direct path toward nihilism:

“For Nietzsche, the problem of nihilism arose out of the discovery that “God is dead.” “God” here means the historical God of the Christian faith.  But in a wider philosophical sense it means also the whole realm of supersensible reality – Platonic Ideas, the Absolute, or what not – that philosophy has traditionally posited beyond the sensible realm, and in which it has located man’s highest values.  Now that this other, higher, eternal realm is gone, Nietzsche declared, man’s highest values lose their value…The only value Nietzsche can set up to take the place of these highest values that have lost their value for contemporary man is: Power.”

As Barrett explains, Nietzsche seemed to think that power was the only thing that could replace the eternal realm that we valued so much; but of course this does nothing to alleviate the problem of nihilism in the long term since the infinite void still awaits those at the end of the finite road to maximal power.

“If this moment in Western history is but the fateful outcome of the fundamental ways of thought that lie at the very basis of our civilization – and particularly of that way of thought that sunders man from nature, sees nature as a realm of objects to be mastered and conquered, and can therefore end only with the exaltation of the will to power – then we have to find out how this one-sided and ultimately nihilistic emphasis upon the power over things may be corrected.”

I think the answer to this problem lies in a combination of strategies, all with the overarching goal of maximizing life fulfillment.  We need to reprogram our general attitude toward power such that it is treated as truly instrumental rather than intrinsically valuable; our basic goal then becomes accumulating power so that we can maximize our personal satisfaction and life fulfillment, and nothing more.  Among other things, the power we acquire should be centered on power over ourselves and our own psychology: finding ways of living and thinking that foster, among other things, a more respectful relationship with the nature that created us in the first place.

And now that we’re knee-deep in the Information Age, we will be able to harness the potential of genetic engineering, artificial intelligence, and artificial reality; and within these avenues of research we will be able to change our own biology such that we can quickly adapt our psychology to the ever-changing cultural environment of the modern world, and we’ll be able to free up our time to create any world we want.  We just need to keep the ultimate goal in mind, fulfillment and contentment, and direct our increasing power toward this goal in particular instead of merely chipping away at it in the background as some secondary priority.  If we don’t prioritize it, then we may simply perpetuate and amplify the meaningless sources of distraction in our lives, eventually getting lost in the chaos, and forgetting about our ultimate potential and the imperatives needed to realize it.

I’ll be posting a link here to part 9 of this post-series, exploring Heidegger’s philosophy, once it has been completed.

Irrational Man: An Analysis (Part 2, Chapter 4: “The Sources of Existentialism in the Western Tradition”)

In the previous post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 3: The Testimony of Modern Art, where Barrett illustrates how existentialist thought is best exemplified in modern art.  The feelings of alienation, discontent, and meaninglessness pervade a number of modern expressions, and so as is so often the case throughout history, we can use artistic expression as a window to peer inside our evolving psyche and witness the prevailing views of the world at any point in time.

In this post, I’m going to explore Part II, Chapter 4: The Sources of Existentialism in the Western Tradition.  This chapter has a lot of content and is quite dense, and so naturally this fact is reflected in the length of this post.

Part II: “The Sources of Existentialism in the Western Tradition”

Ch. 4 – Hebraism and Hellenism

Barrett begins this chapter by pointing out two forces governing the historical trajectory of Western civilization and Western thought: Hebraism and Hellenism.  He mentions an excerpt from Matthew Arnold’s book Culture and Anarchy, a book concerned with the contemporary situation in nineteenth-century England, where Arnold writes:

“We may regard this energy driving at practice, this paramount sense of the obligation of duty, self-control, and work, this earnestness in going manfully with the best light we have, as one force.  And we may regard the intelligence driving at those ideas which are, after all, the basis of right practice, the ardent sense for all the new and changing combinations of them which man’s development brings with it, the indomitable impulse to know and adjust them perfectly, as another force.  And these two forces we may regard as in some sense rivals–rivals not by the necessity of their own nature, but as exhibited in man and his history–and rivals dividing the empire of the world between them.  And to give these forces names from the two races of men who have supplied the most splendid manifestations of them, we may call them respectively the forces of Hebraism and Hellenism…and it ought to be, though it never is, evenly and happily balanced between them.”

And while we may have felt a stronger attraction to one force over the other at different points in our history, both forces have played an important role in how we’ve structured our individual lives and society at large.  What distinguishes these two forces ultimately comes down to the difference between doing and knowing; between the Hebrew’s concern for practice and right conduct, and the Greek’s concern for knowledge and right thinking.  I can’t help but notice this Hebraistic influence in one of the earliest expressions of existentialist thought, when Soren Kierkegaard said (in one of his earlier journals): “What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act”.  Here we see that a set of moral virtues forms the fundamental substance and meaning of life within Hebraism (and by extension, Kierkegaard’s philosophy), in contrast with the Hellenistic subordination of the moral virtues to those of the intellectual variety.

I for one am tempted to mention Aristotle here: though a Greek philosopher known for his work on science, logic, and epistemology, he nevertheless formulated and extolled moral virtues, forming the foundation for most if not all modern systems of virtue ethics.  And sure enough, Arnold does mention Aristotle briefly, but he says that for Aristotle the moral virtues are “but the porch and access to the intellectual (virtues), and with these last is blessedness.”  So it’s still fair to say that the intellectual virtues were given priority over the moral virtues within Greek thought, even if moral virtues were an important element, serving as a moral means to a combined moral-intellectual end.  We’re still left, then, with a distinction of what is prioritized, or what the ultimate teleology is, for Hebraism and Hellenism: moral man versus intellectual man.

One perception of Arnold’s that colors his overall thesis is a form of uneasiness that he sees as lurking within the Hebrew conception of man; an uneasiness stemming from a conception of man that has been infused with the idea of sin, which is simply not found in Greek philosophy.  Furthermore, this idea of sin that pervades the Hebraistic view of man is not limited to one’s moral or immoral actions; rather it penetrates into the core being of man.  As Barrett puts it:

“But the sinfulness that man experiences in the Bible…cannot be confined to a supposed compartment of the individual’s being that has to do with his moral acts.  This sinfulness pervades the whole being of man: it is indeed man’s being, insofar as in his feebleness and finiteness as a creature he stands naked in the presence of God.”

So we have a predominantly moral conception of man within Hebraism, but one that is amalgamated with an essential finitude, an acknowledgement of imperfection, and the expectation of our being morally flawed human beings.  Now when we compare this to the philosophers of Ancient Greece, who had a more idealistic conception of man, where humans were believed to have the capacity to access and understand the universe in its entirety, then we can see the groundwork that was laid for somewhat rivalrous but nevertheless important motivations and contributions to the cultural evolution of Western civilization: science and philosophy from the Greeks, and a conception of “the Law” entrenched in religion and religious practice from the Hebrews.

1. The Hebraic Man of Faith

Barrett begins here by explaining that the Law, though important for having bound the Jewish community together for centuries despite their many tribulations as a people, is not itself central to Hebraism; rather, what lies at Hebraism’s center is the basis of the Law.  To see what this basis is, we are directed to reread the Book of Job in the Hebrew Bible:

“…reread it in a way that takes us beyond Arnold and into our own time, reread it with an historical sense of the primitive or primary mode of existence of the people who gave expression to this work.  For earlier man, the outcome of the Book of Job was not such a foregone conclusion as it is for us later readers, for whom centuries of familiarity and forgetfulness have dulled the violence of the confrontation between man and God that is central to the narrative.”

Rather than simply taking the commandments of one’s religion for granted and following them without pause, Job’s face-to-face confrontation with his Creator and his demand for justification was in some sense the first time the door had been opened to theological critique and reflection.  The Greeks did something similar where eventually they began to apply reason and rationality to examine religion, stepping outside the confines of simply blindly following religious traditions and rituals.  But unlike the Greek, the Hebrew does not proceed with this demand of justification through the use of reason but rather by a direct encounter of the person as a whole (Job, and his violence, passion, and all the rest) with an unknowable and awe-inspiring God.  Job doesn’t solve his problem with any rational resolution, but rather by changing his entire character.  His relation to God involves a mutual confrontation of two beings in their entirety; not just a rational facet of each being looking for a reasonable explanation from one another, but each complete being facing one another, prepared to defend their entire character and identity.

Barrett mentions the Jewish philosopher Martin Buber here to help clarify things a little, by noting that this relation between Job and God is a relation between an I and a Thou (or a You).  Since Barrett doesn’t explain Buber’s work in much detail, I’ll briefly digress here to explain a few key points.  For those unfamiliar with Buber’s work, the I-Thou relation is a mode of living that is contrasted with another mode centered on the connection between an I and an It.  Both modes of living, the I-Thou and the I-It, are, according to Buber, the two ways that human beings can address existence; the two modes of living required for a complete and fulfilled human being.  The I-It mode encompasses the world of experience and sensation; treating entities as discrete objects to know about or to serve some use.  The I-Thou mode on the other hand encompasses the world of relations itself, where the entities involved are not separated by some discrete boundary, and where a living relationship is acknowledged to exist between the two; this mode of existence requires one to be an active participant rather than merely an objective observer.

It is Buber’s contention that modern life has entirely ignored the I-Thou relation, which has led to a feeling of unfulfillment and alienation from the world around us.   The first mode of existence, that of the I-It, involves our acquiring data from the world, analyzing and categorizing it, and then theorizing about it; and this leads to a subject-object separation, or an objectification of what is being experienced.  According to Buber, modern society almost exclusively uses this mode to engage with the world.  In contrast, with the I-Thou relation the I encounters the object or entity such that both are transformed by the relation; and there is a type of holism at play here where the You or Thou is not simply encountered as a sum of its parts but in its entirety.  To put it another way, it’s as if the You encountered were the entire universe, or that somehow the universe existed through this You.

Since this concept is a little nebulous, I think the best way to summarize Buber’s main philosophical point here is to say that we human beings find meaning in our lives through our relationships, and so we need to find ways of engaging the world such as to maximize our relationships with it; and this is not limited to forming relationships with fellow human beings, but with other animals, inanimate objects, etc., even if these relationships differ from one another in any number of ways.

I actually see a connection between Buber’s “I and Thou” conception and Nietzsche’s idea of eternal recurrence: the idea that given an infinite amount of time and a finite number of ways that matter and energy can be arranged, anything that has ever happened or that ever will happen, will recur an infinite number of times.  In Nietzsche’s The Gay Science, he mentions how the concept of eternal recurrence was, to him, horrifying and paralyzing.  But he also concluded that the desire for an eternal return or recurrence would show the ultimate affirmation of one’s life:

“What, if some day or night a demon were to steal after you into your loneliest loneliness and say to you: ‘This life as you now live it and have lived it, you will have to live once more and innumerable times more’ … Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus?  Or have you once experienced a tremendous moment when you would have answered him: ‘You are a god and never have I heard anything more divine.’”

The reason I find this relevant to Buber’s conception is two-fold: first of all, the fact that the universe is causally connected makes the universe inseparable to some degree, where each object or entity could be seen to, in some sense, represent the rest of the whole; and secondly, if one is to wish for the eternal recurrence, then they have an attitude toward the world that is non-resistant, and that can learn to accept the things that are out of one’s control.  The universe effectively takes on the character of fate itself, and offers an opportunity to a being such as ourselves, to have a kind of “faith in fate”; to have a relation of trust with the universe as it is, as it once was, and as it will be in the future.

Now the kind of faith I’m speaking of here isn’t a brand that contradicts reason or evidence, but rather is simply a form of optimism and acceptance that colors one’s expectations and overall experience.  And since we are a part of this universe too, our attitude towards it should in many ways reflect our overall relation to it; which brings us back to Buber, where any encounter I might have with the universe (or any “part” of it) is an encounter with a You, that is to say, it is an I-Thou relation.

This “faith in fate” concept I just alluded to is a good segue back to the relation between Job and his God within Hebraism, as it was a relation of never-ending faith.  But importantly, as Barrett points out, this faith of Job’s takes on many shapes, including anger, dismay, revolt, and confusion.  For Job says, “Though he slay me, yet will I trust in him…but I will maintain my own ways before him.”  So Job’s faith is his maintaining a form of trust in his Creator, even though he says that he will also retain his own identity, his dispositions, and his entire being in doing so.  And this trust ultimately forms the relation between the two.  Barrett describes the kind of faith at play here as more fundamental and primary than that which many modern-day religious proponents would lay claim to.

“Faith is trust before it is belief – belief in the articles, creeds, and tenets of a Church with which later religious history obscures this primary meaning of the word.  As trust, in the sense of the opening up of one being toward another, faith does not involve any philosophical problem about its position relative to faith and reason.  That problem comes up only later when faith has become, so to speak, propositional, when it has expressed itself in statements, creeds, systems.  Faith as a concrete mode of being of the human person precedes faith as the intellectual assent to a proposition, just as truth as a concrete mode of human being precedes the truth of any proposition.”

Although I see faith as belief as fundamentally flawed and dangerous, I can certainly respect the idea of faith as trust, and consequently I can respect the idea of faith as a concrete mode of being; where this mode of being is effectively an attitude of openness taken towards another.  But, whereas I agree with Barrett’s claim that truth as a concrete mode of human being precedes the truth of any proposition, in the sense that a basis and desire for truth are needed prior to evaluating the truth value of any proposition, I don’t believe one can ever justifiably make the succession from faith as trust to faith as belief for the simple reason that if one has good reason to trust another, then they don’t need faith to mediate any beliefs stemming from that trust.  And of course, what matters most here is describing and evaluating what the “Hebrew man of faith” consists of, rather than criticizing the concept of faith itself.

Another interesting historical development stemming from Hebraism pertains to the concept of faith as it evolved within Protestantism.  As Barrett tells us:

“Protestantism later sought to revive this face-to-face confrontation of man with his God, but could produce only a pallid replica of the simplicity, vigor, and wholeness of this original Biblical faith…Protestant man would never have dared confront God and demand an accounting of His ways.  That era in history had long since passed by the time we come to the Reformation.”

As an aside, it’s worth mentioning here that Christianity actually developed as a syncretism between Hebraism and Hellenism; the two very cultures under analysis in this chapter.  By combining Jewish elements (e.g. monotheism, the substitutionary atonement of sins through blood-magic, apocalyptic-messianic resurrection, interpretation of Jewish scriptures, etc.) with Hellenistic religious elements (e.g. dying-and-rising savior gods, virgin birth of a deity, fictive kinship, etc.), the cultural diffusion that occurred resulted in a religion that was basically a cosmopolitan personal salvation cult.

But eventually Christianity became a state religion (after the age of Constantine), resulting in a theocracy that heavily enforced a particular conception of God onto the people.  Once this occurred, the religion was now fully contained within a culture that made it very difficult to question or confront anyone about these conceptions, in order to seek justification for them.  And it may be that the intimidation propagated by the prevailing religious authorities became conflated with an attribute of God; where a fear of questioning any conception of God became integrated in a theological belief about God.

Perhaps this was because the primary conceptions of God, once Christianity entered the Medieval Period, were more externally imposed on everybody rather than discovered in a more personal and introspective way (even if any such discovery is unavoidably initiated and reinforced by the external culture); this externalized the attributes of God such that they became more heavily influenced by the seemingly unquestionable claims of those in power.  Either way, the historical contingencies surrounding the advent of Christianity involved sectarian battles, with the winning sect using their dogmatic authority to suppress the views (and the Christian Gospels/scriptures) of the other sects.  And this may have inhibited people from ever questioning their God or demanding justification for what they interpreted to be God’s actions.

One final point in this section that I’d like to highlight is in regard to the concept of knowledge as it relates to Hebraism.  Barrett distinguishes this knowledge from that of the Greeks:

“We have to insist on a noetic content in Hebraism: Biblical man too had his knowledge, though it is not the intellectual knowledge of the Greek.  It is not the kind of knowledge that man can have through reason alone, or perhaps not through reason at all; he has it rather through body and blood, bones and bowels, through trust and anger and confusion and love and fear; through his passionate adhesion in faith to the Being whom he can never intellectually know.  This kind of knowledge a man has only through living, not reasoning, and perhaps in the end he cannot even say what it is he knows; yet it is knowledge all the same, and Hebraism at its source had this knowledge.”

I may not entirely agree with Barrett here, but any disagreement is only likely to be a quibble over semantics, relating to how he and I define noetic and how we each define knowledge.  The word noetic actually derives from the Greek adjective noētikos which means “intellectual”, and thus we are presumably considering the intellectual knowledge found in Hebraism.  Though this knowledge may not be derived from reason alone, I don’t think (as Barrett implies above) that it’s even possible to have any noetic content without at least some minimal use of reason.  It may be that the kind of intellectual content he’s alluding to is that which results from a kind of synthesis between reason and emotion or intuition, but it would seem that reason would still have to be involved, even if it isn’t as primary or dominant as in the intellectual knowledge of the Greek.

With regard to how Barrett and I are each defining knowledge, I must say that, like most other philosophers going all the way back to Plato, one must distinguish between knowledge and belief generally, because knowledge is merely a subset of all of one’s beliefs (otherwise any belief whatsoever would count as knowledge).  To distinguish knowledge as a subset of one’s beliefs, my definition of knowledge can be roughly stated as:

“Recognized patterns of causality that are stored into memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by having made successful predictions and/or successfully accomplishing goals through the use of said recalled patterns.”

My conception of knowledge also takes what one might call unconscious knowledge into account (which may operate more automatically and less explicitly than conscious knowledge); as long as it is acquired information that allows you to accomplish goals and make successful predictions about the causal structure of your experience (whether internal feelings or other mental states, or perceptions of the external world), it counts as knowledge nevertheless.  Now there may be different degrees of reliability and usefulness of different kinds of knowledge (such as intuitive knowledge versus rational or scientific knowledge), but those distinctions don’t need to be parsed out here.
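Purely as an illustration, that definition lends itself to a toy formalization.  The sketch below is entirely my own, with hypothetical names (nothing here comes from Barrett): a stored pattern counts as knowledge to the degree that its recalled predictions keep matching reality.

```python
# Toy formalization of the knowledge definition quoted above:
# a recalled pattern is "knowledge" insofar as its predictions
# positively and consistently correlate with what actually happens.

class CausalPattern:
    """A stored pattern: given a cue, predict an outcome."""
    def __init__(self, cue, predicted_outcome):
        self.cue = cue
        self.predicted_outcome = predicted_outcome
        self.trials = 0
        self.successes = 0

    def predict(self, cue):
        """Recall the stored pattern for a matching cue."""
        return self.predicted_outcome if cue == self.cue else None

    def validate(self, cue, actual_outcome):
        """Track how the pattern's predictions correlate with reality."""
        if cue == self.cue:
            self.trials += 1
            self.successes += int(actual_outcome == self.predicted_outcome)

    def counts_as_knowledge(self, threshold=0.8, min_trials=5):
        """Has the correlation been validated by successful predictions?"""
        return self.trials >= min_trials and self.successes / self.trials >= threshold

# Example: the pattern "dark clouds -> rain", validated over repeated use.
pattern = CausalPattern("dark clouds", "rain")
for outcome in ["rain", "rain", "rain", "rain", "rain", "no rain"]:
    pattern.validate("dark clouds", outcome)
print(pattern.counts_as_knowledge())  # True: 5 of 6 predictions succeeded
```

The thresholds are arbitrary placeholders, of course; the point is only that, on this definition, what elevates a stored pattern to knowledge is its validated track record, not merely its being believed.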

What Barrett seems to be describing here as a Hebraistic form of knowledge is something that is deeply embodied in the human being; in the ways that one lives their life that don’t involve, or aren’t dominated by, reason or conscious abstractions.  Instead, there seems to be a more organic process involved of viewing the world and interacting with it in a manner relying more heavily on intuition and emotionally-driven forms of expression.  But, in order to give rise to beliefs that cohere with one another to some degree, to form some kind of overarching narrative, reason (I would argue) is still employed.  There’s still some logical structure to the web of beliefs found therein, even if reason and logic could be used more rigorously and explicitly to turn many of those beliefs on their head.

And when it comes to the Hebraistic conception of God, including the passionate adhesion in faith to a Being whom one can never intellectually know (as Barrett puts it), I think this can be better explained by the fact that we do have reason and rationality, unlike most other animals, as well as cognitive biases such as hyperactive agency detection.  It seems to me that the Hebraistic God concept is more or less derived from an agglomeration of the unknown sources of one’s emotional experiences (especially those involving an experience of the transcendent) and the unpredictable attributes of life itself, then ascribing agency to that ensemble of properties, and (to use Buber’s terms) establishing a relationship with that perceived agency; and in this case, the agent is simply referred to as God (Yahweh).

But in order to infer the existence of such an ensemble, it would seem to require a process of abstracting from particular emotional and unpredictable elements and instances of one’s experience to conclude some universal source for all of them.  Perhaps if this process is entirely unconscious we can say that reason wasn’t at all involved in it, but I suspect that the ascription of agency to something that is on par with the universe itself and its large conjunction of properties which vary over space and time, including its unknowable character, is more likely mediated by a combination of cognitive biases, intuition, emotion, and some degree of rational conscious inference.  But Barrett’s point still stands that the noetic content in Hebraism isn’t dominated by reason as in the case of the Greeks.

2. Greek Reason

Even though existential philosophy is largely a rebellion against the Platonic ideas that have permeated Western thought, Barrett reminds us that there is still some existential aspect to Plato’s works.  Perhaps this isn’t surprising once one realizes that Plato actually began his adult life aspiring to be a dramatic poet.  He eventually abandoned this dream, undergoing a sort of conversion, and decided to dedicate the rest of his life to the search for wisdom, as per the inspiration of Socrates.  And even though he engaged in a lifelong war against the poets, he still maintained a piece of his poetic character in his writings, up to the end, when he wrote his great creation myth, the Timaeus.

By far, Plato’s biggest contribution to Western thought was his differentiating rational consciousness from the rest of our human psyche.  Prior to this move, rationality was in some sense subsumed under the same umbrella as emotion and intuition.  And this may be one of the biggest changes to how we think, and one that is so often taken for granted.  Barrett describes the position we’re in quite well:

“We are so used today to taking our rational consciousness for granted, in the ways of our daily life we are so immersed in its operations, that it is hard at first for us to imagine how momentous was this historical happening among the Greeks.  Steeped as our age is in the ideas of evolution, we have not yet become accustomed to the idea that consciousness itself is something that has evolved through long centuries and that even today, with us, is still evolving.  Only in this century, through modern psychology, have we learned how precarious a hold consciousness may exert upon life, and we are more acutely aware therefore what a precious deal of history, and of effort, was required for its elaboration, and what creative leaps were necessary at certain times to extend it beyond its habitual territory.”

Barrett’s mention of evolution is important here, because I think we ought to distinguish between the two forms of evolution that have affected how we think and experience the world.  On the one hand, we have our biological evolution to consider, where our brains have undergone dramatic changes in terms of the level of consciousness we actually experience and the degree of causal complexity that the brain can model; and on the other hand, we have our cultural evolution to consider, where our brains have undergone a number of changes in terms of the kinds of “programs” we run on them, how those programs are run and prioritized, and how we process and store information in various forms of external hardware.

In regard to biological evolution, long before our ancestors evolved into humans, the brain had nothing more than a protoself representation of our body and self (to use the neuroscientist Antonio Damasio’s terminology); it had nothing more than an unconscious state that served as a sort of basic map in the brain tracking the overall state of the body as a single entity in order to accomplish homeostasis.  Then, beyond this protoself there evolved a more sophisticated level of core consciousness where the organism became conscious of the feelings and emotions associated with the body’s internal states, and also became conscious that her feelings and thoughts were her own, further enhancing the sense of self, although still limited to the here-and-now or the present moment.  Finally, beyond this layer of self there evolved an extended consciousness: a form of consciousness that required far more memory, but which brought the organism into an awareness of the past and future, forming an autobiographical self with a perceptual simulator (imagination) that transcended space and time in some sense.
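Since this layered picture is essentially an architecture, here is a crude toy sketch of it in code, just to make the nesting of the layers vivid.  It’s my own illustrative simplification borrowing Damasio’s labels; it is in no way Damasio’s actual model, and the numbers are arbitrary:

```python
# Toy sketch of the three nested layers described above (illustrative only).

class ProtoSelf:
    """Unconscious map of body state, aiming at homeostasis."""
    def __init__(self, setpoint=37.0):
        self.setpoint = setpoint

    def regulate(self, body_temp):
        # Corrective signal only; nothing at this layer is 'felt'.
        return self.setpoint - body_temp

class CoreConsciousness(ProtoSelf):
    """Adds a present-moment feeling about the body's state."""
    def feel(self, body_temp):
        error = self.regulate(body_temp)
        return "discomfort" if abs(error) > 0.5 else "comfort"

class ExtendedConsciousness(CoreConsciousness):
    """Adds autobiographical memory and simple anticipation."""
    def __init__(self, setpoint=37.0):
        super().__init__(setpoint)
        self.memory = []  # past (temperature, feeling) episodes

    def experience(self, body_temp):
        feeling = self.feel(body_temp)
        self.memory.append((body_temp, feeling))
        return feeling

    def anticipate(self, body_temp):
        """Predict the feeling of a possible future state from past episodes."""
        for temp, feeling in self.memory:
            if abs(temp - body_temp) < 0.5:
                return feeling
        return "unknown"

# Example: only the extended layer can reach beyond the present moment.
agent = ExtendedConsciousness()
agent.experience(36.9)         # felt as "comfort", stored as an episode
print(agent.anticipate(37.1))  # recalls a similar past state: "comfort"
```

The inheritance chain is the point of the sketch: each layer presupposes the one beneath it, which mirrors the evolutionary ordering described above.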

Once humans developed language, we began to undergo our second form of evolution, namely cultural evolution.  After cultural evolution took off, human civilization led to a plethora of new words, concepts, skills, interests, and ways of looking at and structuring the world.  And this evolutionary step was vastly accelerated by written language, the scientific method, and eventually the invention of computers.  But in the time leading up to the scientific revolution, the industrial revolution, and finally the advent of computers and the information age, it was arguably Plato’s conceptualization of rational consciousness that paved the way forward to eventually reach those technological and epistemological feats.

It was Plato’s analysis of the human psyche and his effectively distilling the process of reason and rationality from the rest of our thought processes that allowed us to manipulate it, to enhance it, and to explore the true possibilities of this markedly human cognitive capacity.  Aristotle and many others since have merely built upon Plato’s monumental work, developing formal logic, computation, and other means of abstract analysis and information processing.  With the explicit use of reason, we’ve been able to overcome many of our cognitive biases (serving as a kind of “software patch” to our cognitive bugs) in order to discover many true facts about the world, allowing us to unlock a wealth of knowledge that had previously been inaccessible to us.  And it’s important to recognize that Plato’s impact on the history of philosophy has highlighted, more than anything else, our overall psychic evolution as a species.

Despite all the benefits that culture has brought us, there has been one inherent problem with the cultural evolution we’ve undergone: a large desynchronization between our cultural and biological evolution.  That is to say, our culture has evolved far, far faster than our biology ever could, and thus our biology hasn’t kept up with, or adapted us to, the cultural environment we’ve been creating.  And I believe this is a large source of our existential woes; for we have become a “fish out of water” (so to speak) where modern civilization and the way we’ve structured our day-to-day lives is incredibly artificial, filled with a plethora of supernormal stimuli and other factors that aren’t as compatible with our natural psychology.  It makes perfect sense then that many people living in the modern world have had feelings of alienation, loneliness, meaninglessness, anxiety, and disorientation.  And in my opinion, there’s no good or pragmatic way to fix this aside from engineering our genes such that our biology is able to catch up to our ever-changing cultural environment.

It’s also important to recognize that Plato’s idea of the universal, explicated in his theory of Forms, was one of the main impetuses for contemporary existential philosophy; not for its endorsement but rather because the idea that a universal such as “humanity” was somehow more real than any actual individual person fueled a revolt against such notions.  And it wasn’t until Kierkegaard and Nietzsche appeared on the scene in the nineteenth century where we saw an explicit attempt to reverse this Platonic idea; where the individual was finally given precedence and priority over the universal, and where a philosophy of existence was given priority over one of essence.  But one thing the existentialists maintained, as derived from Plato, was his conception that philosophizing was a personal means of salvation, transformation, and a concrete way of living (though this was, according to Plato, best exemplified by his teacher Socrates).

As for the modern existentialists, it was Kierkegaard who eventually brought the figure of Socrates back to life, long after Plato’s later portrayal of Socrates, which had effectively dehumanized him:

“The figure of Socrates as a living human presence dominates all the earlier dialogues because, for the young Plato, Socrates the man was the very incarnation of philosophy as a concrete way of life, a personal calling and search.  It is in this sense too that Kierkegaard, more than two thousand years later, was to revive the figure of Socrates – the thinker who lived his thought and was not merely a professor in an academy – as his precursor in existential thinking…In the earlier, so-called “Socratic”, dialogues the personality of Socrates is rendered in vivid and dramatic strokes; gradually, however, he becomes merely a name, a mouthpiece for Plato’s increasingly systematic views…”

By the time we get to Plato’s student, Aristotle, philosophy had become a purely theoretical undertaking, effectively replacing the subjective qualities of a self-examined life with a far less visceral objective analysis.  Indeed, by this point in time, as Barrett puts it: “…the ghost of the existential Socrates had at last been put to rest.”

As in all cases throughout history, we must take the good with the bad.  And we very likely wouldn’t have the sciences today had it not been for Plato detaching reason from the poetic, religious, and mythological elements of culture and thought, thus giving reason its own identity for the first time in history.  In any case, we need to try to understand Greek rationalism as best we can, such that we can understand the motivations of those who later objected to it, especially within modern existentialism.

When we look back to Aristotle’s Nicomachean Ethics, for example, we find a fairly balanced perspective of human nature and the many motivations that drive our behavior.  But, in evaluating all the possible goods that humans can aim for, in order to derive an answer to the ethical question of what one ought to do above all else, Aristotle claimed that the highest life one could attain was the life of pure reason, the life of the philosopher and the theoretical scientist.  Aristotle thought that reason was the highest part of our personality, where one’s capacity for reason was treated as the center of one’s real self and the core of their identity.  It is this stark description of rationalism that diffused through Western philosophy until bumping heads with modern existentialist thought.

Aristotle’s rationalism even permeated the Christianity of the Medieval period, where it maintained an albeit uneasy relationship between faith and reason as the center of the human personality.  And the quest for a complete view of the cosmos further propagated a valuing of reason as the highest human function.  The inherent problems with this view simply didn’t surface until some later thinkers began to see human existence and the potential of our reason as having a finite character with limitations.  Only then was the grandiose dream of having a complete knowledge of the universe and of our existence finally shattered.  Once this goal was seen as unattainable, we were left alone on an endless road of knowledge with no prospects for any kind of satisfying conclusion.

We can certainly appreciate the value of theoretical knowledge, and even develop a passion for discovering it, but we mustn’t lose sight of the embodied, subjective qualities of our human nature; nor can we successfully argue any longer that the latter can be dismissed due to a goal of reaching some absolute state of knowledge or being.  That goal is not within our reach, and so trying to make arguments that rely on its possibility is nothing more than an exercise of futility.

So now we must ask ourselves a very important question:

“If man can no longer hold before his mind’s eye the prospect of the Great Chain of Being, a cosmos rationally ordered and accessible from top to bottom to reason, what goal can philosophers set themselves that can measure up to the greatness of that old Greek ideal of the bios theoretikos, the theoretical life, which has fashioned the destiny of Western man for millennia?”

I would argue that the most important goal for philosophy has been and always will be the quest for discovering as many moral facts as possible, such that we can attain eudaimonia, as Aristotle advocated.  But rather than holding a life of reason as the greatest good to aim for, we should simply aim for maximal life fulfillment and overall satisfaction with one’s life, without presupposing what will accomplish that.  We need to use science and subjective experience to inform us of what makes us happiest and most fulfilled (taking advantage of the psychological sciences), rather than making assumptions about what does this best and simply arguing from the armchair.

And because our happiness and life fulfillment are necessarily evaluated through subjectivity, we mustn’t make the same mistakes of positivism and simply discard our subjective experience.  Rather we should approach morality with the recognition that we are each an embodied subject with emotions, intuitions, and feelings that motivate our desires and our behavior.  But we should also ensure that we employ reason and rationality in our moral analysis, so that our irrational predispositions don’t lead us farther away from any objective moral truths waiting to be discovered.  I’m sympathetic to a quote of Kierkegaard’s that I mentioned at the beginning of this post:

“What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act.”

I agree with Kierkegaard here, in that moral imperatives are the most important topic in philosophy, and should be the most important driving force in one’s life, rather than simply a drive for knowledge for its own sake.  But of course, in order to get clear about what one must do, one first has to know a number of facts pertaining to how any moral imperatives are best accomplished, and what those moral imperatives ought to be (as well as which are most fundamental and basic).  I think the most basic axiomatic moral imperative within any moral system that is sufficiently motivating to follow is going to be that which maximizes one’s life fulfillment; that which maximizes one’s chance of achieving eudaimonia.  I can’t imagine any greater goal for humanity than that.

Here is the link for part 5 of this post series.

The Experientiality of Matter

If there’s one thing that Nietzsche advocated for, it was to eliminate dogmatism and absolutism from one’s thinking.  And although I generally agree with him here, I do think that there is one exception to this rule.  One thing that we can be absolutely certain of is our own conscious experience.  Nietzsche actually had something to say about this (in Beyond Good and Evil, which I explored in a previous post), where he responds to Descartes’ famous adage “cogito ergo sum” (the very adage associated with my blog!), and he basically says that Descartes’ conclusion (“I think therefore I am”) shows a lack of reflection concerning the meaning of “I think”.  He wonders how he (and by extension, Descartes) could possibly know for sure that he was doing the thinking rather than the thought doing the thinking, and he even considers the possibility that what is generally described as thinking may actually be something more like willing or feeling or something else entirely.

Despite Nietzsche’s criticism against Descartes in terms of what thought is exactly or who or what actually does the thinking, we still can’t deny that there is thought.  Perhaps if we replace “I think therefore I am” with something more like “I am conscious, therefore (my) conscious experience exists”, then we can retain some core of Descartes’ insight while throwing out the ambiguities or uncertainties associated with how exactly one interprets that conscious experience.

So I’m in agreement with Nietzsche in the sense that we can’t be certain of any particular interpretation of our conscious experience, including whether or not there is an ego or a self (which Nietzsche actually describes as a childish superstition similar to the idea of a soul), nor can we make any certain inferences about the world’s causal relations stemming from that conscious experience.  Regardless of these limitations, we can still be sure that conscious experience exists, even if it can’t be ascribed to an “I” or a “self” or any particular identity (let alone a persistent identity).

Once we’re cognizant of this certainty, and if we’re able to crawl out of the well of solipsism and eventually build up a theory about reality (for pragmatic reasons at the very least), then we must remain aware of the priority of consciousness in any resultant theory we construct about reality, with regard to its structure or any of its other properties.  Personally, I believe that some form of naturalistic physicalism (a realistic physicalism) is the best candidate for an ontological theory that is the most parsimonious and explanatory for all that we experience in our reality.  However, most people that make the move to adopt some brand of physicalism seem to throw the baby out with the bathwater (so to speak), whereby consciousness gets eliminated by assuming it’s an illusion or that it’s not physical (therefore having no room for it in a physicalist theory, aside from its neurophysiological attributes).

Although I used to feel differently about consciousness (and its relationship to physicalism), where I thought it was plausible for it to be some kind of an illusion, upon further reflection I’ve come to realize that this was a rather ridiculous position to hold.  Consciousness can’t be an illusion in the proper sense of the word, because the experience of consciousness is real.  Even if I’m hallucinating, such that my perceptions don’t directly correspond with the actual incoming sensory information transduced through my body’s sensory receptors, we can only say that the perceptions are illusory insofar as they don’t directly map onto that incoming sensory information.  But I still can’t say that having these experiences is itself an illusion.  And this is because consciousness is self-evident and experiential in that it constitutes whatever is experienced no matter what that experience consists of.

As for my thoughts on physicalism, I came to realize that positing consciousness as an intrinsic property of at least some kinds of physical material (analogous to a property like mass) allows us to avoid having to call consciousness non-physical.  If it is simply an experiential property of matter, that doesn’t negate its being a physical property of that matter.  It may be that we can’t access this property in such a way as to evaluate it with external instrumentation, like we can for all the other properties of matter that we know of such as mass, charge, spin, or what-have-you, but that doesn’t mean an experiential property should be off limits for any physicalist theory.  It’s just that most physicalists assume that everything can or has to be reducible to the externally accessible properties that our instrumentation can measure.  And this suggests that they’ve simply assumed that the physical can only include externally accessible properties of matter, rather than both internally and externally accessible properties of matter.

Now it’s easy to see why science might push philosophy in this direction because its methodology is largely grounded on third-party verification and a form of objectivity involving the ability to accurately quantify everything about a phenomenon with little or no regard for introspection or subjectivity.  And I think that this has caused many a philosopher to paint themselves into a corner by assuming that any ontological theory underlying the totality of our reality must be constrained in the same way that the physical sciences are.  To see why this is an unwarranted assumption, let’s consider a “black box” that can only be evaluated by a certain method externally.  It would be fallacious to conclude that just because we are unable to access the inside of the box, that the box must therefore be empty inside or that there can’t be anything substantially different inside the box compared to what is outside the box.

We can analogize this limitation of studying consciousness with our ability to study black holes within the field of astrophysics, where we’ve come to realize that accessing any information about their interior (aside from how much mass there is) is impossible to do from the outside.  And if we managed to access this information (if there is any) from the inside by leaping past its outer event horizon, it would be impossible for us to escape and share any of that information.  The best we can do is to learn what we can from the outside behavior of the black hole in terms of its interaction with surrounding matter and light and infer something about the inside, like how much matter it contains (e.g. we can infer the mass of a black hole from its outer surface area).  And we can learn a little bit more by considering what is needed to create or destroy a black hole, thus creating or destroying any interior qualities that may or may not exist.

A black hole can only form from certain configurations of matter, particularly aggregates that are above a certain mass and density.  And it can only be destroyed by starving it to death, by depriving it of any new matter, where it will slowly die by evaporating entirely into Hawking radiation, thus destroying anything that was on the inside in the process.  So we can infer that any internal qualities it does have, however inaccessible they may be, can be brought into and out of existence with certain physical processes.
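To make the “inferring from the outside” point concrete, here is a small numerical sketch using the standard Schwarzschild and Hawking formulas.  The physics here is textbook material; only its pairing with the consciousness analogy is mine, and the example values are illustrative:

```python
import math

# Physical constants (SI units)
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
k_B  = 1.381e-23   # Boltzmann constant, J/K

def mass_from_horizon_area(area):
    """Infer a Schwarzschild black hole's mass (kg) from its horizon
    area (m^2): A = 16*pi*G^2*M^2/c^4, solved for M."""
    return (c**2 / G) * math.sqrt(area / (16 * math.pi))

def hawking_temperature(mass):
    """Hawking temperature (K): T = hbar*c^3 / (8*pi*G*M*k_B)."""
    return hbar * c**3 / (8 * math.pi * G * mass * k_B)

def evaporation_time(mass):
    """Rough Hawking-evaporation time (s): t ~ 5120*pi*G^2*M^3/(hbar*c^4)."""
    return 5120 * math.pi * G**2 * mass**3 / (hbar * c**4)

# Example: a solar-mass black hole (~2e30 kg)
M_sun = 1.989e30
r_s   = 2 * G * M_sun / c**2         # Schwarzschild radius, ~2.95 km
A     = 4 * math.pi * r_s**2         # horizon area

print(mass_from_horizon_area(A))     # recovers ~1.989e30 kg
print(hawking_temperature(M_sun))    # ~6e-8 K (colder than the CMB)
print(evaporation_time(M_sun))       # ~2e67 years, expressed in seconds
```

Everything printed here is recovered purely from “outside” quantities (horizon area and mass), which is exactly the sense in which the interior remains a black box.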

Similarly, we can infer some things about consciousness by observing one’s external behavior, including inferring some conditions that can create, modify, or destroy that type of consciousness, but we are unable to know what it’s like to be on the inside of that system once it exists.  We’re only able to know about the inside of our own conscious system, where we are in some sense inside our own black hole with nobody else able to access this perspective.  And I think it is easy enough to imagine that certain configurations of matter simply have an intrinsic, externally inaccessible experiential property, just as certain configurations of matter lead to the creation of a black hole with its own externally inaccessible and qualitatively unknown internal properties.  Despite the fact that we can’t access the black hole’s interior with a strictly external method, to determine its internal properties, this doesn’t mean we should assume that whatever properties may exist inside it are therefore fundamentally non-physical; just as we wouldn’t consider alternate dimensions (such as those predicted in M-theory/string theory) that we can’t physically access to be non-physical.  Perhaps one or more of these inaccessible dimensions (if they exist) is what accounts for an intrinsic experiential property within matter (though this is entirely speculative and need not be true for the previous points to hold; it’s an interesting thought nevertheless).

Here’s a relevant quote from the philosopher Galen Strawson, where he outlines what physicalism actually entails:

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an ‘inference to the best explanation’… Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. ‘And were the inmost essence of things laid open to us’ I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism… So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of ‘substance dualism’… Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?
While I don’t believe that all matter has a mind per se, because a mind is generally conceived of as a complex information processing structure, I think it is likely that all matter has an experiential quality of some kind, even if most instantiations of it are entirely unrecognizable as what we’d generally consider “consciousness” or “mentality”.  I believe the intuitive gap here is born from the fact that the only minds we are confident exist are those instantiated by a brain, which has the ability to make an incredibly large number of experiential discriminations between various causal relations, giving it a capacity that is incredibly complex and not observed anywhere else in nature.  On the other hand, a particle of matter on its own would be hypothesized to have the capacity to make only one or two of these kinds of discriminations, making it unintelligent and thus incapable of brain-like activity.  Once a person accepts that an experiential quality can come in varying degrees, from, say, one experiential “bit” to billions or trillions of “bits” (or more), then we can plausibly see how matter could give rise to systems with a vast range of causal power, from rocks that don’t appear to do much at all, to living organisms that can store countless representations of causal relations (memory), allowing them to behave in increasingly complex ways.  And perhaps best of all, this approach solves the mind-body problem by eliminating the mystery of how fundamentally non-experiential stuff could possibly give rise to experientiality and consciousness.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part VI)

This is the last post I’m going to write for this particular post-series on Predictive Processing (PP).  Here are the links to parts 1, 2, 3, 4, and 5.  I’ve already explored a bit of how a PP framework can account for folk psychological concepts like beliefs, desires, and emotions, how it accounts for action, language and ontology, knowledge, and also perception, imagination, and reasoning.  In this final post for the series, I’m going to explore consciousness itself and how some theories of consciousness fit very nicely within a PP framework.

Consciousness as Prediction (Predicting The Self)

Earlier in this post-series I explored how PP treats perception (whether online, or an offline form like imagination) as simply a set of predictions pertaining to incoming sensory information, with varying degrees of precision weighting assigned to the resulting prediction error.  In a sense, then, consciousness just is prediction.  At the very least, it is a subset of those predictions, likely the ones higher up in the predictive hierarchy.  This all depends on which aspect or level of consciousness we are trying to explain, and as philosophers and cognitive scientists well know, consciousness is difficult to pin down and define in any precise way.
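To make that core mechanic concrete, here is a minimal sketch of a precision-weighted prediction-error update (the function name and numbers are my own invention, not code from any PP implementation; real PP models are hierarchical and far more elaborate):

```python
def update_prediction(prediction, sensory_input, precision, lr=0.1):
    """Nudge a top-down prediction toward the input, scaled by precision.
    High precision -> the error is trusted and drives a large update;
    low precision -> the error is mostly discounted."""
    error = sensory_input - prediction   # bottom-up prediction error
    return prediction + lr * precision * error

print(update_prediction(0.0, 1.0, precision=1.0))  # 0.1: error taken seriously
print(update_prediction(0.0, 1.0, precision=0.1))  # 0.01: error discounted
```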

By consciousness, we could mean any kind of awareness at all, or we could limit the term to systems that are aware of themselves.  Either way, we have to be careful here if we’re looking to distinguish between consciousness generally speaking (consciousness in any form, which may be unrecognizable to us) and the unique kind of consciousness that we as human beings experience.  If an ant is conscious, it likely doesn’t have any of the richness that we have in our experience, nor is it likely to be self-aware the way we are (whereas a dolphin, with its large neocortex and prefrontal cortex, is far more likely to be).  So we have to keep these different levels of consciousness in mind in order to properly assess how any cognitive framework might explain them.

Looking through a PP lens, we can see that what we come to know about the world and our own bodily states is a matter of predictive models pertaining to various inferred causal relations.  These inferred causal relations ultimately stem from bottom-up sensory input.  But when this information is abstracted at higher and higher levels, one can (in principle) reach a point where those higher-level models begin to predict the existence of a unified prediction engine.  In other words, a subset of the highest-level predictive models may eventually come to predict itself as a self.  We might describe this process as the emergence of some level of self-awareness, even if higher levels of self-awareness aren’t possible until particular kinds of higher-level models have been generated.

What kinds of predictions might be involved in this kind of emergence?  Well, we might expect that predictions pertaining to our own autobiographical history, which is largely composed of episodic memories of past experience, would contribute to this process (e.g. “I remember when I went to that amusement park with my friend Mary, and we both vomited!”).  If we begin to infer what is common or continuous between those memories of past experiences (even if only a second in the past), we may discover that there is a form of psychological continuity or identity present.  And if this psychological continuity (this type of causal relation) coincides with an incredibly stable set of predictions pertaining to (especially internal) bodily states, then an embodied subject or self can plausibly emerge from it.

This emergence of an embodied self is also likely fueled by predictions pertaining to other objects that we infer to be subjects.  For instance, in order to develop a theory of mind about other people, that is, in order to predict how they will behave, we can’t simply model their external behavior as we would for something like a rock falling down a hill.  That can work up to a point, but as behaviors become more complex it’s no longer good enough.  Animal behavior, most especially that of humans, is far more complex than that of inanimate objects, and as such it is far more effective to infer internal hidden causes for that behavior.  Our predictions work well when they posit some kind of internal intentionality and goal-directedness operating within an animate object, thereby transforming that object into a subject.  And this should be no less true for how we model our own behavior.

If this object that we’ve now inferred to be a subject seems to behave in ways that we see ourselves as behaving (especially if it’s another human being), then we can begin to infer some kind of equivalence despite being separate subjects.  We can begin to infer that they too have beliefs, desires, and emotions, and thus that they have an internal perspective that we can’t directly access, just as they aren’t able to access ours.  And we can also come to see ourselves from a different perspective, based on how we see those other subjects from an external perspective.  Since I can’t easily see myself from an external perspective, when I look at others and infer that we are similar kinds of beings, I can begin to see myself as having both an internal and an external side.  I can begin to infer that others see me much the way I see them, further adding to my concept of self and the boundaries that define that self.

Multiple meta-cognitive predictions can be built from all of these interactions with others, and from introspective interactions with our brain’s own models.  Once this happens, a cognitive agent like ourselves can begin to think about thinking, and think about being a thinking being, and so on.  All of these cognitive moves would seem to provide varying degrees of selfhood or self-awareness, and they can all be thought of as the brain’s best guesses at accounting for its own behavior.  In any case, it seems that the level of consciousness intrinsic to an agent’s experience will depend on what kinds of higher-level models and meta-models are operating within the agent.

Consciousness as Integrated Information

One prominent theory of consciousness is the Integrated Information Theory of Consciousness, otherwise known as IIT.  This theory, initially formulated by Giulio Tononi in 2004 and developed ever since, posits that consciousness ultimately depends on the degree of information integration inherent in the causal properties of a system.  Another way of saying this is that the specified causal system is unified such that every part of the system can affect and be affected by the rest of the system.  If you were to physically isolate one part of a system from the rest, choosing the partition that makes the least difference to the system (the so-called minimum partition), then the resulting change in the cause-effect structure of the system would quantify its degree of integration.  A large change in the cause-effect structure upon introducing the minimum partition implies a high degree of information integration, and vice versa.  And again, a high degree of integration implies a high degree of consciousness.
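As a very rough illustration of the partitioning idea, here’s a toy sketch of my own (hugely simplified; this is not the actual Φ calculation, which involves probability distributions over cause-effect repertoires) that counts how many of a small recurrent network’s state transitions change when one unit is cut off from the rest:

```python
from itertools import product

def step(state, weights, cut_unit=None):
    """Advance binary threshold units one tick; weights[i][j] is the
    influence of unit j on unit i.  If cut_unit is given, all connections
    crossing the partition {cut_unit} vs. the rest are severed."""
    n = len(state)
    nxt = []
    for i in range(n):
        total = 0
        for j in range(n):
            if cut_unit is not None and (i == cut_unit) != (j == cut_unit):
                continue  # severed cross-partition connection
            total += weights[i][j] * state[j]
        nxt.append(1 if total >= 1 else 0)
    return tuple(nxt)

weights = [[0, 1, 1],
           [1, 0, 1],
           [1, 1, 0]]  # fully recurrent: every unit affects every other

changed = sum(step(s, weights) != step(s, weights, cut_unit=0)
              for s in product([0, 1], repeat=3))
print(f"{changed} of 8 transitions change when unit 0 is partitioned off")
```

Real Φ is computed over the system’s full cause-effect structure, but the intuition is the same: the more a partition disrupts, the more integrated the system.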

Notice how this information integration axiom in IIT implies that a cognitive system that is entirely feed-forward will not be conscious.  So if our brain processed incoming sensory information purely from the bottom up, with no top-down generative model feeding back down through the system, then IIT would predict that our brain couldn’t produce consciousness.  PP, on the other hand, posits a feedback system (as opposed to a feed-forward one), where the bottom-up sensory information flowing upward is met with a downward flow of top-down predictions trying to explain away that sensory information.  The brain’s predictions change the resulting prediction error, and this prediction error serves as feedback that modifies the brain’s predictions.  Thus, a cognitive architecture like the one suggested by PP is predicted to produce consciousness according to this fundamental axiom of IIT.
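Using the toy update rule sketched earlier, that feedback loop looks something like this (again, illustrative numbers of my own):

```python
# Top-down prediction repeatedly revised by bottom-up error until the
# error has been "explained away" (a toy loop, not a real PP model).
prediction, sensory_input, precision, lr = 0.0, 1.0, 0.8, 0.1
for _ in range(100):
    error = sensory_input - prediction    # bottom-up: what's unexplained
    prediction += lr * precision * error  # top-down: revise the model
print(round(prediction, 3))  # ~1.0: the input is now fully predicted
```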

Additionally, PP posits cross-modal sensory integration, where the brain unifies (especially lower-level) predictions spanning various spatio-temporal scales and different sensory modalities into a single whole.  For example, if I am looking at and petting a black cat lying on my lap while hearing it purr, PP posits that my perceptual experience, and my contextual understanding of that experience, are based on integrating the visual, tactile, and auditory expectations I’ve come to associate with a “black cat”.  The unified experience is contingent on a conjunction of predictions occurring simultaneously; otherwise we’d be left with a barrage of millions or billions of separate causal relations (many stemming from different sensory modalities), or millions or billions of separate conscious experiences, which, occurring at the same time, would seem to necessitate separate consciousnesses.
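A standard way to model this sort of cross-modal combination, borrowed from Bayesian cue-integration models rather than from any specific PP implementation, is to fuse each modality’s estimate in proportion to its precision (inverse variance).  A sketch with made-up numbers:

```python
def fuse_cues(estimates):
    """Precision-weighted fusion of independent Gaussian cues.
    estimates: list of (mean, variance) pairs, one per modality."""
    precisions = [1.0 / var for _, var in estimates]
    fused_mean = (sum(m * p for (m, _), p in zip(estimates, precisions))
                  / sum(precisions))
    fused_var = 1.0 / sum(precisions)  # the fused estimate is more certain
    return fused_mean, fused_var

# Hypothetical visual, tactile, and auditory estimates of the cat's position:
print(fuse_cues([(0.00, 0.5), (0.20, 1.0), (-0.10, 2.0)]))
```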

Evolution of Consciousness

Since IIT identifies consciousness with integrated information, it can plausibly account for why consciousness evolved in the first place.  The basic idea is that a brain capable of integrating information is more likely to exploit and understand an environment with a complex causal structure on multiple time scales than a brain made of informationally isolated modules.  This idea has been tested and confirmed to some degree in artificial life simulations (animats) where adaptation and integration are both measured.  The organisms in these simulations were Braitenberg-like vehicles that had to move through a maze.  After 60,000 generations of simulated brains evolving through natural selection, a monotonic relationship was found between their ability to get through the maze and the amount of simulated information integration in their brains.

This increase in adaptation was the result of an effective increase in the number of concepts that the organism could make use of given the limited number of elements and connections possible in its cognitive architecture.  In other words, given a limited number of connections in a causal system (such as a group of neurons), you can pack more functions per element if the level of integration with respect to those connections is high, giving an evolutionary advantage to those with higher integration.  Therefore, when all else is equal in terms of neural economy and resources, higher integration gives an organism the ability to take advantage of more regularities in its environment.

From a PP perspective, this makes perfect sense, because the complex causal structure of the environment is described as being modeled at many different levels of abstraction and at many different spatio-temporal scales.  All of these modeled causal relations are also described as having a hierarchical structure, with models contained within models and many associations existing between various models.  These associations between models can be accounted for by a cognitive architecture that re-uses certain sets of neurons across multiple models, so an association is effectively instantiated by some literal degree of neuronal overlap.  And of course, these associations between multi-leveled predictions allow the brain to exploit (and create!) as many regularities in the environment as possible.  In short, both PP and IIT make a lot of practical sense from an evolutionary perspective.

That’s All Folks!

And this concludes my post-series on the Predictive Processing (PP) framework and how I see it applying to a far broader account of mentality and brain function than is generally assumed.  If there are any takeaways from this post-series, I hope you can at least appreciate the parsimony and explanatory scope of predictive processing, and of viewing the brain as a creative and highly capable prediction engine.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part V)

In the previous post, part 4 in this series on Predictive Processing (PP), I explored some aspects of reasoning and how different forms of reasoning can be built from a foundational bedrock of Bayesian inference (click here for parts 1, 2, or 3).  This has a lot to do with language, but I also claimed that it depends on how the brain generates new models, which I think likely involves some kind of natural selection operating on neural networks.  The hierarchical structure of the generative models described within a PP framework also seems to fit well with the hierarchical structure we find in the brain’s neural networks.  In this post, I’m going to talk about the relation between memory, imagination, and unconscious and conscious forms of reasoning.

Memory, Imagination, and Reasoning

Memory is, of course, crucial to the PP framework, whether for constructing real-time predictions of incoming sensory information (for perception) or for long-term predictions involving high-level, increasingly abstract generative models that allow us to accomplish complex future goals (like planning to go grocery shopping, or planning for retirement).  Either case requires the brain to have stored some kind of information pertaining to predicted causal relations.  Rather than memories being an exact copy of past experiences (stored like data on a computer), research has shown that memory functions more like a reconstruction of those past experiences, one modified by current knowledge and context and produced by some of the same faculties used in imagination.

This accounts for the false or erroneous aspects of our memories, where the recalled memory can differ substantially from how the original event was experienced.  It also accounts for why our memories become increasingly altered as time passes.  Over time we learn new things, continuing to change many of our predictive models about the world, and thus the reconstructive process becomes more involved the older the memories are.  And the context we find ourselves in when trying to recall certain memories further affects this reconstruction, adapting our memories in some sense to better match what we find most salient and relevant in the present moment.

Conscious vs. Unconscious Processing & Intuitive Reasoning (Intuition)

Another attribute of memory is that it is primarily unconscious: we seem to have a pool of information that is kept out of consciousness until parts of it are needed (during memory recall or other conscious thought processes).  In fact, within the PP framework we can think of most of our generative models (predictions), especially those operating at the lower levels of the hierarchy, as being outside our conscious awareness as well.  However, since our memories are composed of (or reconstructed with) many higher-level predictions, and since only a limited number of them can enter conscious awareness at any moment, this implies that most of the higher-level predictions are also maintained and processed unconsciously.

It’s worth noting, however, that when we first formed these memories, a lot of the information was in our consciousness (the higher-level, more abstract predictions in particular).  Within PP, consciousness plays a special role, since our attention modifies what is called the precision weight (or synaptic gain) on any prediction error that flows upward through the predictive hierarchy.  This means that the prediction errors produced from incoming sensory information, or at even higher levels of processing, are able to have a greater impact on modifying and updating the predictive models.  This makes sense from an evolutionary perspective, where we can ration our cognitive resources in a more adaptable way by allowing the things that catch our attention (which may be more important to our survival prospects) to have the greatest effect on how we understand the world around us and how we need to act at any given moment.

The more often we consciously encounter certain predicted causal relations, the more likely those predictions are to become automated, that is, unconsciously processed.  And if this has happened with certain rules of inference that govern how we manipulate and process many of our predictive models, it seems reasonable to suspect that this contributes to what we call intuitive reasoning (or intuition).  After all, intuition gives people the sense of knowing something without knowing how that knowledge was acquired and without any present, conscious process of reasoning.

This is similar to muscle memory or procedural memory (like learning how to ride a bike), which is consciously processed at first (thus involving many parts of the cerebral cortex) but which, after enough repetition, becomes a faster, more automated process accomplished more economically and efficiently by the basal ganglia and cerebellum, parts of the brain believed to handle a great deal of unconscious processing like that needed for procedural memory.  This would mean that the predictions associated with these kinds of causal relations begin to function outside of our consciousness, even if the same predictive strategy is still in place.

As mentioned above, one difference between this unconscious intuition and the forms of reasoning that operate within the purview of consciousness is that our intuitions are less likely to be updated or changed by new experiential evidence, since our conscious attention isn’t involved in the updating process.  This means that the upward-flowing prediction errors that encounter downward-flowing predictions operating unconsciously will carry little precision weight, and so have little impact in updating those predictions.  Furthermore, the fact that the most automated predictions are often those we’ve been using for most of our lives means they are also likely to have extremely high Bayesian priors, further isolating them from modification.
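A tiny Bayesian sketch shows how a strong prior resists modification (my own toy numbers, using a textbook Beta-Bernoulli update rather than anything PP-specific):

```python
def posterior_mean(prior_a, prior_b, successes, failures):
    """Posterior mean of a Beta-Bernoulli model after new evidence."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# A weak prior and a lifetime-strength prior, each confronted with the
# same ten contradicting observations:
print(posterior_mean(2, 2, 0, 10))        # ~0.14  -> belief shifts a lot
print(posterior_mean(2000, 2000, 0, 10))  # ~0.499 -> belief barely moves
```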

Some of these priors may become what are called hyperpriors, or priors over priors (many of which are believed to be established early in life), where there may be nothing that can overcome them, because they describe an extremely abstract feature of the world.  One possible example is a hyperprior that demands the brain settle on a single generative model even when several comparable models are under consideration.  One could call this a “tie-breaker” hyperprior: if the brain didn’t have this kind of predictive mechanism in place, it might never settle on a model, causing it to see the world (or some aspect of it) as a superposition of equiprobable states rather than one determinate state.  It’s easy to see the problem for an organism’s survival prospects if it lacked this kind of hyperprior.  Whether such a hyperprior is a form of innate specificity, or is acquired in early learning, is debatable.

An obvious trade-off with intuition (or any kind of innate bias) is that it provides us with fast, automated predictions that are robust and likely to be reliable much of the time, but at the expense of not being able to adequately handle more novel or complex situations, thereby leading to fallacious inferences.  Our cognitive biases are also likely related to this kind of unconscious reasoning, whereby evolution has selected cognitive strategies that work well for the kind of environment we evolved in (African savanna, jungle, etc.), even at the expense of our not being able to adapt as well culturally or in very artificial situations.

Imagination vs. Perception

One large benefit of storing so much perceptual information in our memories (predictive models at different spatio-temporal scales) is our ability to re-create it offline (so to speak).  This is where imagination comes in: we are able to effectively simulate perceptions without requiring a stream of incoming sensory data to match them.  Notice, however, that this is still a form of perception, because we can still see, hear, feel, taste, and smell predicted causal relations that have been inferred from past sensory experiences.

The crucial difference, within a PP framework, is the role of precision weighting on the prediction error, just as we saw above with updating intuitions.  If the precision weighting for a particular set of predictive models is set or adjusted to be relatively low, then prediction error will have little, if any, impact on those models.  During imagination, we effectively decouple the bottom-up prediction error from the top-down predictions associated with our sensory cortex (by reducing the precision weighting of the prediction error), thus allowing us to intentionally perceive things that aren’t actually in the external world.  We need not decouple the error from the predictions entirely, as we may want our imagination to correlate with what we’re actually perceiving in the external world.  For example, maybe I want to watch a car driving down the street and simply imagine it being a different color, while still seeing the rest of the scene as I normally would.  In general though, it is this decoupling “knob” that we can turn (precision weighting) that underlies our ability to produce, and discriminate between, normal perception and imagination.
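Returning to the toy update rule from above, imagination amounts to turning the gain down toward zero (illustrative numbers of my own):

```python
def settle(prediction, sensory_input, gain, lr=0.1, steps=100):
    """Run the precision-weighted update loop with an attention 'knob'.
    gain ~ 1 -> perception: the prediction is pulled to the input.
    gain ~ 0 -> imagination: the prediction runs free of the input."""
    for _ in range(steps):
        prediction += lr * gain * (sensory_input - prediction)
    return prediction

print(settle(prediction=5.0, sensory_input=1.0, gain=1.0))  # -> ~1.0
print(settle(prediction=5.0, sensory_input=1.0, gain=0.0))  # stays at 5.0
```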

So what happens when we lose the ability to control our perception in the normal way (whether consciously or not)?  This usually results in some kind of hallucination.  Since perception is often referred to (within PP) as a form of controlled hallucination, we could describe a pathological hallucination (such as one arising from certain psychedelic drugs or a condition like schizophrenia) as a form of uncontrolled hallucination.  In some cases, even with a perfectly healthy brain, when the prediction error simply can’t be minimized enough, or the brain keeps switching between models based on what we’re looking at, we experience perceptual illusions.

Whether it’s illusions, hallucinations, or some other kind of perceptual pathology (like the inability to recognize faces), PP offers a good explanation for why these kinds of experiences can happen to us.  Either the models are poor (in their causal structure or priors), or something isn’t being controlled properly, like the delicate balance between precision weighting and prediction error, any of which could result from an imbalance in neurotransmitters or some kind of brain damage.

Imagination & Conscious Reasoning

While most people tend to define imagination as that which pertains to visual imagery, I prefer to classify all conscious experiences that do not directly result from online perception as imagination.  In other words, any part of our conscious experience that doesn’t stem from an immediate inference of incoming sensory information is what I consider to be imagination.  This is because any kind of conscious thinking involves an experience that could, in theory, be re-created by an artificial stream of incoming sensory information (along with the top-down generative models that put that information into a particular context of understanding).  As long as the incoming sensory information were arranged in the right way (any way that we can imagine!), even if it could never be that way in the actual external world we live in, it seems to me that it could reproduce any conscious process, given the right top-down predictive model.  Another way of saying this is that imagination is simply another word for any kind of offline conscious mental simulation.

This also means that I’d classify any and all kinds of conscious reasoning processes as yet another form of imagination.  Just as with more standard conceptions of imagination (within PP at least), we are taking particular predictive models and manipulating them in certain ways in order to simulate some result, with this process decoupled (at least in part) from actual incoming sensory information.  We may, for example, apply a rule of inference that we’ve picked up on and manipulate several predictive models of causal relations using that rule.  As mentioned in the previous post and in part 2 of this series, language is likely to play a special role here, where we use it to help guide this conceptual manipulation process by organizing and further representing the causal relations in linguistic form, and then determining the resulting inference (which will more than likely be in linguistic form as well).  In doing so, we are able to take highly abstract properties of causal relations and apply rules to them to extract new information.

If I imagine a purple elephant trumpeting and flying in the air over my house, even though I’ve never experienced such a thing, it seems clear that I’m manipulating several different types of predicted causal relations at varying levels of abstraction and experiencing the result of that manipulation.  This involves inferred causal relations like those pertaining to the visual aspects of elephants, the color purple, flying objects, motion in general, houses, and the air, as well as inferred causal relations pertaining to auditory aspects like trumpeting sounds, and so forth.

Specific instances of these kinds of experienced causal relations have led me to infer each of them as an abstract, probabilistically-defined property (e.g. elephantness, purpleness, flyingness, etc.) that can be reused and modified to some degree to produce an infinite number of possible re-created perceptual scenes.  These may not be physically possible scenes (since elephants don’t have wings to fly, for example), but regardless, I’m able to add, subtract, mix and match, and ultimately manipulate properties in countless ways, limited really only by what is logically possible (so I can’t imagine what a square circle would look like).
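In programming terms, this mixing and matching of abstracted properties might look something like the following (a playful sketch of my own, not a cognitive model):

```python
# Reusable abstract "properties" inferred from past experience...
elephant = {"shape": "elephant", "sound": "trumpeting", "size": "huge"}
purple = {"color": "purple"}
flying = {"motion": "flying through the air"}

# ...composed into a scene never actually perceived:
imagined_scene = {**elephant, **purple, **flying}
print(imagined_scene)
```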

What if I’m performing a mathematical calculation, like “adding 9 + 9”, or some similar problem?  At first glance, this appears qualitatively very different from imagining the things we tend to perceive in the world, like elephants, books, and music, even when they are imagined in some phantasmagorical way.  As crazy as those imagined things may be, they still contain shapes, colors, sounds, etc., and a mathematical calculation seems to lack all of this.  I think the key realization here is that the fundamental process of imagination is the ability to add, subtract, and manipulate abstract properties in any way that is logically possible (given our current set of predictive models).  This means we can imagine properties or abstractions that lack all the richness of a typical visual/auditory perceptual scene.

In the case of a mathematical calculation, I would be manipulating previously acquired predicted causal relations pertaining to quantity and changes in quantity.  Once I was old enough to infer that separate objects existed in the world, I could infer an abstraction of how many objects there were in some space at some particular time.  Eventually, I could abstract the property of how many without applying it to any particular object at all.  Using language to associate a linguistic symbol with each specific quantity would lay the groundwork for a system of “numbers” (where numbers are just quantities pertaining to no particular object at all).  Once this was done, my brain could take the abstraction of quantity and manipulate it by following certain inferred rules for how quantities change when added to or subtracted from.  After some practice and experience, I would be in a reasonable position to consciously think about “adding 9 + 9”, either by following a manual, iterative rule of addition that I’ve learned to perform with real or imagined visual objects (like adding up some number of apples, or dots in a row or grid), or by simply using a memorized addition table and recalling the sum I’m interested in (9 + 9 = 18).
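Both strategies are easy to caricature in code (another toy sketch of my own):

```python
def add_by_counting(a, b):
    """The manual, iterative rule: count up from a, one step at a time,
    the way a child might add up apples or dots in a grid."""
    total = a
    for _ in range(b):
        total += 1
    return total

# The memorized addition table: precomputed sums, recalled directly.
addition_table = {(i, j): i + j for i in range(10) for j in range(10)}

print(add_by_counting(9, 9))   # 18, derived step by step
print(addition_table[(9, 9)])  # 18, recalled "from memory"
```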

Whether I’m imagining a purple elephant, mentally adding up numbers, thinking about what I’m going to say to my wife when I see her next, or trying to explicitly apply logical rules to some set of concepts, all of these forms of conscious thought or reasoning are just different sets of predictive models that I’m manipulating in mental simulation until I arrive at a perception that’s understood in the desired context and that has minimal prediction error.

Putting it all together

In summary, I think we can gain a lot of insight by looking at all the different aspects of brain function through a PP lens.  Imagination, perception, memory, intuition, and conscious reasoning fit together very well when viewed as different aspects of hierarchical predictive models that are manipulated and altered in ways that give us a much firmer grip on the world we live in and its inferred causal structure.  Not only that, but this kind of cognitive architecture also provides us with enormous potential for creativity and intelligence.  In the next post in this series, I’m going to talk about consciousness, specifically theories of consciousness and how they may be viewed through a PP framework.