“Black Mirror” Reflections: U.S.S. Callister (S4, E1)

This Black Mirror reflection explores season 4, episode 1, “U.S.S. Callister”.  My last Black Mirror reflection covered season 3, episode 2, “Playtest”.  In U.S.S. Callister, we’re pulled into the life of Robert Daly (Jesse Plemons), the Chief Technical Officer at Callister Inc., a game development company that has produced a multiplayer simulated-reality game called Infinity.  Within this game, users control a starship (an obvious homage to Star Trek).  Daly, the brilliant programmer behind this revolutionary game, also keeps his own offline version, modded to look like his favorite TV show, Space Fleet, in which Daly is the Captain of the ship.

We quickly learn that most of the employees at Callister Inc. don’t treat Daly very kindly, including his company’s co-founder James Walton (Jimmi Simpson).  Daly appears to be an overly passive, shy introvert.  During one of Daly’s offline gaming sessions at home, we discover that the Space Fleet characters aboard the starship look just like his fellow employees, and as Captain of his simulated crew, he indulges in berating them all.  Because Daly is Captain and effectively controls the game, he is rendered nearly omnipotent, able to force his crew to perpetually bend to his will, lest they suffer immensely.

It turns out that these characters in the game are actually conscious: Daly surreptitiously acquired his co-workers’ DNA and somehow replicated their consciousness and memories, uploading them into the game (and for any biologists or neurologists out there, let’s just forget for the moment that DNA isn’t complex enough to store this kind of neurological information).  At some point, a wrench is thrown into Daly’s deviant exploits when a new co-worker, programmer Nanette Cole (Cristin Milioti), is added to his game and manages to turn the crew against him.  Once the crew finds a backdoor means of communicating with the world outside the game, Daly’s world is turned upside down in a relatively satisfying ending chock-full of poetic justice, as his digitized, enslaved crew members manage to escape while he becomes trapped inside his own game as it’s being shut down and destroyed.


This episode is rife with a number of moral issues that build on one another, all deeply coupled with the ability to engineer a simulated reality (perceptual augmentation).  Virtual worlds carry a level of freedom that just isn’t possible in the real world, where one can behave in countless ways with little or no consequence, whether acting with beneficence or utter malice.  One can violate physical laws as well as prescriptive laws, opening up a new world of possibilities that are free to evolve without the feedback of social norms, legal statutes, and law enforcement.

People have long known about various ways of escaping social and legal norms through fiction and game playing, where one can imagine they are somebody else, living in a different time and place, and behave in ways they’d never even think of doing in the real world.

But what happens when they’re finished with the game and go back to the real world with all its consequences, social norms and expectations?  Doesn’t it seem likely that at least some of the behaviors cultivated in the virtual world will begin to rear their ugly heads in the real world?  One can plausibly argue that violent game playing is simply a form of psychological sublimation, where we release many of our irrational and violent impulses in a way that’s more or less socially acceptable.  But there’s also bound to be a difference between playing a very abstract game involving violence or murder, such as the classic board-game Clue, and playing a virtual reality game where your perceptions are as realistic as can be and you choose to murder some other character in cold blood.

Clearly in this episode, Daly was using the simulated reality as a means of releasing his anger and frustration, by taking it out on reproductions of his own co-workers.  And while a simulated experiential alternative could be healthy in some cases, in terms of its therapeutic benefit and better control over the consequences of the simulated interaction, we can see that Daly took advantage of his creative freedom, and wielded it to effectively fashion a parallel universe where he was free to become a psychopath.

It would already be troubling enough if Daly behaved as he did toward virtual characters that were not actually conscious, because it would still show a Daly who pretends they are conscious, a man who wants them to be conscious.  But the fact that Daly knows they are conscious makes him that much more sadistic.  He is effectively in the position of a god, given his powers over the simulated world and every conscious being trapped within it, and he has used these powers to generate a living hell (thereby also illustrating the technology’s potential to, perhaps one day, generate a living heaven).  But unlike the hell we hear about in myths and religious fables, this is an actual hell, where a person can suffer the worst fates imaginable (limited, in fact, only by the programmer’s imagination), such as experiencing the feeling of suffocation, yet being unable to die and thus with no end in sight.  And since time is relative, in this case based on the ratio of real time to simulated time (or the ratio between one simulated time and another), a character consciously suffering in the game could feel as if they’ve been suffering for months, when the god-like player has only felt several seconds pass.  We’ve never had to morally evaluate these kinds of situations before, and we’re getting to a point where it’ll be imperative for us to do so.
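The time-ratio point can be made concrete with a toy calculation.  This is only an illustrative sketch: the compression ratio is a hypothetical parameter of my own, not anything the episode specifies.

```python
# Toy model of relative time between a simulated world and the real one.
# The compression ratio is hypothetical -- the episode never says how fast
# Daly's modded game runs relative to real time.

def simulated_duration(real_seconds: float, compression_ratio: float) -> float:
    """Seconds experienced inside the simulation, given how many
    simulated seconds elapse per real second."""
    return real_seconds * compression_ratio

# If the simulation ran a million times faster than real time, ten real
# seconds would feel like roughly four months to a digital crew member.
felt_seconds = simulated_duration(10, 1_000_000)
felt_months = felt_seconds / (60 * 60 * 24 * 30)
print(f"{felt_seconds:,.0f} simulated seconds is about {felt_months:.1f} months")
```

Even a modest ratio makes the moral stakes enormous: a punishment the player experiences as momentary can be subjectively open-ended for the being inside the simulation.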

Someday, it’s very likely that we’ll be able to create an artificial form of intelligence that is conscious, and it’s up to us to initiate and maintain a public conversation that addresses how our ethical and moral systems will need to accommodate new forms of conscious moral agents.


We’ll also need to figure out how to incorporate a potentially superhuman level of consciousness into our moral frameworks, since these frameworks often have an internal hierarchy that is largely based on the degree or level of consciousness that we ascribe to other individuals and to other animals.  If we give moral preference to a dog over a worm, and preference to a human over a dog (for example), then where would a being with superhuman consciousness fit within that framework?  Most people certainly wouldn’t want these beings to be treated as gods, but we shouldn’t want to treat them like slaves either.  If nothing else, they’ll need to be treated like people.

Technologies will almost always have that dual potential, where they can be used for achieving truly admirable goals and to enhance human well being, or used to dominate others and to exacerbate human suffering.  So we need to ask ourselves, given a future world where we have the capacity to make any simulated reality we desire, what kind of world do we want?  What kind of world should we want?  And what kind of person do you want to be in that world?  Answering these questions should say a lot about our moral qualities, serving as a kind of window into the soul of each and every one of us.


“Black Mirror” Reflections: Playtest (S3, E2)


Black Mirror, the British science-fiction anthology series created by Charlie Brooker, does a pretty good job experimenting with a number of illuminating concepts that highlight how modern culture is becoming increasingly shaped by (and vulnerable to) various technological advances and changes.  I’ve been interested in a number of these concepts for many years now, not least because of the many important philosophical implications (including a number of moral issues) that they point to.  I’ve decided to start a series of blog posts exploring these episodes.  My intention with this series, which I’m calling “Black Mirror” Reflections, is to highlight some of my own takeaways from some of my favorite episodes.

I’d like to begin with season 3, episode 2, “Playtest”.  I’m just going to give a brief summary here, as full episode summaries are readily available elsewhere.  In this episode, Cooper (Wyatt Russell) decides to travel around the world, presumably as a means of dealing with the recent death of his father, who died from early-onset Alzheimer’s.  After finding his way to London, his last destination before planning to return home to America, he runs into a problem with his credit card and bank account where he can’t access the money he needs to buy his plane ticket.

While he waits for his bank to fix the problem with his account, Cooper decides to earn some cash using an “Oddjobs” app, which provides him with a number of short-term job listings in the area, eventually leading him to “SaitoGemu,” a video game company looking for game testers to try a new kind of personalized horror game involving a (seemingly) minimally invasive brain implant procedure.  He briefly hesitates but, desperate for money and reasonably confident in its safety, he eventually consents to the procedure whereby the implant is intended to wire itself into the gamer’s brain, resulting in a form of perceptual augmentation and a semi-illusory reality.


The implant is designed to (among other things) scan your memories and learn what your worst fears are, in order to integrate these into the augmented perceptions, producing a truly individualized and maximally frightening horror game experience.  Needless to say, at some point Cooper begins to lose track of what’s real and what’s illusory, and due to a malfunction, he’s unable to exit the game; he ends up going mad and eventually dying as a result of the implant unpredictably overtaking (and effectively frying) his brain.


There are a lot of interesting conceptual threads in this story, and the idea of perceptual augmentation is a particularly interesting theme that finds its way into a number of other Black Mirror episodes.  While holographic and VR-headset gaming technologies can produce their own form of augmented reality, perceptual augmentation carried out on a neurological level isn’t even in the same ballpark, having qualitative features that are far more advanced and more akin to those found in the Wachowskis’ The Matrix trilogy or Paul Verhoeven’s Total Recall.  Once the user is unable to distinguish between the virtual world and the external reality, with the simulator having effectively passed a kind of graphical version of the Turing Test, then one’s previous notion of reality is effectively shattered.  To me, this technological idea is one of the most awe-inspiring (yet sobering) ideas within the Black Mirror series.

The inability to discriminate between the two worlds means that both worlds are, for all practical purposes, equally “real” to the person experiencing them.  And the fact that one can’t tell the difference between such worlds ought to give a person pause to re-evaluate what it even means for something to be real.  If you doubt this, then just imagine if you were to find out one day that your entire life has really been the result of a computer simulation, created and fabricated by some superior intelligence living in a layer of reality above your own (we might even think of this being as a “god”).  Would this realization suddenly make your remembered experiences imaginary and meaningless?  Or would your experiences remain just as “real” as they’ve always been, even if they now have to be reinterpreted within a context that grounds them in another layer of reality?

To answer this question honestly, we ought to first realize that we’re likely fully confident that what we’re experiencing right now is reality, is real, is authentic, and is meaningful (just as Cooper was at some point in his gaming “adventure”).  And this seems to be at least a partial basis for how we define what is real, and how we differentiate the most vivid and qualitatively rich experiences from those we might call imaginary, illusory, or superficial.  If what we call reality is really just a set of perceptions, encompassing every conscious experience from the merely quotidian to those we deem to be extraordinary, would we really be justified in dismissing all of these experiences and their value to us if we were to discover that there’s a higher layer of reality, residing above the only reality we’ve ever known?

For millennia, humans have pondered whether the world is “really” the way we see it, with perhaps the most rigorous examination of this metaphysical question undertaken by the German philosopher Immanuel Kant, with his dichotomy of the phenomenon and the noumenon (i.e. the way we see the world, or something in the world, versus the way the world or thing “really is” in itself, independent of our perception).  Even if we assume the noumenon exists, we can never know anything about the thing in itself, because we are fundamentally limited by our own perceptual categories and the way we subjectively interpret the world.  Similarly, we can never know for certain whether or not we’re in a simulation.

Looking at the situation through this lens, we can then liken the question of how to (re)define reality within the context of a simulation with the question of how to (re)define reality within the context of a world as it really is in itself, independent of our perception.  Both the possibility of our being in a simulation and the possibility of our perceptions stemming from an unknowable noumenal world could be true (and would likely be unfalsifiable), and yet we still manage to use and maintain a relatively robust conception and understanding of reality.  This leads me to conclude that reality is ultimately defined by pragmatic considerations (mostly those pertaining to our ability to make successful predictions and achieve our goals), and thus the possibility of our one day learning about a new, higher level of reality should merely add to our existing conception of reality, rather than completely negating it, even if it turns out to be incomplete.

Another interesting concept in this episode involves the basic design of the individualized horror game itself, where a computer can read your thoughts and memories, and then surmise what your worst fears are.  This is a technological feat that is possible in principle, and one with far-reaching implications that concern our privacy, safety, and autonomy.  Just imagine if such a power were unleashed by corporations, or the governments owned by those corporations, to acquire whatever information they wanted from your mind, to find out how to most easily manipulate you in terms of what you buy, who you vote for, what you generally care about, or what you do any and every day of your life.  The Orwellian possibilities are endless.

Marketing firms (both corporatocratic and political) have already been making use of discoveries in psychology and neuroscience, finding new ways to more thoroughly exploit our cognitive biases to get us to believe and desire whatever will benefit them most.  If we add to this the future ability to read our thoughts and manipulate our perceptions (even if first implemented as a seemingly innocuous video game), this will establish a new means of mass surveillance, where we can each become a potential “camera” watching one another (a theme also highlighted in BM, S4E3: Crocodile), while simultaneously exposing our most private thoughts and transmitting them to some centralized database.  Once we reach these technological heights (it’s not a matter of if but when), depending on how they’re employed, we may find ourselves no longer having the freedom to lie or to keep a secret, nor any mental privacy whatsoever.

To be fair, we should realize that there are likely to be undeniable benefits in acquiring these capacities (perceptual augmentation and mind-reading), such as making virtual paradises with minimal resources; finding new ways of treating phobias, PTSD, and other pathologies; giving us the power of telepathy and superhuman intelligence by connecting our brains to the cloud; and giving us the ability to design error-proof lie detectors and other vast enhancements in maximizing personal security and reducing crime.  But there are also likely to be enormous losses in personal autonomy, as our available “choices” are increasingly produced and constrained by automated algorithms; there are likely to be losses in privacy, and increasing difficulties in ascertaining what is true and what isn’t, since our minds will be vulnerable to artificially generated perceptions created by entities and institutions that want to deceive us.

Although we’ll constantly need to be on the lookout for these kinds of potential dangers as they arise, in the end, we may find ourselves inadvertently allowing these technologies to creep into our lives, one consumer product at a time.

Irrational Man: An Analysis (Part 3, Chapter 9: Heidegger)

In my previous post of this series on William Barrett’s Irrational Man, I explored some of Nietzsche’s philosophy in more detail.  Now we’ll be taking a look at the work of Martin Heidegger.

Heidegger was perhaps the most complex of the existentialist philosophers, as his work is often interpreted in a number of different ways and he develops a lot of terminology and concepts that can be difficult to grasp.  There’s also a fundamental limit to fully engaging with his concepts unless one is fluent in German, due to what is otherwise lost in translation to English.

Phenomenology, or the study of the structures of our conscious experience, was a big part of Heidegger’s work and had a big influence on his distinguishing between what he called “calculative thinking” and “meditative thinking”, and Barrett alludes to this distinction when he first mentions Heidegger’s feelings on thought and reason.  Heidegger says:

“Thinking only begins at the point where we have come to know that Reason, glorified for centuries, is the most obstinate adversary of thinking.”

Now at first one might be confused by the assertion that reason is somehow against thinking, but if one understands that Heidegger is only referring to one of the two types of thinking outlined above, then we can begin to make sense of what he’s saying.  Calculative thinking is what he has in mind here: the kind of thinking that we use all the time for planning and investigating in order to accomplish some specific purpose and achieve our goals.  Meditative thinking, on the other hand, involves opening up one’s mind so that it isn’t consumed by a single perspective on an idea; it requires us not to limit our thinking to only one category of ideas, and to at least temporarily free our thinking from the kind of unity that makes everything seemingly fit together.

Meaning is more or less hidden behind calculative thinking, and thus meditative thinking is needed to help uncover that meaning.  Calculative thinking also involves a lot of external input from society and the cultural or technological constructs that our lives are embedded in, whereas meditative thinking is internal, prioritizing the self over the collective and creating a self-derived form of truth and meaning rather than one that is externally imposed on us.  By thinking about reason’s relation to what Heidegger calls meditative thinking in particular, we can understand why Heidegger would treat reason as an adversary to what he sees as a far more valuable form of “thinking”.

It would be mistaken however to interpret this as Heidegger being some kind of an irrationalist, because Heidegger still values thinking even though he redefines what kinds of thinking exist:

“Heidegger is not a rationalist, because reason operates by means of concepts, mental representations, and our existence eludes these.  But he is not an irrationalist either.  Irrationalism holds that feeling, or will, or instinct are more valuable and indeed more truthful than reason-as in fact, from the point of view of life itself, they are.  But irrationalism surrenders the field of thinking to rationalism and thereby secretly comes to share the assumptions of its enemy.  What is needed is a more fundamental kind of thinking that will cut under both opposites.”

Heidegger’s intention seems to involve carving out a space for thinking that connects to the most basic elements of our experience, and which connects to the grounding for all of our experience.  He thinks that we’ve almost completely submerged our lives in reason or calculative thinking and that this has caused us to lose our connection with what he calls Being, and this is analogous to Kierkegaard’s claim of reason having estranged us from faith:

“Both Kierkegaard and Nietzsche point up a profound dissociation, or split, that has taken place in the being of Western man, which is basically the conflict of reason with the whole man.  According to Kierkegaard, reason threatens to swallow up faith; Western man now stands at a crossroads forced to choose either to be religious or to fall into despair…Now, the estrangement from Being itself is Heidegger’s central theme.”

Where or when did this estrangement from our roots first come to be?  Many may be tempted to say it was coincident with the onset of modernity, but perhaps it happened long before that:

“Granted that modern man has torn himself up by his roots, might not the cause of this lie farther back in his past than he thinks?  Might it not, in fact, lie in the way in which he thinks about the most fundamental of all things, Being itself?”

At this point, it’s worth looking at Heidegger’s overall view of man, and see how it might be colored by his own upbringing in southern Germany:

“The picture of man that emerges from Heidegger’s pages is of an earth-bound, time-bound, radically finite creature-precisely the image of man we should expect from a peasant, in this case a peasant who has the whole history of Western philosophy at his fingertips.”

So Heidegger was in an interesting position by wanting to go back to one of the most basic questions ever asked in philosophy, but to do so with the knowledge of the philosophy that had been developed ever since, hopefully giving him a useful outline of what went wrong in that development.

1. Being

“He has never ceased from that single task, the “repetition” of the problem of Being: the standing face to face with Being as did the earliest Greeks.  And on the very first pages of Being and Time he tells us that this task involves nothing less than the destruction of the whole history of Western ontology-that is, of the way the West has thought about Being.”

Sometimes the only way to make progress in philosophy is through a paradigm shift of some kind, where the most basic assumptions underlying our theories of the world and our existence are called into question; and we may end up having to completely discard what we thought we knew and start over, in order to follow a new path and discover a new way of looking at the problem we first began with.

For Heidegger, the problem all comes down to how we look at beings versus being (or Being), and in particular how little attention we’ve given the latter:

“Now, it is Heidegger’s contention that the whole history of Western thought has shown an exclusive preoccupation with the first member of these pairs, with the thing-which-is, and has let the second, the to-be of what is, fall into oblivion.  Thus that part of philosophy which is supposed to deal with Being is traditionally called ontology-the science of the thing-which-is-and not einai-logy, which would be the study of the to-be of Being as opposed to beings…What it means is nothing less than this: that from the beginning the thought of Western man has been bound to things, to objects.”

And it’s the priority we’ve given to positing and categorizing various types of beings that has distracted us from really contemplating the grounding for any and all beings, Being itself.  It’s more or less been taken for granted, or seen as too tenuous or insubstantial to give any attention to.  We can certainly attribute our lack of attention toward Being, at least in part, to how we’ve understood it as contingent on the existence of particular objects:

“Once Being has been understood solely in terms of beings, things, it becomes the most general and empty of concepts: ‘The first object of the understanding,’ says St. Thomas Aquinas, ‘that which the intellect conceives when it conceives of anything.’ “

Aquinas seems to have pointed out the obvious here: we can’t easily think of the concept of Being without thinking of beings, and once we’ve thought of beings, we tend to think of their fact of being as not adding anything useful to our conception of the beings themselves (similar to Kant’s claim that existence is not a real predicate).  Heidegger wants to turn this common assumption on its head:

“Being is not an empty abstraction but something in which all of us are immersed up to our necks, and indeed over our heads.  We all understand the meaning in ordinary life of the word ‘is,’ though we are not called upon to give a conceptual explanation of it.  Our ordinary human life moves within a preconceptual understanding of Being, and it is this everyday understanding of Being in which we live, move, and have our Being that Heidegger wants to get at as a philosopher.”

Since we live our lives with an implicit understanding of Being and its permeating everything in our experience, it seems reasonable that we can turn our attention toward it and build up a more explicit conceptualization or understanding of it.  To do this carefully, without building in too many abstract assumptions or uncertain inferences, we need to make special use of phenomenology in this task.  Heidegger turns to Edmund Husserl’s work to get this project off the ground.

2. Phenomenology and Human Existence

“Instead of making intellectual speculations about the whole of reality, philosophy must turn, Husserl declared, to a pure description of what is…Heidegger accepts Husserl’s definition of phenomenology: he will attempt to describe, he says, and without any obscuring preconceptions, what human existence is.”

And this means that we need to try to analyze our experience without importing the plethora of assumptions and ideas that we’ve been taught about our existence.  We have to attempt a kind of phenomenological reduction, where we suspend our judgment about the way the world works, which will include leaving aside (at least temporarily) most if not all of science and its various theories pertaining to how reality is structured.

Heidegger also makes many references to language in his work and, perhaps sharing a common thread with Wittgenstein’s views, dissects language to get at what we really mean by certain words, revealing important nuances that are taken for granted and how our words and concepts constrain our thinking to some degree:

“Heidegger’s perpetual digging at words to get at their hidden nuggets of meaning is one of his most exciting facets…(The thing for itself) will reveal itself to us, he says, only if we do not attempt to coerce it into one of our ready-made conceptual strait jackets.”

Truth is another concept that needs to be reformulated for Heidegger, and he first points out how the Greek word for truth, aletheia, means “un-hiddenness” or “revelation”; so he believes that truth lies in finding out what has been hidden from us.  This goes against our usual views of what is meant by “truth”, where we most often think of it as a status ascribed to propositions that correspond to actual facts about the world.  Since propositions can’t exist without minds, modern conceptions of truth are limited in a number of ways, which Heidegger wants to find a way around:

“…truth is therefore, in modern usage, to be found in the mind when it has a correct judgment about what is the case.  The trouble with this view is that it cannot take account of other manifestations of truth.  For example, we speak of the “truth” of a work of art.  A work of art in which we find truth may actually have in it no propositions that are true in this literal sense.  The truth of a work of art is in its being a revelation, but that revelation does not consist in a statement or group of statements that are intellectually correct.  The momentous assertion that Heidegger makes is that truth does not reside primarily in the intellect, but that, on the contrary, intellectual truth is in fact a derivative of a more basic sense of truth.”

This more basic sense of truth will be explored more later on, but the important takeaway is that truth for Heidegger seems to involve the structure of our experience and of the beings within that experience; and it lends itself toward a kind of epistemological relativism, depending on how a human being interprets the world they’re interacting with.

On the other hand, if we think about truth as a matter of deciding what we should trust or doubt in terms of what we’re experiencing, we may wind up in a position of solipsism similar to Descartes’, where we come to doubt the “external world” altogether:

“…Descartes’ cogito ergo sum moment, is the point at which modern philosophy, and with it the modern epoch, begins: man is locked up in his own ego.  Outside him is the doubtful world of things, which his science has not taught him are really not the least like their familiar appearances.  Descartes got the external world back through a belief in God, who in his goodness would not deceive us into believing that this external world existed if it really did not.  But the ghost of subjectivism (and solipsism too) is there and haunts the whole of modern philosophy.”

While Descartes pulled himself out of solipsism through the added assumption of another (benevolent) mind existing that was responsible for any and all experience, I think there’s a far easier and less ad hoc solution to this problem.  First, we need to realize that we wouldn’t be able to tell the difference between an experience of an artificial or illusory external reality and one that was truly real independent of our conscious experience; this means that solipsism is entirely indeterminable, and so it’s almost pointless to worry about.  Second, we don’t feel that we are the sole authors of every aspect of our experience anyway; otherwise there would be nothing unpredictable or uncertain in any part of our experience, and we’d know everything that will ever happen before it happens, without so much as an inkling of surprise or confusion.

This is most certainly not the case, which means we can be confident that we are not the sole authors of our experience, and therefore there really is a reality, or a causal structure, that is independent of our own will and expectations.  So it’s conceptually simpler to define reality in the broadest sense of the term: as nothing more or less than what we experience at any point in time.  And because we experience what appear to be other beings that are just like ourselves, each appearing to have their own experiences of reality, it’s more natural and intuitive to treat them as such, regardless of whether they’re really figments of our imagination or products of some unknown higher-level reality like The Matrix.

Heidegger does something similar here, where he views our being in the “external” world as essential to who we are, and thus he doesn’t think we can separate ourselves from the world, like Descartes himself did as a result of his methodological skepticism:

“Heidegger destroys the Cartesian picture at one blow: what characterizes man essentially, he says, is that he is Being-in-the-world.  Leibniz had said that the monad has no windows; and Heidegger’s reply is that man does not look out upon an external world through windows, from the isolation of his ego: he is already out-of-doors.  He is in the world because, existing, he is involved in it totally.”

This is fundamentally different from the traditional conception of an ego or self that may or may not exist in a world; within this conception of human beings, one can’t treat the ego as separable from the world that it’s embedded in.  One way to look at this is to imagine that we were separated from the world forever, and then ask ourselves if the mode of living that remains (if any) in the absence of any world is still fundamentally human.  It’s an interesting thought experiment to imagine us somehow as a disembodied mind “residing” within an infinite void of nothingness, but if this were really done for eternity and not simply for a short while where we knew we could later return to the world we left behind, then we’d have no more interactions or relations with any objects or other beings and no mode of living at all.  I think it’s perfectly reasonable to conclude in this case that we would have lost something that we need in order to truly be a human being.  If one fails to realize this, then they’ve simply failed to truly grasp the conditions called for in the thought experiment.

Heidegger drives this point further by describing existence or more appropriately our Being as analogous to a non-localized field as in theoretical physics:

“Existence itself, according to Heidegger, means to stand outside oneself, to be beyond oneself.  My Being is not something that takes place inside my skin (or inside an immaterial substance inside that skin); my Being, rather, is spread over a field or region which is the world of its care and concern.  Heidegger’s theory of man (and of Being) might be called the Field Theory of Man (or the Field Theory of Being) in analogy with Einstein’s Field Theory of Matter, provided we take this purely as an analogy; for Heidegger would hold it a spurious and inauthentic way to philosophize to derive one’s philosophic conclusions from the highly abstract theories of physics.  But in the way that Einstein took matter to be a field (a magnetic field, say)- in opposition to the Newtonian conception of a body as existing inside its surface boundaries-so Heidegger takes man to be a field or region of Being…Heidegger calls this field of Being Dasein.  Dasein (which in German means Being-there) is his name for man.”

So we can see our state of Being (which Heidegger calls Dasein) as one that’s spread out over a number of different interactions and relations we have with other beings in the world.  In some sense we can think of the sum total of all the actions, goals, and beings that we concern ourselves with as constituting our entire field of Being.  And each being, especially each human being, shares this general property of Being even though each of those beings will be embedded within their own particular region of Being with their own set of concerns (even if these sets overlap in some ways).

What I find really fascinating is Heidegger’s dissolution of the subject-object distinction:

“That Heidegger can say everything he wants to say about human existence without using either “man” or “consciousness” means that the gulf between subject and object, or between mind and body, that has been dug by modern philosophy need not exist if we do not make it.”

I mentioned Wittgenstein earlier, because I think Heidegger’s philosophy has a few parallels.  For instance, Wittgenstein treats language and meaning as inherently connected to how we use words and concepts and thus is dependent on the contextual relations between how or what we communicate and what our goals are, as opposed to treating words like categorical objects with strict definitions and well-defined semantic boundaries.  For Wittgenstein, language can’t be separated from our use of it in the world even though we often treat it as if we can (and assuming we can is what often creates problems in philosophy), and this is similar to how Heidegger describes us human beings as inherently inseparable from our extension in the world as manifested in our many relations and concerns within that world.

Heidegger’s reference to Being as some kind of field or region of concern can be illustrated well by considering how a child comes to learn their own name (or to respond to it anyway):

“He comes promptly enough at being called by name; but if asked to point out the person to whom the name belongs, he is just as likely to point to Mommy or Daddy as to himself-to the frustration of both eager parents.  Some months later, asked the same question, the child will point to himself.  But before he has reached that stage, he has heard his name as naming a field or region of Being with which he is concerned, and to which he responds, whether the call is to come to food, to mother, or whatever.  And the child is right.  His name is not the name of an existence that takes place within the envelope of his skin: that is merely the awfully abstract social convention that has imposed itself not only on his parents but on the history of philosophy.  The basic meaning the child’s name has for him does not disappear as he grows older; it only becomes covered over by the more abstract social convention.  He secretly hears his own name called whenever he hears any region of Being named with which he is vitally involved.”

This is certainly an interesting interpretation of this facet of child development.  We might presume that the current social conventions which eventually attach our name to our body arise out of a number of pragmatic considerations, such as wanting to provide a label for each conscious agent that makes decisions and behaves in ways that affect other conscious agents.  And because our regions of Being overlap in some ways but not in others (i.e. some of my concerns, though not all of them, are your concerns too), it also makes sense to differentiate between all the different “sets of concerns” that comprise each human being we interact with.  But the way we tend to do this is by abstracting those sets of concerns, localizing them in time and space as “mine” and “yours”, rather than treating them the way they truly exist in the world.

Nevertheless, we could (in principle anyway) use an entirely different convention where a person’s name is treated as a representation of a non-localized set of causal structures and concerns, almost as if their body extended in some sense to include all of this, where each concern or facet of their life is like a dynamic appendage emanating out from their most concentrated region of Being (the body itself).  Doing so would markedly change the way we see one another, the way we see objects, etc.

Perhaps our state of consciousness can mislead us into forgetting about the non-locality of our field of Being, because our consciousness is spatially constrained to one locus of experience; and perhaps this is the reason why philosophers going all the way back to the Greeks have linked existence directly with our propensity to reflect in solitude.  But this solitary reflection also misleads us into forgetting the average “everydayness” of our existence, where we are intertwined with our community like a sheep directed by the rest of the herd:

“…none of us is a private Self confronting a world of external objects.  None of us is yet even a Self.  We are each simply one among many; a name among the names of our schoolfellows, our fellow citizens, our community.  This everyday public quality of our existence Heidegger calls “the One.”  The One is the impersonal and public creature whom each of us is even before he is an I, a real I.  One has such-and-such a position in life, one is expected to behave in such-and-such a manner, one does this, one does not do that, etc.  We exist thus in a state of “fallen-ness”, according to Heidegger, in the sense that we are as yet below the level of existence to which it is possible for us to rise.  So long as we remain in the womb of this externalized and public existence, we are spared the terror and the dignity of becoming a Self.”

Heidegger’s concept of fallen-ness illustrates our primary mode of living: sometimes we think of ourselves as individuals striving toward our personal, self-authored goals, but most of the time we’re succumbing to society’s expectations of who we are, what we should become, and how we ought to judge the value of our own lives.  We shouldn’t be surprised by this fact, as we’ve evolved as a social species, which means our inclination toward a herd mentality is in some sense instinctual and therefore becomes the path of least resistance.  But because of our complex cognitive and behavioral versatility, we can also consciously rise above this collective limitation; and even when we don’t strive to, our position within the safety of the collective can be torn away from us by the chaos that rains down on us from life’s many contingencies:

“But…death and anxiety (and such) intrude upon this fallen state, destroy our sheltered position of simply being one among many, and reveal to us our own existence as fearfully and irremediably our own.  Because it is less fearful to be “the One” than to be a Self, the modern world has wonderfully multiplied all the devices of self-evasion.”

It is only by rising out of this state of fallenness, that is, by recovering some measure of true individuality, that Heidegger thinks we can live authentically.  But in any case:

“Whether it be fallen or risen, inauthentic or authentic, counterfeit copy or genuine original, human existence is marked by three general traits: 1) mood or feeling; 2) understanding; 3) speech.  Heidegger calls these existentialia and intends them as basic categories of existence (as opposed to more common ones like quantity, quality, space, time, etc.).”

Heidegger’s use of these traits in describing our existence shouldn’t be confused with some set of internal mental states but rather we need to include them within a conception of Being or Dasein as a field that permeates the totality of our existence.  So when we consider the first trait, mood or feeling, we should think of each human being as being a mood rather than simply having a mood.  According to Heidegger, it is through moods that we feel a sense of belonging to the world, and thus, if we didn’t have any mood at all, then we wouldn’t find ourselves in a world at all.

It’s also important to understand that he doesn’t see moods as some kind of state of mind as we would typically take them to be, but rather that all states of mind presuppose this sense of belonging to a world in the first place; they presuppose a mood in order to have the possibilities for any of those states of mind.  And as Barrett mentions, not all moods are equally important for Heidegger:

“The fundamental mood, according to Heidegger, is anxiety (Angst); he does not choose this as primary out of any morbidity of temperament, however, but simply because in anxiety this here-and-now of our existence arises before us in all its precarious and porous contingency.”

It seems that he finds anxiety to be fundamental in part because it is a mood that opens us up to see Being for what it really is: a field of existence centered around the individual rather than society and its norms and expectations; and this perspective largely results from the fact that when we’re in anxiety all practical significance melts away and the world as it was no longer seems to be relevant.  Things that we took for granted now become salient and there’s a kind of breakdown in our sense of everyday familiarity.  If what used to be significant and insignificant reverse roles in this mood, then we can no longer misinterpret ourselves as some kind of entity within the world (the world of “Them”; the world of externally imposed identity and essence); instead, we finally see ourselves as within, or identical with, our own world.

Heidegger also seems to see anxiety as always there, even if it’s often covered up (so to speak).  Perhaps it’s hiding most of the time because the fear of being a true Self, and of facing our individuality with the finitude, contingency, and responsibility that go along with it, is often too much for one to bear; so we blind ourselves by shifting into a number of other, less fundamental moods which enhance the significance of the usual day-to-day world, even if the possibility for Being to “reveal itself” to us is just below the surface.

If we think of moods as a way of our feeling a sense of belonging within a world, where they effectively constitute the total range of ways that things can have significance for us, then we can see how our moods ultimately affect and constrain our view of the possibilities that our world affords us.  And this is a good segue to consider Heidegger’s second trait of Dasein or human Being, namely that of understanding.

“The “understanding” Heidegger refers to here is not abstract or theoretical; it is the understanding of Being in which our existence is rooted, and without which we could not make propositions or theories that can claim to be “true”.  Whence comes this understanding?  It is the understanding that I have by virtue of being rooted in existence.”

I think Heidegger’s concept of understanding (Verstehen) is more or less an intuitive sense of knowing how to manipulate the world in order to make use of it for some goal or other.  By having certain goals, we see the world and the entities in that world colored by the context of those goals.  What we desire then will no doubt change the way the world appears to us in terms of what functionality we can make the most use of, what tools are available to us, and whether we see something as a tool or not.  And even though our moods may structure the space of possibilities that we see the world present to us, it is our realizing what these possibilities are that seems to underlie this form of understanding that Heidegger’s referring to.

We also have no choice but to use our past experience to interpret the world, determine what we find significant and meaningful, and to further drive the evolution of our understanding; and of course this process feeds back in on itself by changing our interpretation of the world which changes our dynamic understanding yet again.  And here we might make the distinction between an understanding of the possible uses for various objects and processes, and the specific interpretation of which possibilities are relevant to the task at hand thereby making use of that understanding.

If I’m sitting at my desk with a cup of coffee and a stack of papers in front of me that need to be sorted, my understanding of these entities affords me a number of possibilities depending on my specific mode of Being or what my current concerns are at the present moment.  If I’m tired or thirsty or what-have-you I may want to make use of that cup of coffee to drink from it, but if the window is open and the wind is starting to blow the stack of papers off my desk, I may decide to use my cup of coffee as a paper weight instead of a vessel to drink from.  And if a fly begins buzzing about while I’m sorting my papers and starts to get on my nerves, distracting me from the task at hand, I may decide to temporarily roll up my stack of papers and use it as a fly swatter.  My understanding of all of these objects in terms of their possibilities allows me to interpret an object as a particular possibility that fits within the context of my immediate wants and needs.

I like to think of understanding and interpretation, whether in the traditional sense or in Heidegger’s terms, as ultimately based on our propensity to make predictions about the world and its causal structure.  As a proponent of the predictive processing framework of brain function, I see brains as ultimately being prediction generators, where the brain serves to organize information in order to predict the causal structure of the world we interact with; and this is done at many different levels of abstraction, so we can think of ourselves as making use of many smaller-scale modules of understanding as well as various larger-scale assemblies of those individual modules.  In some cases the larger-scale assemblies constrain which smaller-scale modules can be used, and in other cases the smaller-scale modules are prioritized, constraining or directing the shape of the larger-scale assemblies.

Another way to put this is that our brains make use of a number of lower-level and higher-level predictions (any of which may change over time based on new experiences or desires), and depending on the context we find ourselves in, one level may have more or less influence on another.  My lowest-level predictions may include things like what each “pixel” in my visual field is going to be from moment to moment; a slightly higher-level prediction may include things like what a specific coffee cup looks like, its shape, weight, etc.; slightly higher-level predictions may include things like what coffee cups in general look like, their range of shapes, weights, etc.; and eventually we get to higher levels of prediction that may include what we expect an object can be used for.

All of these predictions taken as a whole constitute our understanding of the world and the specific sets of predictions that come into play at any point in time, depending on our immediate concerns and where our attention is directed, constitute our interpretation of what we’re presently experiencing.  Barrett points to the fact that what we consider to be truth in the most primitive sense is this intuitive sense of understanding:
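Since I’ve leaned on the predictive processing framework here, it may help to sketch what a hierarchy of predictions could look like computationally.  The following is a minimal, illustrative toy model, not any actual algorithm from the neuroscience literature: two levels of estimates, where the lower level tries to predict a “sensory” signal and the higher level tries to predict the lower level, each updating its estimate to reduce its own prediction error.  The function name and learning rates are purely hypothetical choices for the sketch.

```python
# Minimal two-level predictive-coding sketch (illustrative only).
# The lower level predicts the incoming "sensory" signal; the higher
# level predicts the lower level's state. Each level nudges its
# estimate to reduce its own prediction error.

def predictive_coding(sensory_inputs, lr_low=0.5, lr_high=0.1):
    low = 0.0    # lower-level estimate (e.g. a specific feature)
    high = 0.0   # higher-level estimate (e.g. an abstract expectation)
    errors = []
    for s in sensory_inputs:
        low_error = s - low      # sensory surprise at the low level
        high_error = low - high  # how the low level surprises the high level
        # The low level is pulled both by the data and by the high-level prior:
        low += lr_low * low_error + lr_high * (high - low)
        # The high-level prior slowly tracks what the low level reports:
        high += lr_high * high_error
        errors.append(abs(low_error))
    return low, high, errors

# With a constant signal, both levels converge on it and the
# low-level prediction error shrinks over time.
low, high, errors = predictive_coding([1.0] * 50)
```

The shrinking prediction error loosely mirrors the idea above: what we experience as an unremarkable present is a continually refined prediction, and only the residual errors (the surprises) demand our attention.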

“Truth and Being are thus inseparable, given always together, in the simple sense that a world with things in it opens up around man the moment he exists.  Most of the time, however, man does not let himself see what really happens in seeing.”

This process of the world opening itself up to us, such that we have an immediate sense of familiarity with it, is a very natural and automated process and so unless we consciously pull ourselves out of it to try and externally reflect on this situation, we’re just going to take our understanding and interpretation for granted.  But regardless of whether we reflect on it or not, this process of understanding and interpretation is fundamental to how we exist in the world, and thus it is fundamental to our Being in the world.

Speech is the last trait of Dasein that Heidegger mentions, and it is intimately connected with the other two traits, mood and understanding.  So what exactly is he referring to by speech?

“Speech:  Language, for Heidegger, is not primarily a system of sounds or of marks on paper symbolizing those sounds.  Sounds and marks upon paper can become language only because man, insofar as he exists, stands within language.”

This is extremely reminiscent of Wittgenstein once again, since Wittgenstein views language in terms of how it permeates our day-to-day lives within a community or social group of some kind.  Language isn’t some discrete set of symbols with fixed meanings, but rather a much more dynamic, context-dependent communication tool.  We primarily use language to accomplish some specific goal or another in our day-to-day lives, and this is a fundamental aspect of how we exist as human beings.  Because language is attached to its use, the meaning of any word or utterance (if any) is derived from its use as well.  Sounds and marks on paper are meaningless unless we first ascribe a meaning to them, based on how they’ve been used in the past and how they’re being used in the present.

And as soon as we pick up language during childhood development, it becomes automatically associated with our thought processes: we begin to think in our native language, and both real perceptions and figments of our imagination are instantly attached to some linguistic label in our heads.  If I showed you a picture of an elephant, the word elephant would no doubt enter your stream of consciousness, even without my saying the word or asking you what animal you saw in the picture.  Whether our thinking linguistically comes about as a result of having learned a specific language, or is an innate feature of how our brains function, language permeates our thought as well as our interactions with others in the world.

While we tend to think of language as involving words, sounds, and so forth, Heidegger has an interesting take on what else falls under the umbrella of language:

“Two people are talking together.  They understand each other, and they fall silent-a long silence.  This silence is language; it may speak more eloquently than any words.  In their mood they are attuned to each other; they may even reach down into that understanding which, as we have seen above, lies below the level of articulation.  The three-mood, understanding, and speech (a speech here that is silence)-thus interweave and are one.”

I’m not sure if Heidegger is only referring to silence itself here or if he’s also including body language, which often continues even after we’ve stopped talking and which sometimes speaks better than verbal language ever could.  In silence, we often get a chance to simply read another person in a much more basic and instinctual way, ascertaining their mood, and allowing any language that may have recently transpired to be interpreted differently than if a long silence had never occurred.  Heidegger refers to what happens in silence as an attunement between fellow human beings:

“…Nor is this silence merely a gap in our chatter; it is, rather, the primordial attunement of one existent to another, out of which all language-as sounds, marks, and counters-comes.  It is only because man is capable of such silence that he is capable of authentic speech.  If he ceases to be rooted in that silence all his talk becomes chatter.”

We may however also want to consider silence when it is accompanied with action, as our actions speak louder than words and we gain information primarily in this way anyway, by simply watching another do something in the world.  And language can only work within the context of mutual understanding, which necessarily involves action (at some point) and so perhaps we could say that action at least partially constitutes the “silence” that Heidegger refers to.

Heidegger’s entire “Field Theory” of Being relies on context, and Barrett mentions this as well:

“…we might just as well call it a contextual theory of Being.  Being is the context in which all beings come to light-and this means those beings as well that are sounds or marks on paper…Men exist “within language” prior to their uttering sounds because they exist within a mutual context of understanding, which in the end is nothing but Being itself.”

I think that we could interpret this as saying that in order for us to be able to interact with one another and with the world, there’s a prerequisite context of meaning and of what matters to us, already in play in order for those interactions to be possible in the first place.  In other words, whether we’re talking to somebody or simply trying to cook our breakfast, all of our actions presuppose some background of understanding and a sense of what’s significant to us.

3. Death, Anxiety, Finitude

Heidegger views the concept of death as an important one especially as it relates to a proper understanding of the concept of Being, and he also sees the most common understanding of death as fundamentally misguided:

“The authentic meaning of death-“I am to die”-is not as an external and public fact within the world, but as an internal possibility of my own Being.  Nor is it a possibility like a point at the end of a road, which I will in time reach.  So long as I think in this way, I still hold death at a distance outside myself.  The point is that I may die at any moment, and therefore death is my possibility now…Hence, death is the most personal and intimate of possibilities, since it is what I must suffer for myself: nobody else can die for me.”

I definitely see the merit in viewing death as an imminent possibility in order to fully grasp the implications it has for how we should be living our lives.  But most people tend to consider death in very abstract terms, where it’s only thought about in the periphery as something that will happen “some day” in the future; and I think we owe this abstraction and partial repression of death to our own fear of it.  In many other cases, this fear of death manifests itself in supernatural beliefs that posit life after death, the resurrection of the dead, immortality, and other forms of magic; and importantly, all of these psychologically motivated tactics prevent one from actually accepting the truth about death as an integral part of our existence as human beings.  By denying the truth about death, one is denying the true value of the life they actually have.  On the other hand, by accepting it as an extremely personal and intimate attribute of ourselves, we can live more authentically.

We can also see that by viewing death in more personal terms, we are pulled out of the world of “Them”, the collective world of social norms and externalized identities, and are better able to connect with our sense of being a Self:

“Only by taking my death into myself, according to Heidegger, does an authentic existence become possible for me.  Touched by this interior angel of death, I cease to be the impersonal and social One among many…and I am free to become myself.”

Thinking about death may make us uncomfortable, but if I know I’m going to die, and could die at any moment, then shouldn’t I prioritize my personal endeavors to reflect this radical contingency?  It’s hard to argue with that conclusion, but even so, it’s understandably easy to get lost in our mindless day-to-day routines and superficial concerns, and then we simply lose sight of what we ought to be doing with our limited time.  It’s important to break out of this pattern, or at least to strive to, and in doing so we can begin to center our lives around truly personal goals and projects:

“Though terrifying, the taking of death into ourselves is also liberating: It frees us from servitude to the petty cares that threaten to engulf our daily life and thereby opens us to the essential projects by which we can make our lives personally and significantly our own.  Heidegger calls this the condition of “freedom-toward-death” or “resoluteness.””

The fear of death is also unique and telling in the sense that it isn’t really directed at any object but rather it is directed at Nothingness itself.  We fear that our lives will come to an end and that we’ll one day cease to be.  Heidegger seems to refer to this disposition as anxiety rather than fear, since it isn’t really the fear of anything but rather the fear of nothing at all; we just treat this nothing as if it were an object:

“Anxiety is not fear, being afraid of this or that definite object, but the uncanny feeling of being afraid of nothing at all.  It is precisely Nothingness that makes itself present and felt as the object of our dread.”

Accordingly, the concept of Nothingness is integral to the concept of Being, since it’s interwoven in our existence:

“In Heidegger Nothingness is a presence within our own Being, always there, in the inner quaking that goes on beneath the calm surface of our preoccupation with things.  Anxiety before Nothingness has many modalities and guises: now trembling and creative, now panicky and destructive; but always it is as inseparable from ourselves as our own breathing because anxiety is our existence itself in its radical insecurity.  In anxiety we both are and are not, at one and the same time, and this is our dread.”

What’s interesting about this perspective is the apparent asymmetry between existence and non-existence, at least insofar as it relates to human beings.  Non-existence becomes a principal concern for us as a result of our kind of existence, but non-existence itself carries with it no concerns at all.  In other words, non-existence doesn’t refer to anything at all whereas (human) existence refers to everything as well as to nothing (non-existence).  And it seems to me that anxiety and our relation to Nothingness is entirely dependent on our inherent capacity of imagination, where we use imagination to simulate possibilities, and if this imagination is capable of rendering the possibility of our own existence coming to an end, then that possibility may forcefully reorganize the relative significance of all other products of our imagination.

As confusing as it may sound, death is the possibility of ending all possibilities for any extant being.  If imagining such a possibility were not enough to fundamentally change one’s perspective of their own life, then I think that combining it with the knowledge of its inevitability, that this possibility is also a certainty, ought to.  Another interesting feature of our existence is the fact that death isn’t the only possibility that relates to Nothingness, for anytime we imagine a possibility that has not yet been realized or even if we conceive of an actuality that has been realized, we are making reference to that which is not; we are referring to what we are not, to what some thing or other is not and ultimately to that which does not exist (at a particular time and place).  I only know how to identify myself, or any object in the world for that matter, by distinguishing it from what it is not.

“Man is finite because the “not”-negation-penetrates the very core of his existence.  And whence is this “not” derived?  From Being itself.  Man is finite because he lives and moves within a finite understanding of Being.”

As we can see, Barrett describes how within Heidegger’s philosophy negation is treated as something that we live and move within and this makes sense when we consider what identity itself is dependent on.  And the mention of finitude is interesting because it’s not simply a limitation in the sense of what we aren’t capable of (e.g. omnipotence, immortality, perfection, etc.), but rather that our mode of existence entails making discriminations between what is and what isn’t, between what is possible and what is not, between what is actual and what is not.

4. Time and Temporality; History

The last section on Heidegger concerns the nature of time itself, and it begins by pointing out the relation between negation and our experience of time:

“Our finitude discloses itself essentially in time.  In existing…we stand outside ourselves at once open to Being and in the open clearing of Being; and this happens temporally as well as spatially.  Man, Heidegger says, is a creature of distance: he is perpetually beyond himself, his existence at every moment opening out toward the future.  The future is the not-yet, and the past is the no-longer; and these two negatives-the not-yet and the no-longer-penetrate his existence.  They are his finitude in its temporal manifestation.”

I suppose we could call this projection towards the future the teleological aspect of our Being.  In order to do anything within our existence, we must have a goal or a number of them, and these goals refer to possible future states of existence that can only be confirmed or disconfirmed once our growing past subsumes them as actualities or lost opportunities.  We might even say that the present is somewhat of an illusion (though as far as I know Heidegger doesn’t make this claim) because in a sense we’re always living in the future and in the past, since the person we see ourselves as and the person we want to be are a manifestation of both; whereas the present itself seems to be nothing more than a fleeting moment that only references what’s in front or behind itself.

In addition to this, if we look at how our brain functions from a predictive coding perspective, we can see that it generates predictive models based on our past experiences, and those models are attempts to predict the future; nowhere does the present come into play.  The present, as it is experienced by us, is nothing but a prediction of what is yet to come.  I think this is essential to take into account when considering how time is experienced by us and how it should be incorporated into a concept of Being.

Heidegger’s concept of temporality is also tied to the concept of death that we explored earlier.  He sees death as necessary to put our temporal existence into a true human perspective:

“We really know time, says Heidegger, because we know we are going to die.  Without this passionate realization of our mortality, time would be simply a movement of the clock that we watch passively, calculating its advance – a movement devoid of human reasoning…Everything that makes up human existence has to be understood in the light of man’s temporality: of the not-yet, the no-longer, the here-and-now.”

It’s also interesting to consider what I’ve previously referred to as the illusion of persistent identity, an idea that’s been explored by a number of philosophers for centuries: we think of our past self as being the same person as our present or future self.  Now whether the present is actually an illusion or not is irrelevant to this point; the point is that we think of ourselves as always being there in our memories and at any moment of time.  The fact that we are all born into a society that gives us a discrete and fixed name adds to this illusion of our having an identity that is fixed as well.  But we are not the same person that we were ten years ago, let alone the same person we were as a five-year-old child.  We may share some memories with those previous selves, but even those change over time and are often altered and reconstructed upon recall.  Our values, our ontology, our language, our primary goals in life are all changing to varying degrees over time, and we mustn’t lose sight of this fact either.  I think this “dynamic identity” is a fundamental aspect of our Being.

We can help explain how we’re taken in by the illusion of a persistent identity by recalling that our identity is fundamentally projected toward the future.  Since that projection changes over time (as our goals change), and since it relies on a kind of psychological continuity, we might expect ourselves to focus simply on the projection itself rather than on what that projection used to be.  Heidegger stresses the importance of the future in our concept of Being:

“Heidegger’s theory of time is novel, in that, unlike earlier philosophers with their “nows,” he gives priority to the future tense.  The future, according to him, is primary because it is the region toward which man projects and in which he defines his own being.  “Man never is, but always is to be,” to alter slightly the famous line of Pope.”

But, since we’re always relying on past experiences in order to determine our future projection, to give it a frame of reference with which we can evaluate that projected future, we end up viewing our trajectory in terms of human history:

“All these things derive their significance from a more basic fact: namely, that man is the being who, however dimly and half-consciously, always understands, and must understand, his own being historically.”

And human history, in a collective sense, also plays a role, since our view of ourselves and of the world is unavoidably influenced by the cultural transmission of ideas stemming from our recent past all the way back to thousands of years ago.  A lot of that influence has emanated from the Greeks especially (as Barrett has mentioned throughout Irrational Man), and it has had a substantial impact on our conceptualization of Being as well:

“By detaching the figure from the ground the object could be made to emerge into the daylight of human consciousness; but the sense of the ground, the environing background, could also be lost.  The figure comes into sharper focus, that is, but the ground recedes, becomes invisible, is forgotten.  The Greeks detached beings from the vast environing ground of Being.  This act of detachment was accompanied by a momentous shift in the meaning of truth for the Greeks, a shift which Heidegger pinpoints as taking place in a single passage in Plato’s Republic, the celebrated allegory of the cave.”

Through the advent of reason and forced abstraction, the Greeks effectively separated the object from the context it’s normally embedded within.  But if a true understanding of our Being in the world can only be had contextually, then we can see why Heidegger found a crucial problem with this school of thought, which has propagated itself in Western philosophy ever since Plato.  Even the concept of truth itself underwent a dramatic change as a result of the Greeks discovering and developing the process of reason:

“The quality of unhiddenness had been considered the mark of truth; but with Plato in that passage truth came to be defined, rather, as the correctness of an intellectual judgment.”

And this concept of unhiddenness seems to be a variation of “subjective truth” or “subjective understanding”, although it shouldn’t be confused with modern subjectivism; in this case the unhiddenness would naturally precipitate from whatever coherency and meaning is immediately revealed to us through our conscious experience.  I think that the redefining of truth had less of an impact on philosophy than Heidegger may have thought; the problem wasn’t the redefining of truth per se, but rather that contextual understanding and subjective experience itself lost the level of importance they once had.  And I’m sure Heidegger would agree that this perceived loss of importance for both subjectivity and for a holistic understanding of human existence has led to a major shift in our view of the world, a view that has likely resulted in some of the psychological pathologies sprouting up in our age of modernity.

Heidegger thought that allowing Being to reveal itself to us was analogous to an artist letting the truth reveal itself naturally:

“…the artist, as well as the spectator, must submit patiently and passively to the artistic process, that he must lie in wait for the image to produce itself; that he produces false notes as soon as he tries to force anything; that, in short, he must let the truth of his art happen to him?  All of these points are part of what Heidegger means by our letting Being be.  Letting it be, the artist lets it speak to him and through him; and so too the thinker must let it be thought.”

He seems to be saying that modern ways of thinking are too forceful in the sense of our always prioritizing the subject-object distinction in our way of life.  We’ve let the obsessive organization of our lives and the various beings within those lives drown out our appreciation of, and ability to connect to, Being itself.  I think that another way we could put this is to say that we’ve split ourselves off from the sense of self that has a strong connection to nature and this has disrupted our ability to exist in a mode that’s more in tune with our evolutionary psychology, a kind of harmonious path of least resistance.  We’ve become masters over manipulating our environment while simultaneously losing our innate connection to that environment.  There’s certainly a lesson to be learned here even if the problem is difficult to articulate.

•      •      •      •      •

This concludes William Barrett’s chapter on Martin Heidegger.  I’ve gotta say that I find Heidegger’s views fascinating, as they serve to illustrate a radically different way of viewing the world and human existence; and for those who take his philosophy seriously, they force a re-evaluation of much of the Western philosophical thought that we’ve been taking for granted, and which has constrained a lot of our thinking.  I think it’s both refreshing and worthwhile to explore some of these novel ideas, as they help us see life through an entirely new lens.  We need a shift in our frame of mind; we need to think outside the box so we have a better chance of finding new solutions to many of the problems we face in today’s world.  In the next post in this series on William Barrett’s Irrational Man, I’ll be looking at the philosophy of Jean-Paul Sartre.

Irrational Man: An Analysis (Part 3, Chapter 8: Nietzsche)

In the last post in this series on William Barrett’s Irrational Man, we began exploring Part 3: The Existentialists, beginning with chapter 7 on Kierkegaard.  In this post, we’ll be taking a look at Nietzsche’s philosophy in more detail.

Ch. 8 – Nietzsche

Nietzsche shared many of the same concerns as Kierkegaard, given the new challenges of modernity brought about by the Enlightenment; but each of these great thinkers approached this new chapter of our history from perspectives that were in many ways diametrically opposed.  Kierkegaard had a grand goal of sharing with others what he thought it meant to be a Christian, and he tried to revive Christianity within a society that he found to be increasingly secularized.  He also saw that what Christianity did exist was largely organized and depersonalized, and he felt the need to stress the importance of a much more personal form of the religion.  Nietzsche, on the other hand, an atheist, rejected Christianity and stressed the importance of re-establishing our values, given that modernity was now living in a godless world: a culturally Christianized world, but one now without a Christ.

Despite their differences, both philosophers felt that discovering the true meaning of our lives was paramount to living an authentic life, and this new quest for meaning was largely related to the fact that our view of ourselves had changed significantly:

“By the middle of the nineteenth century, as we have seen, the problem of man had begun to dawn on certain minds in a new and more radical form: Man, it was seen, is a stranger to himself and must discover, or rediscover, who he is and what his meaning is.”

Nietzsche also understood and vividly illustrated the crucial differences between mankind and the rest of nature, most especially our self-awareness:

“It was Nietzsche who showed in its fullest sense how thoroughly problematical is the nature of man: he can never be understood as an animal species within the zoological order of nature, because he has broken free of nature and has thereby posed the question of his own meaning – and with it the meaning of nature as well – as his destiny.”

And for Nietzsche, the meaning one was to find for their life included first abandoning what he saw as an oppressive and disempowering Christian morality, which he believed severely inhibited our potential as human beings.  He was much more sympathetic to Greek culture and morality; and even though (perhaps ironically) Christianity itself was derived from a syncretism between Judaism and Greek Hellenism, its moral and psychological differences were too substantial to escape Nietzsche’s criticism.  He thought that a return to a more archaic form of Greek culture and mentality was the solution we needed to restore, or at least re-validate, our instincts:

“Dionysus reborn, Nietzsche thought, might become a savior-god for the whole race, which seemed everywhere to show symptoms of fatigue and decline.”

Nietzsche learned about the god Dionysus while he was studying Greek tragedy, and he gravitated toward the Dionysian symbolism, for it contained a kind of reconciliation between high culture and our primal instincts.  Dionysus was after all the patron god of the Greek tragic festivals and so was associated with beautiful works of art; but he was also the god of wine and ritual madness, bringing people together through the grape harvest and the joy of intoxication.  As Barrett describes it:

“This god thus united miraculously in himself the height of culture with the depth of instinct, bringing together the warring opposites that divided Nietzsche himself.”

It was this window into the hidden portions of our psyche that Nietzsche tried to look through, and it became an integral basis for much of his philosophy.  But the chaotic nature of our species, combined with our immense power to manipulate the environment, also concerned him greatly; and he thought that the way modernity was repressing many of our instincts needed to be circumvented, lest we continue to build up this tension in our unconscious only to have it released in an explosive eruption of violence:

“It is no mere matter of psychological curiosity but a question of life and death for man in our time to place himself again in contact with the archaic life of his unconscious.  Without such contact he may become the Titan who slays himself.  Man, this most dangerous of the animals, as Nietzsche called him, now holds in his hands the dangerous power of blowing himself and his planet to bits; and it is not yet even clear that this problematic and complex being is really sane.”

Indeed our species has been removed from its natural evolutionary habitat; and we’ve built our modern lives around socio-cultural and technological constructs that have, in many ways at least, inhibited the open channel between our unconscious and conscious mind.  One could even say that the prefrontal cortex, important as it is for conscious planning and decision making, has been effectively hijacked by human culture (memetic selection) and is now constantly running overtime, over-suppressing the emotional centers of the brain (such as the limbic system).

While we’ve come to realize that emotional suppression is sometimes necessary in order to function well as a cooperative social species that depends on complex long-term social relationships, we also need to maintain a healthy dose of emotional expression as well.  If we can’t release this more primal form of “psychological energy” (for lack of a better term), then it shouldn’t be surprising to see it eventually manifest into some kind of destructive behavior.

We ought not forget that we’re just like other animals in terms of our brain being best adapted to a specific balance of behaviors and cognitive functions; and if this balance is disrupted through hyperactive or hypoactive use of one region or another, doesn’t it stand to reason that we may wind up with either an impairment in brain function or less healthy behavior?  If we’re not making a sufficient use of our emotional capacities, shouldn’t we expect them to diminish or atrophy in some way or other?  And if some neurological version of the competitive exclusion principle exists, then this would further reinforce the idea that our suppression of emotion by other capacities may cause those other capacities to become permanently dominant.  We can only imagine the kinds of consequences that ensue when this psychological impairment falls upon a species with our level of power.

1.  Ecce Homo

“‘In the end one experiences only oneself,’ Nietzsche observes in his Zarathustra, and elsewhere he remarks, in the same vein, that all the systems of the philosophers are just so many forms of personal confession, if we but had eyes to see it.”

Here Nietzsche is expressing one of his central ideas: we can’t separate ourselves from our thoughts, and therefore the ideas put forward by any philosopher betray some set of values they hold, whether unconsciously or explicitly; they also betray the philosopher’s individual perspective, an unavoidable perspective stemming from subjective experience, which colors one’s interpretation of those experiences.  Perspectivism, the view that reality and our ideas about what’s true and what we value can be mapped onto many different possible conceptual schemes, was a big part of Nietzsche’s overall philosophy, and it was interwoven into his ideas on meaning, epistemology, and ontology.

As enlightening as Nietzsche was in giving us a much needed glimpse into some of the important psychological forces that give our lives the shape they have (for better or worse), his written works show little if any psychoanalytical insight applied to himself, in terms of the darker and less pleasant side of his own psyche.

“Nietzsche’s systematic shielding of himself from the other side is relevant to his explanation of the death of God: Man killed God, he says, because he could not bear to have anyone looking at his ugliest side.”

But even if he didn’t turn his psychoanalytical eye inward toward his own unconscious, he was still very effective in shining a light on the collective unconscious of humanity as a whole.  It’s not enough to marvel over the better angels of our nature, even though it’s important to acknowledge and understand our capacities and our potential for bettering ourselves and the world we live in; we also have to acknowledge our flaws and shortcomings.  And while we may try to overcome these weaknesses in one way or another, some of them are simply a part of our nature, and we should accept them as such even if we don’t like them.

One of the prevailing theories stemming from Western rationalism (mentioned earlier in Barrett’s book) was that human beings were inherently rational and perfectible, and that it was within the realm of possibility to realize that potential; but this kind of wishful thinking relied on a fundamental denial of an essential part of the kind of animal we really are.  Nietzsche did a good job of bringing this fact into the limelight, and of explaining how we couldn’t truly realize our potential until we came to terms with this fairly ugly side of ourselves.

Barrett distinguishes between the attitudes stemming from various forms of atheism, including that of Nietzsche’s:

“The urbane atheism of Bertrand Russell, for example, presupposes the existence of believers against whom he can score points in an argument and get off some of his best quips.  The atheism of Sartre is a more somber affair, and indeed borrows some of its color from Nietzsche: Sartre relentlessly works out the atheistic conclusion that in a universe without God man is absurd, unjustified, and without reason, as Being itself is.”

In fact, Nietzsche was more concerned with the atheists who hadn’t yet accepted the full ramifications of a godless world than with believers themselves.  He felt that those who believed in God still had a more coherent belief system, since many of their moral values and the meaning they attached to their lives were largely derived from their religious beliefs (even if those beliefs were unwarranted or harmful).  Many atheists, on the other hand, hadn’t yet taken the time to build a new foundation for their previously held values and the meaning in their lives; and if they couldn’t rebuild this foundation, then they’d have to change those values, and what they find meaningful, to reflect that shift in foundation.

Overall, Nietzsche even changes the game a little in terms of how the conception of God was to be understood in philosophy:

“If God is taken as a metaphysical object whose existence has to be proved, then the position held by scientifically minded philosophers like Russell must inevitably be valid: the existence of such an object can never be empirically proved.  Therefore, God must be a superstition held by primitive and childish minds.  But both these alternative views are abstract, whereas the reality of God is concrete, a thoroughly autonomous presence that takes hold of men but of which, of course, some men are more conscious than others.  Nietzsche’s atheism reveals the true meaning of God – and does so, we might add, more effectively than a good many official forms of theism.”

Even though God may not actually exist as a conscious being, the idea of God, and the feelings associated with what one labels an experience of God, are as real as any other idea or experience, and so they should still be taken seriously even in a world where no gods exist.  As an atheist myself (formerly a born-again Protestant Christian), I’ve come to appreciate the real breadth of possible meaning attached to the term “God”.  Even though the God or gods described in various forms of theism do not track onto objective reality, as was pointed out by Russell, this doesn’t negate the subjective reality underlying theism.  People ascribe the label “God” to a set of certain feelings and forces that they feel are controlling their lives, their values, and their overall vision of the world.  Nietzsche had his own vision of the world as well, and so one could label this as his “god”, despite the confusion that this may cause when discussing God in metaphysical terms.

2. What Happens In “Zarathustra”; Nietzsche as Moralist

One of the greatest benefits of producing art is our ability to tap into multiple perspectives that we might otherwise never explore.  And by doing this, we stand a better chance of creating a window into our own unconscious:

“[Thus Spoke Zarathustra] was Nietzsche’s poetic work and because of this he could allow the unconscious to take over in it, to break through the restraints imposed elsewhere by the philosophic intellect. [in poetry] One loses all perception of what is imagery and simile-that is to say, the symbol itself supersedes thought, because it is richer in meaning.”

By avoiding the common philosophical trap of submitting oneself to a preconceived structure and template for processing information, poetry and other works of art can inspire new ideas and novel perspectives through the use of metaphor and allegory, emotion, and the seemingly infinite powers of our own imagination.  Not only is this true for the artist herself, but also for the spectator, who stands in a unique position to interpret what they see and to make sense of it as best they can.  And in the rarest of cases, one may experience something in a work of art that fundamentally changes them in the process.  Either way, when someone is presented with an unusual and ambiguous stimulus that they are left to interpret, they can hope to gain at least some access to their unconscious and to inspire an expression of their creativity.

In his Thus Spoke Zarathustra, Nietzsche was focusing on how human beings could ever hope to regain a feeling of completeness; for modernity had fractured the individual and placed too much emphasis on collective specialization:

“For man, says Schiller, the problem is one of forming individuals.  Modern life has departmentalized, specialized, and thereby fragmented the being of man.  We now face the problem of putting the fragments together into a whole.  In the course of his exposition, Schiller even referred back, as did Nietzsche, to the example of the Greeks, who produced real individuals and not mere learned abstract men like those of the modern age.”

A part of this reassembly process would necessarily involve reincorporating what Nietzsche thought of as the various attributes of our human nature, which many of us are in denial of (to go back to the point raised earlier in this post).  For Nietzsche, this meant taking on some characteristics that may seem inherently immoral:

“Man must incorporate his devil or, as he put it, man must become better and more evil; the tree that would grow taller must send its roots down deeper.”

Nietzsche seems to be saying that if there are aspects of our psychology that are normal albeit seemingly dark or evil, then we ought to structure our moral values along the same lines rather than in opposition to these more primal parts of our psyche:

“(Nietzsche) The whole of traditional morality, he believed, had no grasp of psychological reality and was therefore dangerously one-sided and false.”

And there is some truth to this, though I wouldn’t go as far as Nietzsche does; I believe that there are many basic, cross-cultural moral prescriptions that do in fact take many of our psychological needs into account.  Most cultures have some place for virtues such as compassion, honesty, generosity, forgiveness, commitment, and integrity, among many others; and these virtues have been shown within the fields of moral psychology and sociology to help us flourish and lead psychologically fulfilling lives.  We evolved as a social species that operates within some degree of a dominance hierarchy, and sure enough our psychological traits are what we’d expect given our evolutionary history.

However, it’s also true that we ought to avoid slipping into some form of a naturalistic fallacy; for we don’t want to simply assume that all the behaviors that were more common prior to modernity let alone prior to civilization were beneficial to us and our psychology.  For example, raping and stealing were far more common per capita prior to humans establishing some kind of civil state or society.  And a number of other behaviors that we’ve discovered to be good or bad for our physical or psychological health have been the result of a long process of trial and error, cultural evolution, and the cultural transmission of newfound ideas from one generation to the next.  In any case, we don’t want to assume that just because something has been a tradition or even a common instinctual behavior for many thousands or even hundreds of thousands of years, that it is therefore a moral behavior.  Quite the contrary; instead we need to look closely at which behaviors lead to more fulfilling lives and overall maximal satisfaction and then aim to maximize those behaviors over those that detract from this goal.

Nietzsche did raise a good point however, and one that’s supported by modern moral psychology; namely, the fact that what we think we want or need isn’t always in agreement with our actual wants and needs, once we dive below the surface.  Psychology has been an indispensable tool in discovering many of our unconscious desires and the best moral theories are going to be those that take both our conscious and unconscious motivations and traits into account.  Only then can we have a moral theory that is truly going to be sufficiently motivating to follow in the long term as well as most beneficial to us psychologically.  If one is basing their moral theory on wishful thinking rather than facts about their psychology, then they aren’t likely to develop a viable moral theory.  Nietzsche realized this and promoted it in his writings, and this was an important contribution as it bridged human psychology with a number of important topics in philosophy:

“On this point Nietzsche has a perfectly sober and straightforward case against all those idealists, from Plato onward, who have set universal ideas over and above the individual’s psychological needs.  Morality itself is blind to the tangle of its own psychological motives, as Nietzsche showed in one of his most powerful books, The Genealogy of Morals, which traces the source of morality back to the drives of power and resentment.”

I’m sure that drives of power and resentment have played some role in certain moral prescriptions and moral systems, but I doubt that this is true generally speaking; and even in cases where a moral prescription or system arose out of one of these unconscious drives, such as resentment, this doesn’t negate its validity, nor demonstrate whether or not it’s warranted or psychologically beneficial.  The source of morality isn’t as important as whether or not the moral system itself works, though the sources of morality can be both psychologically relevant and revealing.

Our ego certainly has a lot to do with our moral behavior and whether or not we choose to face our inner demons:

“Precisely what is hardest for us to take is the devil as the personification of the pettiest, paltriest, meanest part of our personality…the one that most cruelly deflates one’s egotism.”

If one is able to accept both the brighter and darker sides of themselves, they’ll also be more likely to willingly accept Nietzsche’s idea of the Eternal Return:

“The idea of the Eternal Return thus expresses, as Unamuno has pointed out, Nietzsche’s own aspirations toward eternal and immortal life…For Nietzsche the idea of the Eternal Return becomes the supreme test of courage: If Nietzsche the man must return to life again and again, with the same burden of ill health and suffering, would it not require the greatest affirmation and love of life to say Yes to this absolutely hopeless prospect?”

Although I wouldn’t expect most people to embrace this idea, because it goes against our teleological intuitions of time and causation leading to some final state or goal, it does provide a valuable means of establishing whether or not someone is really able to accept this life for what it is, with all of its absurdities, pains, and pleasures.  If someone willingly accepts the idea, then by extension they likely accept themselves, their lives, and the rest of human history (and even the history of life on this planet, no less).  This doesn’t mean that by accepting the idea of the Eternal Return people have to put every action, event, or behavior on equal footing, for example by condoning slavery or the death of millions as a result of countless wars, just because these events are to be repeated for eternity.  But one is still accepting that all of these events were necessary in some sense, that they couldn’t have been any other way; and this acceptance should translate to an attitude toward life that reflects some kind of overarching contentment.

I think that the more common reaction to this kind of idea is slipping into some kind of nihilism, where a person no longer cares about anything, and no longer finds any meaning or purpose in their lives.  And this reaction is perfectly understandable given our teleological intuitions; most people don’t or wouldn’t want to invest time and effort into some goal if they knew that the goal would never be accomplished or knew that it would eventually be undone.  But, oddly enough, even if the Eternal Return isn’t an actual fact of our universe, we still run into the same basic situation within our own lifetimes whereby we know that there are a number of goals that will never be achieved because of our own inevitable death.

There’s also the fact that even if we did achieve a number of goals, once we die, we can no longer reap any benefits from them.  There may be other people that can benefit from our own past accomplishments, but eventually they will die as well.  Eventually all life in the universe will expire, and this is bound to happen long before the inevitable heat death of the universe transpires; and once it happens, there won’t be anybody experiencing anything at all let alone reaping the benefits of a goal from someone’s distant past.  Once this is taken into account, Nietzsche’s idea doesn’t sound all that bad, does it?  If we knew that there would always be time and some amount of conscious experience for eternity, then there will always be some form of immortality for consciousness; and if it’s eternally recurring, then we end up becoming immortal in some sense.

Perhaps the idea of the Eternal Return (or Recurrence), even though it predates Nietzsche by many hundreds if not a few thousand years, was motivated by Nietzsche’s own fear of death.  He thought of the idea as horrifying and paralyzing (not least because of all the undesirable parts of our own personal existence); but since he didn’t believe in a traditional afterlife, maybe the fear of death motivated him to adopt it anyway, even as he also saw it as a physically plausible consequence of probability and the laws of physics.  Either way, beyond being far more plausible than the idea of an eternal paradise after death, it’s also a very different idea.  For one, memory and identity aren’t preserved, so one never actually experiences immortality when the “temporal loop” defining one’s lifetime starts all over again (the “movie” of one’s life is simply played over and over, unbeknownst to the person in the movie).  For another, the idea involves accepting the world the way it is for all eternity, rather than revolving around the way one wishes it was.  The Eternal Return fully engages with the existentialist reality of human life, with all its absurdities, contingencies, and the rest.

3. Power and Nihilism

Nietzsche, like Kierkegaard before him, was considered by many to be an unsystematic thinker, though Barrett points out that this view is mistaken in Nietzsche’s case; it’s a common view that results from the largely aphoristic form his writings took.  Nietzsche himself even said that he was viewing science and philosophy through the eyes of art, but his work nevertheless eventually revealed a systematic structure:

“As thinking gradually took over the whole person, and everything else in his life being starved out, it was inevitable that this thought should tend to close itself off in a system.”

This systematization of his philosophy was revealed in the notes he was making for The Will to Power, a work he never finished but one that was more or less assembled and published posthumously by his sister Elisabeth.  The systematization was centered on the idea that a will to power is the underlying force that ultimately guides our behavior: we strive to achieve the highest possible position in life, rather than being fundamentally driven by some biological imperative to survive.  But it’s worth pointing out here that Nietzsche didn’t seem to be merely talking about a driving force behind human behavior; he seemed to be referencing an even more fundamental force that was in some sense driving all of reality, and this more inclusive view would be analogous to Schopenhauer’s will to live.

It’s interesting to consider this perspective through the lens of biology.  If we try to reconcile it with a Darwinian account of evolution, in which survival is key, we could surmise that a will to power is but a mechanism for survival.  But if we grant the position of Nietzsche and of the various anti-Darwinian thinkers who influenced him, namely that a will to survive, or survival more generally, is somehow secondary to a will to power, then we run into a problem: if survival is a secondary drive or goal, then we’d expect survival to be less important than power; but without survival you can’t have power, and so it makes far more evolutionary sense that a will to survive is more basic than a will to power.

However, I am willing to grant that as soon as organisms began evolving brains along with the rest of their nervous systems, the will to power (or something analogous to it) became increasingly important; and by the time humans came on the scene with their highly complex brains, the will to power may have become manifest in our neurological drive to better predict our environment over time.  I wrote about this idea in a previous post exploring Nietzsche’s Beyond Good and Evil, where I described what I saw as a correlation between Nietzsche’s will to power and the idea of the brain as a predictive processor:

“…I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, but with the added caveat that this autonomy serves to fulfill a desire for maximal causal power by harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try and change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try and change ourselves and our view of the world (up to and including changing our desires if we find them to be misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and then we could conceivably regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that I think that our view of the world as well as our actions and desires can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could…equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires). Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if they subjugate the other’s will to power by their own.  And one can also increase the success of their predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing our desires and actions as well.”

On the other hand, if we treat the brain’s predictive activity as a kind of mechanistic description of how the brain works rather than a type of drive per se, and if we treat our underlying drives as something that merely falls under this umbrella of description, then we might want to consider whether or not any of the drives that are typically posited in psychological theory (such as power, sex, or a will to live) are actually more basic than any other or typically dominant over the others.  Barrett suggests an alternative, saying that we ought to look at these elements more holistically:

“What if the human psyche cannot be carved up into compartments and one compartment wedged in under another as being more basic?  What if such dichotomizing really overlooks the organic unity of the human psyche, which is such that a single impulse can be just as much an impulse toward love on the one hand as it is toward power on the other?”

I agree with Barrett at least in the sense that these drives seem to be operating together much of the time, and when they aren’t in unison, they often seem to be operating on the same level at least.  Another way to put this could be to say that as a social species, we’ve evolved a number of different modular behavioral strategies that are typically best suited for particular social circumstances, though some circumstances may be such that multiple strategies will work and thus multiple strategies may be used simultaneously without working against each other.

But I also think that our conscious attention plays a role as well where the primary drive(s) used to produce the subsequent behavior may be affected by thinking about certain things or in a certain way, such as how much you love someone, what you think you can gain from them, etc.  And this would mean that the various underlying drives for our behavior are individually selected or prioritized based on one’s current experiences and where their attention is directed at any particular moment.  It may still be the case that some drives are typically dominant over others, such as a will to power, even if certain circumstances can lead to another drive temporarily taking over, however short-lived that may be.

The will to power, even if it’s not primary, would still help to explain some of the enormous advancements we’ve made in our scientific and technological progress, while also explaining (at least to some degree) the apparent disparity between our standard of living and overall physio-psychological health on the one hand, and our immense power to manipulate the environment on the other:

“Technology in the twentieth century has taken such enormous strides beyond that of the nineteenth that it now bulks larger as an instrument of naked power than as an instrument for human well-being.”

We could fully grasp Barrett’s point here by thinking about the opening scene in Stanley Kubrick’s 2001: A Space Odyssey, where “The Dawn of Man” was essentially made manifest with the discovery of tools, specifically weapons.  Once this seemingly insignificant discovery was made, it completely changed the game for our species.  While it gave us a new means of defending ourselves and an ability to hunt other animals, thus providing us with the requisite surplus of protein and calories needed for brain development and enhanced intelligence, it also catalyzed an arms race.  And with the advent of human civilization, our historical trajectory around the globe became largely dominated by our differential means of destruction; whoever had the biggest stick, the largest army, the best projectiles, and ultimately the most concentrated form of energy, would likely win the battle, forever changing the fate of the parties involved.

Indeed, war has been such a core part of human history that it’s no wonder Nietzsche stumbled upon such an idea; and even if he hadn’t, someone else likely would have, though perhaps without the same level of creativity and intellectual rigor.  If our history has been so colored by war, which is a physical instantiation of a will to power aimed at dominating another group, then we should expect many to see it as a fundamental part of who we are.  Personally, I don’t think we’re fundamentally war-driven creatures; rather, war results from our ability to engage in it combined with a perceived lack of, and desire for, resources, freedom, and stability in our lives.  But many of these goods, freedom in particular, imply a certain level of power over oneself and one’s environment, so even a fight for freedom is in one way or another a kind of will to power as well.

And what are we to make of our quest for power in terms of the big picture?  Theoretically, there is an upper limit to the amount of power any individual or group can possibly attain, and if one gets to a point where they are seeking power for power’s sake, and using their newly acquired power to harness even more power, then what will happen when we eventually hit that ceiling?  If our values become centered around power, and we lose any anchor we once had to provide meaning for our lives, then it seems we would be on a direct path toward nihilism:

“For Nietzsche, the problem of nihilism arose out of the discovery that “God is dead.” “God” here means the historical God of the Christian faith.  But in a wider philosophical sense it means also the whole realm of supersensible reality-Platonic Ideas, the Absolute, or what not-that philosophy has traditionally posited beyond the sensible realm, and in which it has located man’s highest values.  Now that this other, higher, eternal realm is gone, Nietzsche declared, man’s highest values lose their value…The only value Nietzsche can set up to take the place of these highest values that have lost their value for contemporary man is: Power.”

As Barrett explains, Nietzsche seemed to think that power was the only thing that could replace the eternal realm that we valued so much; but of course this does nothing to alleviate the problem of nihilism in the long term since the infinite void still awaits those at the end of the finite road to maximal power.

“If this moment in Western history is but the fateful outcome of the fundamental ways of thought that lie at the very basis of our civilization-and particularly of that way of thought that sunders man from nature, sees nature as a realm of objects to be mastered and conquered, and can therefore end only with the exaltation of the will to power-then we have to find out how this one-sided and ultimately nihilistic emphasis upon the power over things may be corrected.”

I think the answer to this problem lies in a combination of strategies, with the overarching goal of maximizing life fulfillment.  We need to reprogram our general attitude toward power so that it is treated as truly instrumental rather than intrinsically valuable; our basic goal then becomes accumulating power in order to maximize our personal satisfaction and life fulfillment, and nothing more.  Among other things, the power we acquire should center on power over ourselves and our own psychology: finding ways of living and thinking that foster a more respectful relationship with the nature that created us in the first place.

And now that we’re knee-deep in the Information Age, we will be able to harness the potential of genetic engineering, artificial intelligence, and artificial reality; within these avenues of research we will be able to change our own biology so that we can quickly adapt our psychology to the ever-changing cultural environment of the modern world, and we’ll be able to free up our time to create any world we want.  We just need to keep the ultimate goal, fulfillment and contentment, in mind and direct our increasing power toward it in particular, instead of merely chipping away at it in the background as some secondary priority.  If we don’t prioritize it, we may simply perpetuate and amplify the meaningless sources of distraction in our lives, eventually getting lost in the chaos and forgetting about our ultimate potential and the imperatives needed to realize it.

I’ll be posting a link here to part 9 of this post-series, exploring Heidegger’s philosophy, once it has been completed.

Irrational Man: An Analysis (Part 2, Chapter 5: “Christian Sources”)

In the previous post in this series on William Barrett’s Irrational Man, we explored some of the Hebraistic and Hellenistic contributions to the formation and onset of existentialism within Western thought.  In this post, I’m going to look at chapter 5 of Barrett’s book, which takes a look at some of the more specifically Christian sources of existentialism in the Western tradition.

1. Faith and Reason

We should expect some significant overlap between Christianity and Hebraism, since the former began as a dissenting sect of the latter.  One concept worth looking at is the dichotomy of faith versus reason, where in Christianity the man of faith stands far above the man of reason.  Even though faith is heavily prized within Hebraism as well, there are still some differences between the two belief systems as they relate to faith, differences stemming from a number of historical contingencies:

“Ancient Biblical man knew the uncertainties and waverings of faith as a matter of personal experience, but he did not yet know the full conflict of faith with reason because reason itself did not come into historical existence until later, with the Greeks.  Christian faith is therefore more intense than that of the Old Testament, and at the same time paradoxical: it is not only faith beyond reason but, if need be, against reason.”

Once again we are brought back to Plato, and his concept of rational consciousness and the explicit use of reason; a concept that had never before been articulated or developed.  Christianity no longer had reason hiding under the surface or homogeneously mixed-in with the rest of our mental traits; rather, it was now an explicit and specific way of thinking and processing information that was well known and that couldn’t simply be ignored.  Not only did the Christian have to hold faith above this now-identified capacity of thinking logically, but they also had to explicitly reject reason if it contradicted any belief that was grounded on faith.

Barrett also mentions here the problem of trying to describe the concept of faith in a language of reason, and how the opposition between the two is particularly relevant today:

“Faith can no more be described to a thoroughly rational mind than the idea of colors can be conveyed to a blind man.  Fortunately, we are able to recognize it when we see it in others, as in St. Paul, a case where faith had taken over the whole personality.  Thus vital and indescribable, faith partakes of the mystery of life itself.  The opposition between faith and reason is that between the vital and the rational-and stated in these terms, the opposition is a crucial problem today.”

I disagree with Barrett to some degree here, as I don’t think it’s possible to recognize a quality in others (let alone a quality that others can recognize as well) that is entirely incommunicable.  On the contrary, if you and I can both recognize some quality, then we should be able to describe it, even if only in some rudimentary way.  This doesn’t mean that a description in words can do justice to every conception imaginable, but rather that, as human beings, we have an inherent ability to communicate fairly well that which is involved in common modes of living and in our shared experiences, even very complex concepts with somewhat fuzzy boundaries.

It may be that a thoroughly rational mind will not appreciate faith in any way, nor think it is of any use, nor have had any personal experience with it; but it doesn’t mean that they can’t understand the concept, or what a mode of living dominated by faith would look like.  I also don’t think it’s fair to say that the opposition between faith and reason is that between the vital and the rational, even though this may be a real view for some.  If by vital, Barrett is alluding to what is essential to our psyche, then I don’t think he can be talking about faith, at least not in the sense of faith as belief without good reason.  But I’m willing to concede his point if instead by vital he is simply referring to an attitude that is spirited, vibrant, energetic, and thus involving some lively form of emotional expression.

He delves a little deeper into the distinction between faith and reason when he says:

“From the point of view of reason, any faith, including the faith in reason itself, is paradoxical, since faith and reason are fundamentally different functions of the human psyche.”

It seems that Barrett is guilty of a bit of equivocation here between two uses of the word faith.  I’d actually go so far as to say that the two uses are best described in the way Barrett himself alluded to in the last chapter: faith as trust and faith as belief.  On the one hand, we have faith as belief, where some belief or set of beliefs is maintained without good reason; and on the other hand, we have faith as trust, where, in the case of reason, our trust is grounded on the fact that its use leads to successful predictions about the causal structure of our experience.  So there isn’t really any paradox at all when it comes to “faith” in reason, because such a faith need not involve adopting some belief without good reason; there is, at most, a trust in reason based on the demonstrable and replicable success it has provided us in the past.

Barrett is right, however, when he says that faith and reason are fundamentally different functions of the human psyche.  But this doesn’t mean that they are isolated from one another: for example, if one believes that having faith in God will help them secure an afterlife in heaven, and if one also desires an afterlife in heaven, then the use of reason can actually be employed to reinforce a faith in God (as it did for me, back when I was a Christian).  This doesn’t mean that faith in God is reasonable, but rather that when certain beliefs are isolated or compartmentalized in the right way (even in the case of imagining hypothetical scenarios), reason can be used to process a logical relation between them, even if those beliefs are themselves derived from illogical cognitive biases.  To see that this is true, one need only realize that an argument can be logically valid even if its premises are false, in which case the argument is unsound.
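To make that last point concrete, here is a schematic illustration (my own example, not Barrett’s) of a valid but unsound argument, written as a simple modus ponens:

```latex
% Modus ponens: the conclusion follows from the premises (validity),
% regardless of whether the premises are actually true.
%   P1: If trusting reason is faith-as-belief, then trusting reason is unjustified.
%   P2: Trusting reason is faith-as-belief.   <- false, on the view argued above
%   C:  Trusting reason is unjustified.
\[
\frac{P \rightarrow Q \qquad P}{\therefore\ Q}
\]
% The inference is valid, but because P2 is false the argument is
% unsound, and so C is not thereby established.
```

Validity concerns only the form of the inference; soundness additionally requires true premises, which is why the compartmentalized reasoning described above can be internally logical without being reasonable.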

To be as charitable as possible with the concept of faith (at least, one that is more broadly construed), I will make one more point that I think is relevant here: having faith as a form of positivity or optimism in one’s outlook on life, given the social and personal circumstances that one finds themselves in, is perfectly rational and reasonable.  It is well known that people tend to be happier and have more fulfilling lives if they steer clear of pessimism and simply try and stay as positive as possible.  One primary reason for this has to do with what I have previously called socio-psychological feedback loops (what I would also call an evidence-based form of Karma).  Thus, one can have an empirically demonstrable good reason to have an attitude that is reminiscent of faith, yet without having to sacrifice an epistemology that is reliable and which consistently tracks on to reality.

When it comes to the motivating factors that lead people to faith, existentialist thought can shed some light on what some of these factors may be.  Consider Paul the Apostle, for example, whom Barrett mentioned earlier; Barrett says of him:

“The central fact for his faith is that Jesus did actually rise from the dead, and so that death itself is conquered-which is what in the end man most ardently longs for.”

The finite nature of man, not only with regard to the intellect but also with respect to our lives in general, is something that existentialism both recognizes and sees as an integral starting point to philosophically build off of.  And I think it should come as no surprise that the fear of death in particular is likely to be a motivating factor for most if not all supernatural beliefs found within any number of religions, including the belief in possibly resurrecting one from the dead (as in the case of St. Paul’s belief in Jesus), the belief in spirits, souls, or other disembodied minds, or any belief in an afterlife (including reincarnation).

We can see that faith is, largely anyway, a means of reducing the cognitive dissonance that results from existential elements of our experience; it is a means of countering or rejecting a number of facts surrounding the contingencies and uncertainties of our existence.  From this fact, we can then see that by examining what faith is compensating for, one can begin to extract what the existentialists eventually elaborated on.  In other words, within existentialism, it is fully accepted that we can’t escape death (for example), and that we have to face death head-on (among other things) in order to live authentically; therefore having faith in overcoming death in any way is simply a denial of at least one burden willingly taken on by the existentialist.

Within early Christianity, we can also see some parallels between an existentialist like Kierkegaard and an early Christian author like Tertullian.  Barrett gives us an excerpt from Tertullian’s De Carne Christi:

“The Son of God was crucified; I am unashamed of it because men must needs be ashamed of it.  And the Son of God died; it is by all means to be believed, because it is absurd.  And He was buried and rose again; the fact is certain because it is impossible.”

This sounds a lot like Kierkegaard, despite the sixteen centuries separating the two figures, and despite the fact that Tertullian was writing when Christianity was first expanding and at its most aggressive, while Kierkegaard was writing near the end of Christianity’s dominance, after it had already drastically receded under the pressure of secularization and modernity.  In this quote of Tertullian’s, we can see a kind of submission to paradox, where it is because a particular proposition is irrational or unreasonable that it becomes an article of faith.  It’s not enough to say that faith is simply used to arrive at a belief that is unreasonable or irrational; rather, its very violation of reason is taken as a reason to use it and to believe that the claims of faith are in fact true.  So rather than merely seeing this as an example of irrationality, it makes for a good example of the kind of anti-rational attitudes that contributed to the existentialist revolt against some dominant forms of positivism; a revolt that didn’t really begin to precipitate until Kierkegaard’s time.

Although anti-rationalism is often associated with existentialism, I think one would be making an error to equate the two or to assume that anti-rationalism is integral or necessary to existentialism; though it should be said that this depends on how exactly one defines both anti-rationalism (and by extension rationalism) and existentialism.  The two terms have a variety of definitions (as does positivism, with Barrett giving us one particularly narrow definition back in chapter one).  If by rationalism one is referring to the very narrow view that rationality is the only means of acquiring knowledge, or the view that the only thing that matters in human thought or in human life is rationality, or even the view that humans are fundamentally or primarily rational beings, then I think it is fair to say that this is largely incompatible with existentialism, since the latter is often defined as a philosophy positing that humans are not primarily rational, and that subjective meaning (rather than rationality) plays a much more primary role in our lives.

However, if rationality is simply taken as a view that reason and rational thought are integral components (even if not the only components) for discovering much of the knowledge that exists about ourselves and the universe at large, then it is by all means perfectly compatible with most existentialist schools of thought.  But in order to remain compatible, some priority needs to be given to subjective experience in terms of where meaning is derived from, how our individual identity is established, and how we are to direct our lives and inform our system of ethics.

The importance of subjective experience, which became a primary assumption motivating existentialist thought, was also appreciated by early Christian thinkers such as St. Augustine.  In fact, Barrett goes so far as to say that the interiorization of experience and the primary focus on the individual over the collective was unknown to the Greeks and didn’t come about until Christianity had begun to take hold in Western culture:

“Where Plato and Aristotle had asked the question, What is man?, St. Augustine (in the Confessions) asks, Who am I?-and this shift is decisive.  The first question presupposed a world of objects, a fixed natural and zoological order, in which man was included; and when man’s precise place in that order had been found, the specifically differentiating characteristic of reason was added.  Augustine’s question, on the other hand, stems from an altogether different, more obscure and vital center within the questioner himself: from an acutely personal sense of dereliction and loss, rather than from the detachment with which reason surveys the world of objects in order to locate its bearer, man, zoologically within it.”

So rather than looking at man as some kind of well-defined abstraction within a categorical hierarchy of animals and objects, and one bestowed with a unique faculty of rational consciousness, man is looked at from the inside, from the perspective of, and identified as, an embodied individual with an experience that is deeply rooted and inherently emotional.  And in Augustine’s question, we also see an implication that man can’t be defined in a way that the Greeks attempted to do because in asking the question, Who am I?, man became separated from the kind of zoological order that all other animals adhered to.  It is the unique sense of self and the (often foreboding) awareness of this self then, as opposed to the faculty of reason, that Augustine highlighted during his moment of reflection.

Though Augustine was cognizant of the importance of exploring the nature of our personal, human existence (albeit for him, with a focus on religious experience in particular), he also explored aspects of human existence on the cosmic scale.  But when he went down this cosmic road of inquiry, he left the nature of personal lived existence by the wayside and instead went down a Neo-Platonist path of theodicy.  Augustine was after all a formal theologian, and one who tried to come to grips with the problem of evil in the world.  But he employed Greek metaphysics for this task, and tried to use logic mixed with irrational faith-based claims about the benevolence of God, in order to show that God’s world wasn’t evil at all.  Augustine did this by eliminating evil from his conception of existence such that all evil was simply a lack of being and hence a form of non-being and therefore non-existent.  This was supposed to be our consolation: ignore any apparent evil in the world because we’re told that it doesn’t really exist; all in the name of trying to justify the cosmos as good and thus to maintain a view that God is good.

This is such a shame in the end, for in Augustine’s attempt at theodicy, he simply downplayed and ignored the abominable and precarious aspects of our existence, which are as real a part of our experience as anything could be.  Perhaps Augustine was motivated to use rationalism here in order to feel comfort and safety in a world where many feel alienated and alone.  But as Barrett says:

“…reason cannot give that security; if it could, faith would be neither necessary nor so difficult.  In the age-old struggle between the rational and the vital, the modern revolt against theodicy (or, equally, the modern recognition of its impossibility) is on the side of the vital…”

And so once again we see that a primary motivation for faith is that many cannot get the consolation they long for through the use of reason.  They can’t psychologically deal with the way the world really is, and so they either use faith to believe the world is some other way, or they use reason together with a number of faith-based premises when evaluating our apparent place in the world.  St. Augustine actually thought that he could harmonize faith and reason (or what Barrett referred to as the vital and the rational), and this set the stage for the next thousand years of Christian thought.  Once faith claims became more explicit in the Church’s various articles of dogma, one was left to exercise rationality as one saw fit, so long as it stayed within the confines of that faith and dogma.  This of course led many medieval thinkers to mistake faith for reason (in many cases at least), a confusion reinforced by the fact that their use of reason was operating on faith-based premises.

The problematic relation between the vital and the rational didn’t disappear even after many a philosopher assumed the two forces were in complete harmony and agreement; instead, the clash resurfaced in a debate between Voluntarism and Intellectualism, with our attention now turned toward St. Thomas Aquinas.  As an intellectualist, St. Thomas argued that the intellect in man is prior to the will of man because the intellect determines the will, since we can only desire what we know.  John Duns Scotus, on the other hand, a voluntarist, responded that the will determines which ideas the intellect turns toward, and thus the will ends up determining what the intellect comes to know.

As an aside, I think that Scotus was more likely correct here, because while the intellect may inform the will (and thus help to direct it), there is something far more fundamental and unconscious at play that drives our engagement with the world as organisms evolved to survive.  For example, we have a curiosity for both novelty (seeking new information) and familiarity (a maximal understanding of incoming information), and basic biologically-driven desires for satisfaction and homeostasis.  These drives simply don’t depend on the intellect even though they can make use of it, and they will continue to operate even if the intellect does not.  I also disagree with St. Thomas’ claim that one can only desire that which one already knows, for one can desire knowledge for its own sake without knowing exactly what knowledge is yet to come (again from an independent drive, which we could call the will), and one can also desire a change in the world from the way one currently knows it to be to some other, as yet unspecified way (e.g. to desire a world without a specific kind of suffering, even though one may not know exactly what that would be like, having never lived such a life before).

The crux of the debate between the primacy of the will versus the intellect can perhaps be re-framed accordingly:

“…not in terms of whether will is to be given primacy over the intellect, or the intellect over the will – these functions being after all but abstract fragments of the total man – but rather in terms of the primacy of the thinker over his thoughts, of the concrete and total man himself who is doing the thinking…the fact remains that Voluntarism has always been, in intention at least, an effort to go beyond the thought to the concrete existence of the thinker who is thinking that thought.”

Here again I see the concept of an embodied and evolved organism and a reflective self at play, where the contents of consciousness cannot be taken on their own, nor as primary, but rather they require a conscious agent to become manifest, let alone to have any causal power in the world.

2. Existence vs. Essence

There is a deeper root to the problem underlying the debate between St. Thomas and Scotus and it has to do with the relation between essence and existence; a relation that Sartre (among others) emphasized as critical to existentialism.  Whereas the essence of a thing is simply what the thing is (its essential properties for example), the existence of a thing simply refers to the brute fact that the thing is.  A very common theme found throughout Sartre’s writings is the basic contention that existence precedes essence (though, for him, this only applies to the case of man).  Barrett gives a very simplified explanation of such a thesis:

“In the case of man, its meaning is not difficult to grasp.  Man exists and makes himself to be what he is; his individual essence or nature comes to be out of his existence; and in this sense it is proper to say that existence precedes essence.  Man does not have a fixed essence that is handed to him ready-made; rather, he makes his own nature out of his freedom and the historical conditions in which he is placed.”

I think this concept also makes sense when we put it into the historical context going back to ancient Greece.  Whereas the Greeks tried to place humanity within a zoological order and categorize us by some essential qualities, the existentialists who later came on the scene rejected such an idea almost as a kind of categorical error.  It was simply not possible to classify human beings the way one could classify inanimate objects, or other animals that lack the behavioral complexity and the will to define themselves that we possess.

Barrett also explains that the essence versus existence relation breaks down into two different but related questions:

1) Does existence have primacy over essence, or the reverse?

2) In actual existing things is there a real distinction between the two?  Or are they merely different points of view that the mind takes toward the same existing thing?

In an attempt at answering these questions, I think it’s reasonable to say that essence has more to do with potentiality (a logically possible set of abstractions or properties) and existence has more to do with actuality (a logically and physically possible past or present state of being).  I also think that existence is necessary in order for any essence to be real in any way, and not because an actual object needs to exist in order to instantiate a physical instance of some specific essence, but because any essence is merely an abstraction created by a conscious agent who inferred it in the first place.  Quite simply, there is no Platonic realm for the essence to “exist” within and so it needs an existing consciousness to produce it.  So I would say that existence has primacy over essence since an existent is needed for an essence to be conceived, let alone instantiated at all.

As for the second question, I think there is a real distinction between the two and that they are different points of view that the mind takes toward the same existing thing.  I don’t see this as an either/or dichotomy.  The mind can certainly acknowledge the existence of some object, animal, or person, without giving much thought to what its essence is (aside from its being distinct enough to consider it a thing), but it can never acknowledge the essence of an actual object, animal, or person without also acknowledging its existence.  There’s also something relevant to be said about how our minds differentiate between an actual perception and an imagined one, even between an actual life and an imagined one, or an actual person and an imagined one; something that we do all the time.

On the other side of this issue, within the debate between essentialism and existentialism, are the questions of whether or not there are any essential qualities of human beings that make them human, and if so, whether or not any of these essential qualities are fixed for human beings.  Most people are familiar with the idea of human nature, whereby there are some biologically-grounded innate predispositions that affect our behavior in some ways.  We are evolved animals after all, and the only way for evolution to work is to have genes providing different phenotypic effects on the body and behavior of a population of organisms.  Since about half of our own genes are used to code for our brain’s development, its basic structure, and functionality, it should seem obvious to anyone that we’re bound to have some set of innate behavioral tendencies as well as some biological limitations on the space of possibilities for our behavior.

Thus, any set of innate behavioral tendencies that are shared by healthy human beings would constitute our human nature.  And furthermore, this behavioral overlap would be a fixed attribute of human nature insofar as the genes coding for that innate behavioral overlap don’t evolve beyond a certain degree.  Human nature is simply an undeniable fact about us grounded in our biology.  If our biology changes enough, then we will eventually cease to be human, or if we still call ourselves “human” at that point in time then we will cease to have the same nature since we will no longer be the same species.  But as long as that biological evolution remains under a certain threshold, we will have some degree of fixity in our human nature.

But unlike most other organisms on earth, human beings have an incredibly complex brain as well as a number of culturally inherited cognitive programs that provide us with a seemingly infinite number of possible behaviors and subsequent life trajectories.  And I think this fact has fueled some of the disagreement between the existentialists and the essentialists.  The essentialists (or many of them at least) have maintained that we have a particular human nature, which I think is certainly true.  But, as per the existentialists, we must also acknowledge just how vast our space of possibilities really is and not let our human nature be the ultimate arbiter of how we define ourselves.  We have far more freedom to choose a particular life project and way of life than one would be led to believe given some preconceived notion of what is essentially human.

3. The Case of Pascal

Science brought about a dramatic change to Western society, effectively carrying us out of the Middle Ages within a century; but this change also created the very environment that made modern Existentialism possible.  Cosmology, for example, led the 17th-century mathematician and theologian Blaise Pascal to say that “The silence of these infinite spaces [outer space] frightens me.”  In this statement he was expressing the general reaction of humanity to the world that science had discovered; a world that made man feel homeless and insignificant.

As a religious man, Pascal was also seeking to reconcile this world discovered by science with his own faith.  This was a difficult task, but he found some direction in looking at the wretchedness of the human condition:

“In Pascal’s universe one has to search much more desperately to find any signposts that would lead the mind in the direction of faith.  And where Pascal finds such a signpost, significantly enough, is in the radically miserable condition of man himself.  How is it that this creature who shows everywhere, in comparison with other animals and with nature itself, such evident marks of grandeur and power is at the same time so feeble and miserable?  We can only conclude, Pascal says, that man is rather like a ruined or disinherited nobleman cast out from the kingdom which ought to have been his (his fundamental premise is the image of man as a disinherited being).”

Personally I see this as indicative of our being an evolved species, albeit one with an incredibly rich level of consciousness and intelligence.  In some sense our minds’ inherent capacities have a potential far beyond what most humans ever come close to actualizing, and this may create a feeling of our having lost something that we deserve; some possible yet unrealized world that would make more sense for us to have, given the level of power and potential we possess.  And perhaps this lends itself to a feeling of angst or a lack of fulfillment as well.

As Pascal himself said:

“The natural misfortune of our mortal and feeble condition is so wretched that when we consider it closely, nothing can console us.” 

Furthermore, Barrett reminds us of Pascal having mentioned the human tendency of escaping from this close consideration of our condition through the two “sovereign anodynes” of habit and diversion:

“Both habit and diversion, so long as they work, conceal from man “his nothingness, his forlornness, his inadequacy, his impotence and his emptiness.”  Religion is the only possible cure for this desperate malady that is nothing other than our ordinary mortal existence itself.”

Barrett describes our state of cultivating various habits and distractions as a kind of concealment of the ever-narrowing space of possibilities in our lives, as each day passes us by with some set of hopes and dreams forever lost in the past.  I think it would be fair to say, however, that the psychological benefit we gain from many of our habits and distractions warrants our having them in the first place.  As an analogy, if one day I am informed by my doctor that I’m going to die from a terminal illness in a few months (thus drastically narrowing the future space of possibilities in my life), is there really a problem with my trying to keep my mind off such a morbid fate, so that I can make the most of the time I have left?  I certainly wouldn’t advocate burying one’s head in the sand, nor having irrational expectations, nor living in denial; and so I think the time will be used best if I don’t completely forget that my timeline is coming to an end, because then I can more adequately appreciate the time that remains.

Moreover, I don’t agree that religion is the only possible cure for dealing with our “ordinary mortal existence”.  I think that psychological and moral development are the ultimate answers here, where a critical look at oneself and an ongoing attempt to become a better version of oneself in order to maximize one’s life fulfillment is key.  The world is whatever way it is, but that doesn’t change the fact that certain attitudes and behaviors can make our experience of the world, as it truly is, better or worse.  Falling back on religion to console us is, to use Pascal’s own words, nothing but another habit and form of distraction, and one where we sacrifice knowing and facing the truth of our condition for some kind of ignorant bliss that relies on false beliefs about the world.  And I don’t think we can live authentically unless we face the condition we’re in as a species, and more importantly as individuals, by trying to believe as many true things and as few false things as possible.

When it comes to the human mind and how we deal with or process information about our everyday lives or aspects of our overall condition, it can be a rather messy process involving very complicated concepts, many of which have fuzzy boundaries and so can’t be dealt with in the same way as analyses relying strictly on logic.  In some cases of analyzing our lives, we don’t know exactly what the premises could even be, let alone how to arrange them within a logical structure to arrive at some kind of certain conclusion.  And this differs a lot from other areas of inquiry like, say, mathematics.  As Pascal delved deeper into the study of our human condition, he realized this difference and posited an interesting distinction in terms of what the mind can comprehend in these different contexts: the mathematical mind and the intuitive mind.  Within a field of study such as mathematics, we make use of our mind’s ability to logically comprehend various distinct and well-defined ideas, whereas in other, more concrete domains of our experience, we rely quite heavily on intuition.

Not only is logic not very useful in the more concrete and complex domains of our lives, but in many cases it may actually get in the way of seeing things clearly since our intuition, though less reliable than logic, is more up to the task of dealing with complexities that haven’t yet been parsed out into well-defined ideas and concepts.  Barrett describes this a little differently than I would when he says:

“What Pascal had really seen, then, in order to have arrived at this distinction was this: that man himself is a creature of contradictions and ambivalences such as pure logic can never grasp…By delimiting a sphere of intuition over against that of logic, Pascal had of course set limits to human reason.”

In the case of logic, one is certainly limited in its use when the concepts involved are probabilistic (rather than being well-defined or “black and white”) and there are significant limitations when our attitudes toward those concepts vary contextually.  We are after all very complex creatures with a number of competing goals and with a prioritization of those goals that varies over time.  And of course we have a number of value judgments and emotional dispositions that motivate our actions in often unpredictable ways.  So it seems perfectly reasonable to me that one should acknowledge the limits we have in our use of logic and reason.

Barrett makes another interesting claim about the limits of reason when he says:

“Three centuries before Heidegger showed, through a learned and laborious exegesis, that Kant’s doctrine of the limitations of human reason really rests on the finitude of our human existence, Pascal clearly saw that the feebleness of our reason is part and parcel of the feebleness of our human condition generally.  Above all, reason does not get at the heart of religious experience.”

And I think the last sentence is most important here, where we try and understand that the limitations of reason include the inability to examine all that matters and all that is meaningful within religious experience.  Even though I am a kind of champion for reason, in the sense that I find it to be in short supply when it matters most and in the sense that I often advertise the dire need for critical thinking skills and the need for more education to combat our cognitive biases, I for one have never made the assumption that reason could ever accomplish such an arduous task of dissecting religious experience in some reductionist way while retaining or accounting for everything of value within it.  I have however used reason to analyze various religious beliefs and claims about the world in order to discover their consequences on, or incompatibilities with, a number of other beliefs.

Theologians do something similar in their attempts to prove the existence of God, by beginning with some set of presuppositions (about God or the supernatural), and then applying reason to those faith-based premises to see what would result if they were in fact true.  And Pascal saw this mental exercise as futile and misguided as well because it misses the entire point of religion:

“In any case, God as the object of a rigorous demonstration, even supposing such a demonstration were forthcoming, would have nothing to do with the living needs of religion.”

I admire the fact that Pascal was honest here about what really matters most when it comes to religion and religious beliefs.  It doesn’t matter whether or not God exists, or whether any given religious claim is actually true or false; rather, it is the living need for religion, as he perceives it, that ultimately motivates one to adhere to it.  Thus, theology doesn’t really matter to him, because those who want to be convinced by the arguments for God’s existence will be convinced simply because they desire the conclusion, insofar as they believe it will give them consolation from the human condition they find themselves in.

To add something to the bleak picture of just how contingent our existence is, Barrett mentions a personal story that Pascal shared about his having had a brush with death:

“While he was driving by the Seine one day, his carriage suddenly swerved, the door was flung open, and Pascal almost catapulted down the embankment to his death.  The arbitrariness and suddenness of this near accident became for him another lightning flash of revelation.  Thereafter he saw Nothingness as a possibility that lurked, so to speak, beneath our feet, a gulf and an abyss into which we might tumble at any moment.  No other writer has expressed more powerfully than Pascal the radical contingency that lies at the heart of human existence – a contingency that may at any moment hurl us all unsuspecting into non-being…The idea of Nothingness or Nothing had up to this time played no role at all in Western philosophy…Nothingness had suddenly and drastically revealed itself to him.”

And here it is: the concept of Nothingness which has truly grounded a bulk of the philosophical constructs found within existentialism.  To add further to this, Pascal also saw that one’s inevitable death wasn’t the whole picture of our contingency, but only a part of it; it was our having been born at a particular time, place, and within a certain culture that vastly reduces the number of possible life trajectories one can take, and thus vastly reduces our freedom in establishing our own identity.  Our life begins with contingency and ends with it also.

Pascal also acknowledged our existence as lying in the middle of the universe, between the infinitesimal and microscopic on one hand and the seemingly infinite at the cosmological scale on the other.  Within this middle position, he saw man as being “an All in relation to Nothingness, a Nothingness in relation to the All.”  I think it’s also worth pointing out that it is our evolutionary history, and the “middle position” we evolved within, that cause the drastic misalignment between the world we were in some sense meant to understand and the micro and cosmic scales we’ve discovered in the last several hundred years.

Aside from any feelings of alienation and homelessness that these different scales have precipitated, our intuition simply never evolved to understand existence at the infinitesimal quantum level or at the cosmic relativistic level, which is why these discoveries in physics are difficult to understand even for experts in the field.  We should expect these discoveries to be vastly perplexing and counter-intuitive, for our consciousness evolved to deal with life at a very finite scale; yet another example of our finitude as human beings.

It was this very same wealth of information about our world, gained through these kinds of discoveries along with major advancements in mathematics, that also promoted the assumption that human nature could be perfected through the universal application of reason.

I’ll finish the post on this chapter with one final quote from Barrett that I found insightful:

“Poets are witnesses to Being before the philosophers are able to bring it into thought.  And what these particular poets were struggling to reveal, in this case, were the very conditions of Being that are ours historically today.  They were sounding, in poetic terms, the premonitory chords of our own era.”

It is the voice of the poets that we first hear lamenting the Enlightenment and what reason seems to have done, or at least what it had begun to do.  They were some of the most important precursors to the later existentialists and philosophers who would soon begin to process (in far more detail and with far more precision) the full effects of modernity on the human psyche.

Irrational Man: An Analysis (Part 1, Chapter 2: “The Encounter with Nothingness”)

In the first post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 1: The Advent of Existentialism, where Barrett gives a brief description of what he believes existentialism to be, and the environment it evolved within.  In this post, I want to explore Part 1, Chapter 2: The Encounter with Nothingness.

Ch. 2 – The Encounter With Nothingness

Barrett talks about the critical need for self-analysis, despite the fact that many feel this task has already been carried out exhaustively.  But Barrett sees this attitude as contemporary society’s running away from the facts of our own ignorance.  Modern humankind, it seems to Barrett, is in even greater need of questioning its identity, for we seem to understand ourselves even less than when we first began to question who we are as a species.

1. The Decline of Religion

Ever since the end of the Middle Ages, religion as a whole has been on the decline.  This decrease in religiosity (particularly in Western civilization) became most prominent during the Enlightenment.  As science began to take off, the mechanistic structure and qualities of the universe (i.e. its laws of nature) began to reveal themselves in more and more detail.  This in turn led to a replacement of a large number of superstitious and supernatural religious beliefs about the causes for various phenomena with scientific explanations that could be empirically verified and tested.  Throughout this process, as theological explanations became replaced more and more with naturalistic explanations, the presumed role of God and the Church began to evaporate.  Thus, at the purely intellectual level, we underwent a significant change in terms of how we viewed the world and subsequently how we viewed the nature of human beings and our place in the world.

But, as Barrett points out:

“The waning of religion is a much more concrete and complex fact than a mere change in conscious outlook; it penetrates the deepest strata of man’s total psychic life…Religion to medieval man was not so much a theological system as a solid psychological matrix surrounding the individual’s life from birth to death, sanctifying and enclosing all its ordinary and extraordinary occasions in sacrament and ritual.”

We can see here how the role of religion has changed to some degree from medieval times to the present day.  Rather than simply being a set of beliefs that identified a person with a particular group and that had soteriological, metaphysical, and ethical significance to the believer (as is more the case in modern times), it used to be a complete system, a total solution for how one was to live one’s life.  And it also provided a means of psychological stability and coherence by providing a ready-made narrative of the essence of man; a sense of familiarity and a pre-defined purpose and structure that didn’t have to be constructed from scratch by the individual.

While the loss of the Church involved losing an entire system of dogmatic teachings, symbols, and various rites and sacraments, the most important loss according to Barrett was the loss of a concrete connection to a transcendent realm of being.  We were now set free such that we had to grapple with the world on our own, with all its precariousness, and to deal head-on with the brute facts of our own existence.

What I find most interesting in this chapter is when Barrett says:

“The rationalism of the medieval philosophers was contained by the mysteries of faith and dogma, which were altogether beyond the grasp of human reason, but were nevertheless powerfully real and meaningful to man as symbols that kept the vital circuit open between reason and emotion, between the rational and non-rational in the human psyche.”

And herein lies the crux of the matter; for Barrett believes that religion’s greatest function historically was its serving as a bridge between the rational and non-rational elements of our psychology, and also its serving as a barrier that limited the effective reach and power of our rationality over the rest of our psyches and our view of the world.  I would go even further to suggest that it may have allowed our emotional expression to more harmoniously co-exist and work with our reason instead of primarily being at odds with it.

I agree with Barrett’s point here in that religion often promulgates ideas and practices that appeal to many of our emotional dispositions and intuitions, thus allowing people to express certain emotional states and to maintain comforting intuitions that might otherwise be hindered or subjugated by reason and rationality.  It has also provided a path for reason to connect to the non-rational to some degree, minimizing the need to compartmentalize rationality from the emotional or irrational influences on a person’s belief systems.  By granting people an opportunity to combine reason and emotion, where reason could be used to make some sense of emotion and give it some kind of validation without having to be rejected completely, religion has been effective (historically anyway) in helping people avoid the discomfort of rejecting beliefs they know to be reasonable (many of them at least) while also avoiding the discomfort of inadequate emotional or non-rational expression.

Once religion began to fall by the wayside, due in large part to the knowledge accumulated through reason and scientific progress, it became increasingly difficult to square the evidence and arguments that were becoming more widely known with many of the claims that religion and the Church had been propagating for centuries.  Along with this growing loss of credibility came an increased need to compartmentalize reason and rationality from emotionally and irrationally derived beliefs and experiences.  And this difficulty led to a decline in religiosity for many, accompanied by the loss of the emotional and non-rational expression that religion had once offered the masses.  Once reason and rationality expanded beyond a certain threshold, they effectively popped the religious bubble that had previously contained them, causing many to begin to feel homeless, out of place, and in many ways incomplete in the new world they now found themselves living in.

2. The Rational Ordering of Society

The organization of our lives has been, historically at least, a relatively organic process, with a kind of self-guiding, pragmatic, and intuitive structure and evolution.  But as we approached the modern age, as Barrett points out, we saw a drastic shift toward an increasingly rational form of organization, where efficiency and a sort of technical precision began to dominate the overall direction of society and the lives of each individual.  The rise of capitalism was a part of this cultural evolutionary process (as was, I would argue, the Industrial Revolution), and it only further enhanced the power and influence of reason and rationality over our day-to-day lives.

The collectivization and distribution of labor involved in the mass production of commodities and various products had taken us entirely out of our agrarian and hunter-gatherer roots.  We no longer lived off of the land so to speak, and were no longer ensconced within the kinds of natural scenery while performing the types of day-to-day tasks that our species had adapted to over its long-term evolutionary history.  And with this collectivization, we also lost a large component of our individuality; a component that is fairly important in human psychology.

Barrett comments on how we’ve accepted modern society as normal, relatively unaware of our ancestral roots and our previous way of life:

“We are so used to the fact that we forget it or fail to perceive that the man of the present day lives on a level of abstraction altogether beyond the man of the past.”

And he goes on to talk about how our ratcheting forward in technological progress and mechanistic societal restructuring is what gives us our incredible power over our environment, but at the cost of leaving us rootless and without any concrete sense of feeling, at a time when it’s needed more than ever.

Perhaps a more interesting point he makes is with respect to how our increased mechanization and collectivization has changed a fundamental part of how our psyche and identity operate:

“Not only can the material wants of the masses be satisfied to a degree greater than ever before, but technology is fertile enough to generate new wants that it can also satisfy…All of this makes for an extraordinary externalization of life in our time. “

And it is this externalization of our identity and psychology, manifested in ever-growing and ever-changing sets of material objects and information flow, that is interesting to ponder.  It reminds me somewhat of Richard Dawkins’ concept of the extended phenotype, where the effects of an organism’s genes aren’t merely limited to the organism’s body, but rather extend into how the organism structures its environment.  While this term is technically limited to behaviors that have a direct bearing on the organism’s survival, I prefer to think of the extended phenotype as encompassing everything the organism creates and the totality of its behaviors.

The reason I mention this concept is because I think it makes for a useful analogy here.  For in the earlier evolution of organisms on earth, the genes’ effects or the resulting phenotypes were primarily manifested as the particular body and bodily behavior of the organism, and as organisms became more complex (particularly those that evolved brains and a nervous system), that phenotype began to extend itself into the abilities of an organism to make external structures out of raw materials found in its environment.  And more and more genetic resources were allotted to the organism’s brain which made this capacity for environmental manipulation possible.  As this change occurred, the previous boundaries that defined the organism vanished as the external constructions effectively became an extension of the organism’s body.  But with this new capacity came a loss of intimacy in the sense that the organism wasn’t connected to these external structures in the same way it was connected to its own feelings and internal bodily states; and these external structures also lacked the privacy and hidden qualities inherent in an organism’s thoughts, feelings, and overall subjective experience.

Likewise, as we’ve evolved culturally, eventually gaining the ability to construct and mass-produce a plethora of new material goods, we began to dedicate a larger proportion of our attention to these different external objects, wanting more and more of them well past what we needed for survival.  And we began to invest or incorporate more of ourselves, including our knowledge and information, in these externalities, forcing us to compensate by investing less and less in our internal, private states and locally stored knowledge.  Now it would be impractical if not impossible for an organism to perform increasingly complex behaviors and to continuously increase its ability to manipulate its own environment without this kind of trade-off occurring in terms of its identity and how it distributes its limited psychological resources.

And herein lies the source of our seemingly fractured psyche: the natural selection of curiosity, knowledge accumulation, and behavioral complexity for survival purposes has become co-opted for just about any purpose imaginable, since the hardware and schema required for the former have a capacity that transcends their evolutionary purpose, along with the finite boundaries, the guiding constraints, and the essential structure of the lives we once had in our evolutionary past.  Now we’ve moved beyond what used to be a kind of essential quality and highly predictable trajectory of our lives and of our species, and into the unknown; from the realm of the finite and the familiar to the seemingly infinite realm of the unknown.

A big part of this externalization has manifested itself in the new ways we acquire, store, and share information, such as with the advent of mass media.  As Barrett puts it:

“…journalism enables people to deal with life more and more at second hand.  Information usually consists of half-truths, and “knowledgability” becomes a substitute for real knowledge.  Moreover, popular journalism has by now extended its operations into what were previously considered the strongholds of culture – religion, art, philosophy…It becomes more and more difficult to distinguish the secondhand from the real thing, until most people end by forgetting there is such a distinction.”

I think this ties well into what Barrett mentioned previously when he talked about how modern civilization is built on increasing levels of abstraction.  The very information we’re absorbing, in order to make sense of and deal with a large aspect of our contemporary world, is secondhand at best.  The information we rely on has become increasingly abstracted, manipulated, reinterpreted, and distorted.  The origin of so much of this information is now at least one level away from our immediate experience, giving it a quality that is disconnected, less important, and far less real than it otherwise would be.  But we often forget that there’s any epistemic difference between our first-hand lived experience and the information that arises from our mass media.

To add to Barrett’s previous description of existentialism as a reaction against positivism, he also mentions Karl Jaspers’ views of existentialism, which he described as:

“…a struggle to awaken in the individual the possibilities of an authentic and genuine life, in the face of the great modern drift toward a standardized mass society.”

Though I concur with Jaspers’ claim that modernity has involved a shift toward a standardized mass society in a number of ways, I also think that it has provided the means for many more ways of being unique, many more possible talents and interests for one to explore, and many more kinds of goals to choose from for one’s life project(s).  Collectivization and distribution of labor, and the technology that has emerged from them, have allowed many to avoid spending all day hunting and gathering food, making or cleaning their clothing, and performing other tasks that had previously consumed most of one’s time.

Now many people (in the industrialized world at least) have the ability to accumulate enough free time to explore many other types of experiences, including reading and writing, exploring aspects of our existence with highly-focused introspective effort (as in philosophy), creating or enjoying vast quantities of music and other forms of art, listening to and telling stories, playing any number of games, and thousands of other activities.  And even though some of these activities have been around for millennia, many of them have not (or there was little time for them), and of those that have been around the longest, there were still far fewer choices than what we have on offer today.  So we mustn’t forget that many people develop a life-long passion for at least some of these experiences that would never have been made possible without our modern society.

The issue I think lies in the balance or imbalance between standardization and collectivization on the one hand (such that we reap the benefits of more free time and more recreational choices), and opportunities for individualistic expression on the other.  And of the opportunities that exist for individualistic expression, there is still the need to track the psychological consequences that result from them so we can pick more psychologically fulfilling choices; so that we can pick choices that better allow us to keep open that channel between reason and emotion and between the rational and the non-rational/irrational that religion once provided, as Barrett mentioned earlier.

We also have to accept the fact that the findings in science have largely dehumanized or inhibited the anthropomorphization of nature, instead showing us that the universe is indifferent to us and to our goals; that humans and life in general are more of an aberration than anything else within a vast cosmos that is inhospitable to life.  Only after acknowledging the situation we’re in can we fully appreciate the consequences that modernity has had on upending the comforting structure that religion once gave to humans throughout their lives.  As Barrett tells us:

“Science stripped nature of its human forms and presented man with a universe that was neutral, alien, in its vastness and force, to his human purposes.  Religion, before this phase set in, had been a structure that encompassed man’s life, providing him with a system of images and symbols by which he could express his own aspirations toward psychic wholeness.  With the loss of this containing framework man became not only a dispossessed but a fragmentary being.”

Although we can’t unlearn the knowledge that has caused religion to decline, including the knowledge bearing on questions of our ultimate meaning and purpose, we can certainly find new ways of filling the psychological void felt by many as a result of this decline.  The modern world has many potential opportunities for psychologically fulfilling projects in life, and these opportunities need to be more thoroughly explored.  But existentialist thought rightly reminds us of how the fruits of our rational and enlightened philosophy have been less capable of providing as satisfying an answer to the question “What are human beings?” as religion once gave.  Along with the fruits of the Enlightenment came a lack of consolation, and a number of painful truths pertaining to the temporal and contingent nature of our existence, previously thought to possess both an eternal and necessary character.  Overall, this cultural change and accumulation of knowledge effectively forced humanity out of its comfort zone.  Barrett described the situation quite well when he said:

“In the end, he [modern man] sees each man as solitary and unsheltered before his own death.  Admittedly, these are painful truths, but the most basic things are always learned with pain, since our inertia and complacent love of comfort prevent us from learning them until they are forced upon us.”

Modern humanity, then, has become alienated from God, from nature, and from the complex social machinery that produces the goods and services that it both wants and needs.  Even worse, however, is the alienation from one’s own self that has occurred as humans have found themselves living in a society that expects each of them to perform some specific function in life (most often not of their choosing), leading society to effectively identify each person with this function, forgetting or ignoring the real person buried underneath.

3.  Science and Finitude

In this last section of chapter two, Barrett discusses some of the ultimate limitations in our use of reason and in the scope of knowledge we can obtain from our scientific and mathematical methods of inquiry.  Reason itself is described as the product of a creature “whose psychic roots still extend downward into the primeval soil,” and is thus a creation from an animal that is still intimately connected to an irrational foundation; an animal still possessing a set of instincts arising from its evolutionary origins.

We see this core presence of irrationality in our day-to-day lives whenever we have trouble trying to employ reason, such as when it conflicts with our emotions and intuitions.  And Barrett brings up the optimism of the confirmed rationalist: the belief that they may still one day overcome all of these obstacles of irrationality simply by employing reason more cleverly than before.  Clearly Barrett doesn’t share this optimism, and he tries to support his pessimism by pointing out a few limitations of reason suggested within the work of Immanuel Kant, modern physics, and mathematics.

In Kant’s Critique of Pure Reason, he lays out a substantial number of claims and concepts relating to metaphysics and epistemology, where he discusses the limitations of both reason and the senses.  Among other things, he claims that our reason is limited by certain a priori intuitional forms or predispositions about space and time (for example) that allow us to cognize any thing at all.  And so if there are any “things in themselves”, that is, real objects underlying whatever appears to us in the external world, where these objects have qualities and attributes that are independent of our experience, then we can never know anything of substance about these underlying features.  We can never see an object or understand it in any way without incorporating a large number of intuitional forms and assumptions in order to create that experience at all; we can never see the world without seeing it through the limited lens of our minds and our body’s sensory machinery.

For Kant, this also means that we may often use reason erroneously to make unjustified claims of knowledge pertaining to the transcendent, particularly within metaphysics and theology.  When reason is applied to ideas that can’t be confirmed through sensory experience, or that lie outside of our realm of possible experience (such as the idea that cause and effect laws govern every interaction in the universe, something we can never know through experience), it leads to knowledge claims that it can’t justify.  Another limitation of reason, according to Kant, is that it operates through an a priori assumption of unification in our experience, and so the categories and concepts that we infer to exist based on reason are limited by this underlying unifying principle.  Science has added to the rigidity and specificity of this unification, by going beyond what we’ve unified through our unmodified experience (i.e. seeing the sun “rise” and “set” every day), that is, experience without the use of any instruments, telescopes, microscopes, etc. (where the use of these instruments has helped give us more data showing that the earth rotates on an axis rather than the sun revolving around the earth).  Nevertheless, unification is the ultimate goal of reason whether applied with or without a strict scientific method.

Then within physics, we find another limitation on our possible understanding of the world.  Heisenberg’s uncertainty principle showed us that we are unable to know pairs of complementary variables pertaining to a particle with arbitrarily high precision.  We eventually hit a limit where, for example, if we know the position of a particle (such as an electron) with a high degree of accuracy at some particular time, then we’re unable to know the momentum of that particle with the same accuracy.  The more we know about one complementary variable, the less we know about the other.  Barrett describes this discovery in physics as showing that nature may be irrational and chaotic at its most fundamental level, and then he says:

“What is remarkable is that here, at the very farthest reaches of precise experimentation, in the most rigorous of the natural sciences, the ordinary and banal fact of our human limitations emerges.”

This is indeed interesting because for a long while many rationalists and positivists held that our potential knowledge was unlimited (or finite, yet complete) and that science was the means to gain this open-ended or complete knowledge of the universe.  Then quantum physics delivered a major blow to this assumption, showing us instead that our knowledge is inherently limited based on how the universe is causally structured.  However, it’s worth pointing out that this is not a human limitation per se, but a limitation of any conscious system acquiring knowledge in our universe.  It wouldn’t matter if humans had different brains, better technology, or used artificial intelligence to perform the computations or measurements.  There is simply a limit on what can ever be measured; a limit on what can ever be known about the state of any system that is being studied.
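As a quick numeric illustration of the limit being described (a minimal sketch of my own, using the standard relation Δx·Δp ≥ ħ/2 and published constant values; it is not from Barrett’s text):

```python
# Minimum momentum uncertainty for a particle confined to a region of width dx,
# from the Heisenberg relation: dx * dp >= hbar / 2.
# Constants are standard CODATA values.
HBAR = 1.054_571_817e-34      # reduced Planck constant, J*s
M_ELECTRON = 9.109_383_70e-31  # electron mass, kg

def min_momentum_uncertainty(dx_m: float) -> float:
    """Lower bound on momentum uncertainty (kg*m/s) for position uncertainty dx (m)."""
    return HBAR / (2 * dx_m)

# An electron localized to roughly an atom's width (~1 angstrom = 1e-10 m):
dp = min_momentum_uncertainty(1e-10)
dv = dp / M_ELECTRON  # corresponding velocity-uncertainty scale, m/s
```

For an electron confined to atomic scales, the resulting velocity uncertainty works out to hundreds of kilometers per second, which is why this limit dominates at the quantum scale while remaining invisible in everyday life.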

Gödel’s incompleteness theorems likewise showed that there are inherent limitations in any formal system rich enough to express number theory: no matter how large or seemingly complete its set of axioms, if the system is consistent, then there will be statements that can’t be proven or disproven within it.  Likewise, the consistency of any such system can’t be proven by the system itself; rather, it depends on assumptions that lie outside of it.  However, once again, contrary to what Barrett claims, I don’t think this should be taken as a human limitation of reason, but rather a limitation of formal mathematics generally.
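For readers who want the claims pinned down, the two theorems can be stated more precisely as follows (a standard modern paraphrase, not Barrett’s wording):

```latex
\textbf{First incompleteness theorem.} If $F$ is a consistent, effectively
axiomatized formal system extending elementary arithmetic, then there exists
a sentence $G_F$ in the language of $F$ such that
\[
  F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F .
\]
\textbf{Second incompleteness theorem.} For any such $F$,
\[
  F \nvdash \mathrm{Con}(F),
\]
where $\mathrm{Con}(F)$ is a sentence of $F$ formalizing ``$F$ is consistent.''
```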

Regardless, I concede Barrett’s overarching point that, in all of these cases (Kant, physics, and mathematics), we are still running into scenarios where we are unable to do something or to know something that we may have previously thought we could do or know, at least in principle.  And so these discoveries did run counter to the beliefs of many who thought that humans were inherently unstoppable in these endeavors of knowledge accumulation and in our ability to create technology capable of solving any problem whatsoever.  We can’t do this.  We are finite creatures with finite brains, but perhaps more importantly, our capacities are also limited by what knowledge is theoretically available to any conscious system trying to better understand the universe.  The universe is inherently unknowable in at least some respects, which means it is unpredictable in at least some respects.  And unpredictability doesn’t sit well with humans intent on eliminating uncertainty, eliminating the unknown, and trying to have a coherent narrative to explain everything encountered throughout one’s existence.

In the next post in this series, I’ll explore Irrational Man, Part 1, Chapter 3: The Testimony of Modern Art.

The Experientiality of Matter

If there’s one thing that Nietzsche advocated for, it was to eliminate dogmatism and absolutism from one’s thinking.  And although I generally agree with him here, I do think that there is one exception to this rule.  One thing that we can be absolutely certain of is our own conscious experience.  Nietzsche actually had something to say about this (in Beyond Good and Evil, which I explored in a previous post), where he responds to Descartes’ famous adage “cogito ergo sum” (the very adage associated with my blog!), and he basically says that Descartes’ conclusion (“I think therefore I am”) shows a lack of reflection concerning the meaning of “I think”.  He wonders how he (and by extension, Descartes) could possibly know for sure that he was doing the thinking rather than the thought doing the thinking, and he even considers the possibility that what is generally described as thinking may actually be something more like willing or feeling or something else entirely.

Despite Nietzsche’s criticism against Descartes in terms of what thought is exactly or who or what actually does the thinking, we still can’t deny that there is thought.  Perhaps if we replace “I think therefore I am” with something more like “I am conscious, therefore (my) conscious experience exists”, then we can retain some core of Descartes’ insight while throwing out the ambiguities or uncertainties associated with how exactly one interprets that conscious experience.

So I’m in agreement with Nietzsche in the sense that we can’t be certain of any particular interpretation of our conscious experience, including whether or not there is an ego or a self (which Nietzsche actually describes as a childish superstition similar to the idea of a soul), nor can we make any certain inferences about the world’s causal relations stemming from that conscious experience.  Regardless of these limitations, we can still be sure that conscious experience exists, even if it can’t be ascribed to an “I” or a “self” or any particular identity (let alone a persistent identity).

Once we’re cognizant of this certainty, and if we’re able to crawl out of the well of solipsism and eventually build up a theory about reality (for pragmatic reasons at the very least), then we must remain aware of the priority of consciousness in any resultant theory we construct about reality, with regard to its structure or any of its other properties.  Personally, I believe that some form of naturalistic physicalism (a realistic physicalism) is the best candidate for an ontological theory that is the most parsimonious and explanatory for all that we experience in our reality.  However, most people that make the move to adopt some brand of physicalism seem to throw the baby out with the bathwater (so to speak), whereby consciousness gets eliminated by assuming it’s an illusion or that it’s not physical (therefore having no room for it in a physicalist theory, aside from its neurophysiological attributes).

Although I used to feel differently about consciousness (and its relationship to physicalism), where I thought it was plausible for it to be some kind of an illusion, upon further reflection I’ve come to realize that this was a rather ridiculous position to hold.  Consciousness can’t be an illusion in the proper sense of the word, because the experience of consciousness is real.  Even if I’m hallucinating, where my perceptions don’t directly correspond with the actual incoming sensory information transduced through my body’s sensory receptors, then we can only say that the perceptions are illusory insofar as they don’t directly map onto that incoming sensory information.  But I still can’t say that having these experiences is itself an illusion.  And this is because consciousness is self-evident and experiential in that it constitutes whatever is experienced no matter what that experience consists of.

As for my thoughts on physicalism, I came to realize that positing consciousness as an intrinsic property of at least some kinds of physical material (analogous to a property like mass) allows us to avoid having to call consciousness non-physical.  If it is simply an experiential property of matter, that doesn’t negate its being a physical property of that matter.  It may be that we can’t access this property in such a way as to evaluate it with external instrumentation, like we can for all the other properties of matter that we know of such as mass, charge, spin, or what-have-you, but that doesn’t mean an experiential property should be off limits for any physicalist theory.  It’s just that most physicalists assume that everything can or has to be reducible to the externally accessible properties that our instrumentation can measure.  And this suggests that they’ve simply assumed that the physical can only include externally accessible properties of matter, rather than both internally and externally accessible properties of matter.

Now it’s easy to see why science might push philosophy in this direction because its methodology is largely grounded on third-party verification and a form of objectivity involving the ability to accurately quantify everything about a phenomenon with little or no regard for introspection or subjectivity.  And I think that this has caused many a philosopher to paint themselves into a corner by assuming that any ontological theory underlying the totality of our reality must be constrained in the same way that the physical sciences are.  To see why this is an unwarranted assumption, let’s consider a “black box” that can only be evaluated by a certain method externally.  It would be fallacious to conclude that just because we are unable to access the inside of the box, that the box must therefore be empty inside or that there can’t be anything substantially different inside the box compared to what is outside the box.

We can analogize this limitation of studying consciousness with our ability to study black holes within the field of astrophysics, where we’ve come to realize that accessing any information about their interior (aside from a few global quantities such as mass, charge, and angular momentum) is impossible to do from the outside.  And if we managed to access this information (if there is any) from the inside by leaping past the outer event horizon, it would be impossible for us to escape and share any of it.  The best we can do is to learn what we can from the black hole’s outward behavior, in terms of its interaction with surrounding matter and light, and infer something about the inside, like how much matter it contains (e.g. we can infer the mass of a black hole from its outer surface area).  And we can learn a little bit more by considering what is needed to create or destroy a black hole, thus creating or destroying any interior qualities that may or may not exist.

A black hole can only form from certain configurations of matter, particularly aggregates that are above a certain mass and density.  And it can only be destroyed by starving it to death, by depriving it of any new matter, where it will slowly die by evaporating entirely into Hawking radiation, thus destroying anything that was on the inside in the process.  So we can infer that any internal qualities it does have, however inaccessible they may be, can be brought into and out of existence with certain physical processes.
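To make the “outside-only” character of this knowledge concrete, here is a small numeric sketch (my own illustration using the standard Schwarzschild-radius and Hawking-temperature formulas, not something from the original post):

```python
import math

# Standard physical constants (CODATA values).
G = 6.674_30e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.997_924_58e8         # speed of light, m/s
HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s
K_B = 1.380_649e-23        # Boltzmann constant, J/K
M_SUN = 1.988_47e30        # one solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon radius r_s = 2GM/c^2: the outward face set by mass alone."""
    return 2 * G * mass_kg / C**2

def hawking_temperature(mass_kg: float) -> float:
    """T_H = hbar*c^3 / (8*pi*G*M*k_B): a starved black hole slowly evaporates."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

r = schwarzschild_radius(M_SUN)   # horizon radius for one solar mass, meters
t = hawking_temperature(M_SUN)    # Hawking temperature for one solar mass, kelvin
```

For a solar-mass black hole, the horizon radius comes out near 3 km and the Hawking temperature near 6×10⁻⁸ K, far colder than the cosmic microwave background, which is why such a hole currently absorbs more than it evaporates and why “starving” it is the only route to its destruction.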

Similarly, we can infer some things about consciousness by observing one’s external behavior, including inferring some conditions that can create, modify, or destroy that type of consciousness, but we are unable to know what it’s like to be on the inside of that system once it exists.  We’re only able to know about the inside of our own conscious system, where we are in some sense inside our own black hole, with nobody else able to access this perspective.  And I think it is easy enough to imagine that certain configurations of matter simply have an intrinsic, externally inaccessible experiential property, just as certain configurations of matter lead to the creation of a black hole with its own externally inaccessible and qualitatively unknown internal properties.  The fact that we can’t access the black hole’s interior with a strictly external method, to determine its internal properties, doesn’t mean we should assume that whatever properties may exist inside it are fundamentally non-physical, just as we wouldn’t consider alternate dimensions (such as those predicted in M-theory/string theory) that we can’t physically access to be non-physical.  Perhaps one or more of these inaccessible dimensions (if they exist) is what accounts for an intrinsic experiential property within matter (though this is entirely speculative and need not be true for the previous points to hold, it’s an interesting thought nevertheless).

Here’s a relevant quote from the philosopher Galen Strawson, where he outlines what physicalism actually entails:

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an ‘inference to the best explanation’… Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. ‘And were the inmost essence of things laid open to us’ I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism… So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of ‘substance dualism’… Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?
While I don’t believe that all matter has a mind per se, because a mind is generally conceived as being a complex information processing structure, I think it is likely that all matter has an experiential quality of some kind, even if most instantiations of it are entirely unrecognizable as what we’d generally consider to be “consciousness” or “mentality”.  I believe that the intuitive gap here is born from the fact that the only minds we are confident exist are those instantiated by a brain, which has the ability to make an incredibly large number of experiential discriminations between various causal relations, thus giving it a capacity that is incredibly complex and not observed anywhere else in nature.  On the other hand, a particle of matter on its own would be hypothesized to have the capacity to make only one or two of these kinds of discriminations, making it unintelligent and thus incapable of brain-like activity.  Once a person accepts that an experiential quality can come in varying degrees from say one experiential “bit” to billions or trillions of “bits” (or more), then we can plausibly see how matter could give rise to systems that have a vast range in their causal power, from rocks that don’t appear to do much at all, to living organisms that have the ability to store countless representations of causal relations (memory) allowing them to behave in increasingly complex ways.  And perhaps best of all, this approach solves the mind-body problem by eliminating the mystery of how fundamentally non-experiential stuff could possibly give rise to experientiality and consciousness.