“The Bounds of Subjectivity”

The world as it really is
Unknowable, despite conviction
We hope for objectivity
And yet we see prediction

Perception is the mind’s best guess, to make some sense of all the mess
Expectations frame the lens, ontology, each one depends

Controlled hallucination
To see what we want to see
Perception is not a mirror
But the bounds of subjectivity

Our minds are but a model, sensations flowing, open throttle
A story written, a narrative, to which high credence we will give

Grounded on abstraction
Toward emotion reason shouts
Embodied filters blinding us
Wishful thinking wins the bout

Behold the power of intuition, as certainty comes to fruition
Housed in our unconscious mind, yet fallible we ought to find

Beliefs entangled with desire
Can I hold this view of mine?
Search for reasons to confirm
True or not, we all assign

Not all views have equal merit, unable to know, unless we share it
Reducing that complexity, except when over-vexed we’ll be

Worlds are shaped by what we want
In how we act and how we view
Rigid ways take hold of us
The old interprets all the new

Emotions are the reason’s master, ignoring this will bring disaster
Hume was right about the passions, finding reasons is our fashion

When evidence begins to mount
Against a highly prized belief
Minds can change, a last resort
From dissonance, we seek relief

Ignore the proof, for it can’t be! Or change your views for harmony
Highlight all coincidence, though it lacks significance

Must I believe what’s likely true?
Not if I can find a way!
A means to cover my own eyes
Truth be damned, emotions stray

Coincidence we seem to find, memories, tricks of the mind
‘Tis the frequency illusion, we’re falling prey to this delusion

The path of least resistance
Always tempting to the end
Sacrificing truth for self
How far the mind can bend

A marvel of our evolution, the ego fights its dissolution
Fallacies run far and wide, despite the logic by our side

Remember what you must
Your world is up to you
Conveniently forget the rest
And false becomes the true

Few will try to face the truth
Combat the bias, critique the “I”
Only the bravest make attempts
By far the most would rather die

By far the most would rather lie
To themselves, to everyone
Confirmation biases
From human nature, we try to run

You can run but you can’t hide!
Our biases remain
But evidence has verified
There’s knowledge we can gain


Irrational Man: An Analysis (Part 3, Chapter 9: Heidegger)

In my previous post of this series on William Barrett’s Irrational Man, I explored some of Nietzsche’s philosophy in more detail.  Now we’ll be taking a look at the work of Martin Heidegger.

Heidegger was perhaps the most complex of the existentialist philosophers: his work is interpreted in a number of different ways, and he develops a great deal of terminology and concepts that can be difficult to grasp.  There’s also a fundamental limit to fully engaging with the concepts he uses unless one is fluent in German, due to what is otherwise lost in translation to English.

Phenomenology, the study of the structures of our conscious experience, was a central part of Heidegger’s work and strongly influenced his distinction between what he called “calculative thought” and “meditative thought”; Barrett alludes to this distinction when he first mentions Heidegger’s feelings on thought and reason.  Heidegger says:

“Thinking only begins at the point where we have come to know that Reason, glorified for centuries, is the most obstinate adversary of thinking.”

Now at first one might be confused by the assertion that reason is somehow opposed to thinking, but if one understands that Heidegger is only referring to one of the two types of thinking outlined above, then we can begin to make sense of what he’s saying.  Calculative thought is what he has in mind here: the kind of thinking we use all the time for planning and investigating in order to accomplish some specific purpose and achieve our goals.  Meditative thought, on the other hand, involves opening up one’s mind so that one isn’t consumed by a single perspective on an idea; it requires us not to limit our thinking to only one category of ideas, and to at least temporarily free our thinking from the kind of unity that makes everything seemingly fit together.

Meaning is more or less hidden behind calculative thought, and thus meditative thought is needed to help uncover that meaning.  Calculative thinking also involves a lot of external input from society and from the cultural or technological constructs our lives are embedded in, whereas meditative thinking is internal, prioritizing the self over the collective and creating a self-derived form of truth and meaning rather than one that is externally imposed on us.  By thinking about reason’s relation to what Heidegger calls meditative thought in particular, we can understand why he would treat reason as an adversary to what he sees as a far more valuable form of “thinking”.

It would be mistaken, however, to interpret this as meaning that Heidegger is some kind of irrationalist, because he still values thinking even though he redefines what kinds of thinking exist:

“Heidegger is not a rationalist, because reason operates by means of concepts, mental representations, and our existence eludes these.  But he is not an irrationalist either.  Irrationalism holds that feeling, or will, or instinct are more valuable and indeed more truthful than reason-as in fact, from the point of view of life itself, they are.  But irrationalism surrenders the field of thinking to rationalism and thereby secretly comes to share the assumptions of its enemy.  What is needed is a more fundamental kind of thinking that will cut under both opposites.”

Heidegger’s intention seems to be to carve out a space for thinking that connects to the most basic elements of our experience and to the grounding of all our experience.  He thinks that we’ve almost completely submerged our lives in reason, or calculative thinking, and that this has caused us to lose our connection with what he calls Being; this is analogous to Kierkegaard’s claim that reason has estranged us from faith:

“Both Kierkegaard and Nietzsche point up a profound dissociation, or split, that has taken place in the being of Western man, which is basically the conflict of reason with the whole man.  According to Kierkegaard, reason threatens to swallow up faith; Western man now stands at a crossroads forced to choose either to be religious or to fall into despair…Now, the estrangement from Being itself is Heidegger’s central theme.”

Where or when did this estrangement from our roots first come to be?  Many may be tempted to say it was coincident with the onset of modernity, but perhaps it happened long before that:

“Granted that modern man has torn himself up by his roots, might not the cause of this lie farther back in his past than he thinks?  Might it not, in fact, lie in the way in which he thinks about the most fundamental of all things, Being itself?”

At this point, it’s worth looking at Heidegger’s overall view of man and seeing how it might be colored by his own upbringing in southern Germany:

“The picture of man that emerges from Heidegger’s pages is of an earth-bound, time-bound, radically finite creature-precisely the image of man we should expect from a peasant, in this case a peasant who has the whole history of Western philosophy at his fingertips.”

So Heidegger was in an interesting position: he wanted to go back to one of the most basic questions ever asked in philosophy, but to do so with knowledge of all the philosophy that had been developed since, which he hoped would give him a useful outline of where that development went wrong.

1. Being

“He has never ceased from that single task, the “repetition” of the problem of Being: the standing face to face with Being as did the earliest Greeks.  And on the very first pages of Being and Time he tells us that this task involves nothing less than the destruction of the whole history of Western ontology-that is, of the way the West has thought about Being.”

Sometimes the only way to make progress in philosophy is through a paradigm shift of some kind, where the most basic assumptions underlying our theories of the world and our existence are called into question; and we may end up having to completely discard what we thought we knew and start over, in order to follow a new path and discover a new way of looking at the problem we first began with.

For Heidegger, the problem all comes down to how we look at beings versus being (or Being), and in particular how little attention we’ve given the latter:

“Now, it is Heidegger’s contention that the whole history of Western thought has shown an exclusive preoccupation with the first member of these pairs, with the thing-which-is, and has let the second, the to-be of what is, fall into oblivion.  Thus that part of philosophy which is supposed to deal with Being is traditionally called ontology-the science of the thing-which-is-and not einai-logy, which would be the study of the to-be of Being as opposed to beings…What it means is nothing less than this: that from the beginning the thought of Western man has been bound to things, to objects.”

And it’s the priority we’ve given to positing and categorizing various types of beings that has distracted us from really contemplating the grounding for any and all beings, Being itself.  It’s more or less been taken for granted, or seen as too tenuous or insubstantial to deserve any attention.  We can attribute this lack of attention toward Being, at least in part, to how we’ve understood it as contingent on the existence of particular objects:

“Once Being has been understood solely in terms of beings, things, it becomes the most general and empty of concepts: ‘The first object of the understanding,’ says St. Thomas Aquinas, ‘that which the intellect conceives when it conceives of anything.’ “

Aquinas seems to have pointed out the obvious here: we can’t easily think of the concept of Being without thinking of beings, and once we’ve thought of beings, we tend to think of their fact of being as adding nothing useful to our conception of the beings themselves (similar to Kant’s claim that existence is not a real predicate, adding nothing to the concept of a thing).  Heidegger wants to turn this common assumption on its head:

“Being is not an empty abstraction but something in which all of us are immersed up to our necks, and indeed over our heads.  We all understand the meaning in ordinary life of the word ‘is,’ though we are not called upon to give a conceptual explanation of it.  Our ordinary human life moves within a preconceptual understanding of Being, and it is this everyday understanding of Being in which we live, move, and have our Being that Heidegger wants to get at as a philosopher.”

Since we live our lives with an implicit understanding of Being, which permeates everything in our experience, it seems reasonable that we can turn our attention toward it and build up a more explicit conceptualization and understanding of it.  To do this carefully, without building in too many abstract assumptions or uncertain inferences, we need to make special use of phenomenology.  Heidegger turns to Edmund Husserl’s work to get this project off the ground.

2. Phenomenology and Human Existence

“Instead of making intellectual speculations about the whole of reality, philosophy must turn, Husserl declared, to a pure description of what is…Heidegger accepts Husserl’s definition of phenomenology: he will attempt to describe, he says, and without any obscuring preconceptions, what human existence is.”

And this means that we need to try to analyze our experience without importing the plethora of assumptions and ideas that we’ve been taught about our existence.  We have to attempt a kind of phenomenological reduction, suspending our judgment about the way the world works, which includes leaving aside (at least temporarily) most if not all of science and its various theories pertaining to how reality is structured.

Heidegger also makes many references to language in his work and, perhaps sharing a common thread with Wittgenstein’s views, dissects language to get at what we really mean by certain words, revealing important nuances that are taken for granted and showing how our words and concepts constrain our thinking to some degree:

“Heidegger’s perpetual digging at words to get at their hidden nuggets of meaning is one of his most exciting facets…(The thing for itself) will reveal itself to us, he says, only if we do not attempt to coerce it into one of our ready-made conceptual strait jackets.”

Truth is another concept that needs to be reformulated for Heidegger.  He first points out that the Greek word for truth, aletheia, means “un-hiddenness” or “revelation”, and so he believes that truth lies in uncovering what has been hidden from us.  This goes against our usual view of “truth”, where we most often think of it as a status ascribed to propositions that correspond to actual facts about the world.  Since propositions can’t exist without minds, modern conceptions of truth are limited in a number of ways, and Heidegger wants to find a way around this:

“…truth is therefore, in modern usage, to be found in the mind when it has a correct judgment about what is the case.  The trouble with this view is that it cannot take account of other manifestations of truth.  For example, we speak of the “truth” of a work of art.  A work of art in which we find truth may actually have in it no propositions that are true in this literal sense.  The truth of a work of art is in its being a revelation, but that revelation does not consist in a statement or group of statements that are intellectually correct.  The momentous assertion that Heidegger makes is that truth does not reside primarily in the intellect, but that, on the contrary, intellectual truth is in fact a derivative of a more basic sense of truth.”

This more basic sense of truth will be explored more later on, but the important takeaway is that truth for Heidegger seems to involve the structure of our experience and of the beings within that experience; and it lends itself toward a kind of epistemological relativism, depending on how a human being interprets the world they’re interacting with.

On the other hand, if we think about truth as a matter of deciding what we should trust or doubt in terms of what we’re experiencing, we may wind up in a position of solipsism similar to Descartes’, where we come to doubt the “external world” altogether:

“…Descartes’ cogito ergo sum moment, is the point at which modern philosophy, and with it the modern epoch, begins: man is locked up in his own ego.  Outside him is the doubtful world of things, which his science has taught him are really not the least like their familiar appearances.  Descartes got the external world back through a belief in God, who in his goodness would not deceive us into believing that this external world existed if it really did not.  But the ghost of subjectivism (and solipsism too) is there and haunts the whole of modern philosophy.”

While Descartes pulled himself out of solipsism through the added assumption of another (benevolent) mind that was responsible for any and all experience, I think there’s a far easier and less ad hoc solution to this problem.  First, we need to realize that we wouldn’t be able to tell the difference between an experience of an artificial or illusory external reality and one that was truly real, independent of our conscious experience; this means that solipsism is entirely indeterminable, and so it’s almost pointless to worry about.  Second, we don’t feel that we are the sole authors of every aspect of our experience anyway; if we were, there would be nothing unpredictable or uncertain in any part of our experience, and we’d know everything that will ever happen before it happens, without so much as an inkling of surprise or confusion.

This is most certainly not the case, which means we can be confident that we are not the sole authors of our experience, and that there really is a reality, a causal structure, independent of our own will and expectations.  So it’s conceptually simpler to define reality in the broadest sense of the term, as nothing more or less than what we experience at any point in time.  And because we experience what appear to be other beings just like ourselves, each appearing to have their own experiences of reality, it’s more natural and intuitive to treat them as such, regardless of whether they’re really figments of our imagination or products of some unknown higher-level reality like The Matrix.

Heidegger does something similar here, where he views our being in the “external” world as essential to who we are, and thus he doesn’t think we can separate ourselves from the world, like Descartes himself did as a result of his methodological skepticism:

“Heidegger destroys the Cartesian picture at one blow: what characterizes man essentially, he says, is that he is Being-in-the-world.  Leibniz had said that the monad has no windows; and Heidegger’s reply is that man does not look out upon an external world through windows, from the isolation of his ego: he is already out-of-doors.  He is in the world because, existing, he is involved in it totally.”

This is fundamentally different from the traditional conception of an ego or self that may or may not exist in a world; within this conception of human beings, one can’t treat the ego as separable from the world it’s embedded in.  One way to look at this is to imagine that we were separated from the world forever, and then ask ourselves whether the mode of living that remains (if any) in the absence of any world is still fundamentally human.  It’s an interesting thought experiment to imagine ourselves as a disembodied mind “residing” within an infinite void of nothingness.  But if this were really done for eternity, and not simply for a short while after which we knew we could return to the world we left behind, then we’d have no more interactions or relations with any objects or other beings, and no mode of living at all.  I think it’s perfectly reasonable to conclude that in this case we would have lost something we need in order to truly be a human being; anyone who fails to realize this has simply failed to grasp the conditions called for in the thought experiment.

Heidegger drives this point further by describing existence, or more appropriately our Being, as analogous to a non-localized field in theoretical physics:

“Existence itself, according to Heidegger, means to stand outside oneself, to be beyond oneself.  My Being is not something that takes place inside my skin (or inside an immaterial substance inside that skin); my Being, rather, is spread over a field or region which is the world of its care and concern.  Heidegger’s theory of man (and of Being) might be called the Field Theory of Man (or the Field Theory of Being) in analogy with Einstein’s Field Theory of Matter, provided we take this purely as an analogy; for Heidegger would hold it a spurious and inauthentic way to philosophize to derive one’s philosophic conclusions from the highly abstract theories of physics.  But in the way that Einstein took matter to be a field (a magnetic field, say)- in opposition to the Newtonian conception of a body as existing inside its surface boundaries-so Heidegger takes man to be a field or region of Being…Heidegger calls this field of Being Dasein.  Dasein (which in German means Being-there) is his name for man.”

So we can see our state of Being (which Heidegger calls Dasein) as one that’s spread out over a number of different interactions and relations we have with other beings in the world.  In some sense we can think of the sum total of all the actions, goals, and beings that we concern ourselves with as constituting our entire field of Being.  And each being, especially each human being, shares this general property of Being even though each of those beings will be embedded within their own particular region of Being with their own set of concerns (even if these sets overlap in some ways).

What I find really fascinating is Heidegger’s dissolution of the subject-object distinction:

“That Heidegger can say everything he wants to say about human existence without using either “man” or “consciousness” means that the gulf between subject and object, or between mind and body, that has been dug by modern philosophy need not exist if we do not make it.”

I mentioned Wittgenstein earlier because I think Heidegger’s philosophy has a few parallels with his.  For instance, Wittgenstein treats language and meaning as inherently connected to how we use words and concepts, and thus as dependent on the contextual relations between how or what we communicate and what our goals are, as opposed to treating words like categorical objects with strict definitions and well-defined semantic boundaries.  For Wittgenstein, language can’t be separated from our use of it in the world, even though we often treat it as if it can be (and assuming it can is what often creates problems in philosophy); this is similar to how Heidegger describes human beings as inherently inseparable from our extension into the world, as manifested in our many relations and concerns within that world.

Heidegger’s reference to Being as some kind of field or region of concern can be illustrated well by considering how a child comes to learn their own name (or to respond to it anyway):

“He comes promptly enough at being called by name; but if asked to point out the person to whom the name belongs, he is just as likely to point to Mommy or Daddy as to himself-to the frustration of both eager parents.  Some months later, asked the same question, the child will point to himself.  But before he has reached that stage, he has heard his name as naming a field or region of Being with which he is concerned, and to which he responds, whether the call is to come to food, to mother, or whatever.  And the child is right.  His name is not the name of an existence that takes place within the envelope of his skin: that is merely the awfully abstract social convention that has imposed itself not only on his parents but on the history of philosophy.  The basic meaning the child’s name has for him does not disappear as he grows older; it only becomes covered over by the more abstract social convention.  He secretly hears his own name called whenever he hears any region of Being named with which he is vitally involved.”

This is certainly an interesting interpretation of this facet of child development.  We might presume that the current social conventions, which eventually attach our name to our body, arise out of a number of pragmatic considerations, such as wanting a label for each conscious agent that makes decisions and behaves in ways that affect other conscious agents.  And because our regions of Being overlap in some ways but not in others (i.e. some of my concerns, though not all of them, are your concerns too), it also makes sense to differentiate between all the different “sets of concerns” that comprise each human being we interact with.  But the way we tend to do this is by abstracting the sets of concerns, localizing them in time and space as “mine” and “yours” rather than as they truly exist in the world.

Nevertheless, we could (in principle anyway) use an entirely different convention where a person’s name is treated as a representation of a non-localized set of causal structures and concerns, almost as if their body extended in some sense to include all of this, where each concern or facet of their life is like a dynamic appendage emanating out from their most concentrated region of Being (the body itself).  Doing so would markedly change the way we see one another, the way we see objects, etc.

Perhaps our state of consciousness can mislead us into forgetting about the non-locality of our field of Being, because our consciousness is spatially constrained to one locus of experience; and perhaps this is the reason why philosophers going all the way back to the Greeks have linked existence directly with our propensity to reflect in solitude.  But this solitary reflection also misleads us into forgetting the average “everydayness” of our existence, where we are intertwined with our community like a sheep directed by the rest of the herd:

“…none of us is a private Self confronting a world of external objects.  None of us is yet even a Self.  We are each simply one among many; a name among the names of our schoolfellows, our fellow citizens, our community.  This everyday public quality of our existence Heidegger calls “the One.”  The One is the impersonal and public creature whom each of us is even before he is an I, a real I.  One has such-and-such a position in life, one is expected to behave in such-and-such a manner, one does this, one does not do that, etc.  We exist thus in a state of “fallen-ness”, according to Heidegger, in the sense that we are as yet below the level of existence to which it is possible for us to rise.  So long as we remain in the womb of this externalized and public existence, we are spared the terror and the dignity of becoming a Self.”

Heidegger’s concept of fallen-ness illustrates our primary mode of living: sometimes we think of ourselves as individuals striving toward our personal, self-authored goals, but most of the time we’re succumbing to society’s expectations of who we are, what we should become, and how we ought to judge the value of our own lives.  We shouldn’t be surprised by this fact, as we’ve evolved as a social species, which means our inclination toward a herd mentality is in some sense instinctual and therefore becomes the path of least resistance.  But because of our complex cognitive and behavioral versatility, we can also consciously rise above this collective limitation; and even when we don’t strive to, our position within the safety of the collective can be torn away from us by the chaos that rains down on us from life’s many contingencies:

“But…death and anxiety (and such) intrude upon this fallen state, destroy our sheltered position of simply being one among many, and reveal to us our own existence as fearfully and irremediably our own.  Because it is less fearful to be “the One” than to be a Self, the modern world has wonderfully multiplied all the devices of self-evasion.”

It is only by rising out of this state of fallenness, that is, by our rescuing some measure of true individuality, that Heidegger thinks we can live authentically.  But in any case:

“Whether it be fallen or risen, inauthentic or authentic, counterfeit copy or genuine original, human existence is marked by three general traits: 1) mood or feeling; 2) understanding; 3) speech.  Heidegger calls these existentialia and intends them as basic categories of existence (as opposed to more common ones like quantity, quality, space, time, etc.).”

Heidegger’s use of these traits in describing our existence shouldn’t be confused with some set of internal mental states; rather, we need to include them within a conception of Being, or Dasein, as a field that permeates the totality of our existence.  So when we consider the first trait, mood or feeling, we should think of each human being as being a mood rather than simply having a mood.  According to Heidegger, it is through moods that we feel a sense of belonging to the world; if we had no mood at all, we wouldn’t find ourselves in a world at all.

It’s also important to understand that he doesn’t see moods as some kind of state of mind as we would typically take them to be, but rather that all states of mind presuppose this sense of belonging to a world in the first place; they presuppose a mood in order to have the possibilities for any of those states of mind.  And as Barrett mentions, not all moods are equally important for Heidegger:

“The fundamental mood, according to Heidegger, is anxiety (Angst); he does not choose this as primary out of any morbidity of temperament, however, but simply because in anxiety this here-and-now of our existence arises before us in all its precarious and porous contingency.”

It seems that he finds anxiety to be fundamental in part because it is a mood that opens us up to see Being for what it really is: a field of existence centered around the individual rather than society and its norms and expectations; and this perspective largely results from the fact that when we’re in anxiety all practical significance melts away and the world as it was no longer seems to be relevant.  Things that we took for granted now become salient and there’s a kind of breakdown in our sense of everyday familiarity.  If what used to be significant and insignificant reverse roles in this mood, then we can no longer misinterpret ourselves as some kind of entity within the world (the world of “Them”; the world of externally imposed identity and essence); instead, we finally see ourselves as within, or identical with, our own world.

Heidegger also seems to see anxiety as always there, even if it’s usually covered up (so to speak).  Perhaps it hides most of the time because the fear of being a true Self, and of facing our individuality with the finitude, contingency, and responsibility that go along with it, is often too much to bear; so we blind ourselves by shifting into a number of other, less fundamental moods that enhance the significance of the usual day-to-day world, even while the possibility for Being to “reveal itself” to us lies just below the surface.

If we think of moods as a way of our feeling a sense of belonging within a world, where they effectively constitute the total range of ways that things can have significance for us, then we can see how our moods ultimately affect and constrain our view of the possibilities that our world affords us.  And this is a good segue to consider Heidegger’s second trait of Dasein or human Being, namely that of understanding.

“The “understanding” Heidegger refers to here is not abstract or theoretical; it is the understanding of Being in which our existence is rooted, and without which we could not make propositions or theories that can claim to be “true”.  Whence comes this understanding?  It is the understanding that I have by virtue of being rooted in existence.”

I think Heidegger’s concept of understanding (verstehen) is more or less an intuitive sense of knowing how to manipulate the world in order to make use of it for some goal or other.  By having certain goals, we see the world and the entities in that world colored by the context of those goals.  What we desire then will no doubt change the way the world appears to us in terms of what functionality we can make the most use of, what tools are available to us, and whether we see something as a tool or not.  And even though our moods may structure the space of possibilities that we see the world present to us, it is our realizing what these possibilities are that seems to underlie this form of understanding that Heidegger’s referring to.

We also have no choice but to use our past experience to interpret the world, determine what we find significant and meaningful, and to further drive the evolution of our understanding; and of course this process feeds back in on itself by changing our interpretation of the world which changes our dynamic understanding yet again.  And here we might make the distinction between an understanding of the possible uses for various objects and processes, and the specific interpretation of which possibilities are relevant to the task at hand thereby making use of that understanding.

If I’m sitting at my desk with a cup of coffee and a stack of papers in front of me that need to be sorted, my understanding of these entities affords me a number of possibilities depending on my specific mode of Being, or what my current concerns are at the present moment.  If I’m tired or thirsty or what-have-you, I may want to make use of that cup of coffee to drink from it; but if the window is open and the wind is starting to blow the stack of papers off my desk, I may decide to use my cup of coffee as a paperweight instead of a vessel to drink from.  And if a fly begins buzzing about while I’m sorting my papers and starts to get on my nerves, distracting me from the task at hand, I may decide to temporarily roll up my stack of papers and use it as a fly swatter.  My understanding of all of these objects in terms of their possibilities allows me to interpret an object as a particular possibility that fits within the context of my immediate wants and needs.

I like to think of understanding and interpretation, whether in the traditional sense or in Heidegger’s terms, as ultimately based on our propensity to make predictions about the world and its causal structure.  As a proponent of the predictive processing framework of brain function, I see brains as prediction generators, organizing information in order to predict the causal structure of the world we interact with; and this is done at many different levels of abstraction, so we can think of ourselves as making use of many smaller-scale modules of understanding as well as various larger-scale assemblies of those modules.  In some cases the larger-scale assemblies constrain which smaller-scale modules can be used, and in other cases the smaller-scale modules are prioritized, constraining or directing the shape of the larger-scale assemblies.

Another way to say this is that our brains make use of a number of lower-level and higher-level predictions (any of which may change over time based on new experiences or desires), and depending on the context we find ourselves in, one level may have more or less influence on another.  My lowest-level predictions may include things like what each “pixel” in my visual field is going to be from moment to moment; a slightly higher-level prediction may include what a specific coffee cup looks like, its shape, weight, etc.; higher still are predictions about what coffee cups in general look like, their range of shapes, weights, etc.; and eventually we get to levels of prediction that include what we expect an object can be used for.
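For readers who like to see an idea made concrete, here is a deliberately tiny toy sketch of this hierarchy-of-predictions picture, written in Python with numpy.  It’s entirely my own illustration, not anything Heidegger, Barrett, or any particular predictive processing model specifies; the level sizes, the mappings, and the learning rate are all arbitrary placeholders.  An abstract “belief” at the top generates a prediction of low-level “pixels” through fixed top-down mappings, and the prediction error flows back up to revise the belief:

import numpy as np

rng = np.random.default_rng(0)

# Fixed top-down mappings between levels (arbitrary placeholders,
# scaled down so that the update rule below stays stable).
W_mid = rng.normal(size=(8, 4)) / np.sqrt(4)   # top level (4) -> middle level (8)
W_bot = rng.normal(size=(16, 8)) / np.sqrt(8)  # middle level (8) -> bottom level (16)

top = np.zeros(4)                  # abstract "belief", e.g. what an object is for
sensation = rng.normal(size=16)    # incoming low-level "pixels" to be explained
rate = 0.02                        # how strongly prediction errors revise the belief

for _ in range(500):
    predicted = W_bot @ (W_mid @ top)  # top-down pass: the belief predicts the pixels
    error = sensation - predicted      # prediction error at the lowest level
    # Bottom-up pass: revise the abstract belief in the direction that
    # reduces the squared error (plain gradient descent).
    top += rate * (W_mid.T @ (W_bot.T @ error))

print(float(np.sum(error ** 2)))   # residual error left after the belief settles

Notably, the residual error never reaches zero, since a four-dimensional belief can’t fully capture sixteen dimensions of input; in that limited sense the toy mirrors the idea that our models of the world are best guesses rather than copies of it.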

All of these predictions taken as a whole constitute our understanding of the world and the specific sets of predictions that come into play at any point in time, depending on our immediate concerns and where our attention is directed, constitute our interpretation of what we’re presently experiencing.  Barrett points to the fact that what we consider to be truth in the most primitive sense is this intuitive sense of understanding:

“Truth and Being are thus inseparable, given always together, in the simple sense that a world with things in it opens up around man the moment he exists.  Most of the time, however, man does not let himself see what really happens in seeing.”

This process of the world opening itself up to us, such that we have an immediate sense of familiarity with it, is a very natural and automated one; unless we consciously pull ourselves out of it and try to reflect on it from the outside, we’re just going to take our understanding and interpretation for granted.  But whether we reflect on it or not, this process of understanding and interpretation is fundamental to how we exist in the world, and thus to our Being in the world.

Speech is the last trait of Dasein that Heidegger mentions, and it is intimately connected with the other two traits, mood and understanding.  So what exactly is he referring to by speech?

“Speech:  Language, for Heidegger, is not primarily a system of sounds or of marks on paper symbolizing those sounds.  Sounds and marks upon paper can become language only because man, insofar as he exists, stands within language.”

This is extremely reminiscent of Wittgenstein once again, since Wittgenstein views language in terms of how it permeates our day-to-day lives within a community or social group of some kind.  Language isn’t some discrete set of symbols with fixed meanings, but rather a much more dynamic, context-dependent communication tool.  We primarily use language to accomplish some specific goal or another in our day-to-day lives, and this is a fundamental aspect of how we exist as human beings.  Because language is attached to its use, the meaning of any word or utterance (if any) is derived from its use as well.  Sounds and marks on paper are meaningless unless we first ascribe a meaning to them, based on how they’ve been used in the past and how they’re being used in the present.

And as soon as we pick up language during our childhood development, it becomes automatically associated with our thought processes: we begin to think in our native language, and both real perceptions and figments of our imagination are instantly attached to some linguistic label in our heads.  If I showed you a picture of an elephant, the word elephant would no doubt enter your stream of consciousness, even without my saying the word or asking you what animal you saw in the picture.  Whether our thinking linguistically comes about as a result of having learned a specific language, or whether it’s an innate feature of how our brains function, language permeates our thought as well as our interactions with others in the world.

While we tend to think of language as involving words, sounds, and so forth, Heidegger has an interesting take on what else falls under the umbrella of language:

“Two people are talking together.  They understand each other, and they fall silent-a long silence.  This silence is language; it may speak more eloquently than any words.  In their mood they are attuned to each other; they may even reach down into that understanding which, as we have seen above, lies below the level of articulation.  The three-mood, understanding, and speech (a speech here that is silence)-thus interweave and are one.”

I’m not sure if Heidegger is only referring to silence itself here or if he’s also including body language, which often continues even after we’ve stopped talking and which sometimes speaks better than verbal language ever could.  In silence, we often get a chance to simply read another person in a much more basic and instinctual way, ascertaining their mood, and allowing any language that may have recently transpired to be interpreted differently than if a long silence had never occurred.  Heidegger refers to what happens in silence as an attunement between fellow human beings:

“…Nor is this silence merely a gap in our chatter; it is, rather, the primordial attunement of one existent to another, out of which all language-as sounds, marks, and counters-comes.  It is only because man is capable of such silence that he is capable of authentic speech.  If he ceases to be rooted in that silence all his talk becomes chatter.”

We may, however, also want to consider silence when it is accompanied by action, since our actions speak louder than words, and watching another person do something in the world is a primary way we gain information anyway.  And language can only work within the context of mutual understanding, which necessarily involves action (at some point); so perhaps we could say that action at least partially constitutes the “silence” that Heidegger refers to.

Heidegger’s entire “Field Theory” of Being relies on context, and Barrett mentions this as well:

“…we might just as well call it a contextual theory of Being.  Being is the context in which all beings come to light-and this means those beings as well that are sounds or marks on paper…Men exist “within language” prior to their uttering sounds because they exist within a mutual context of understanding, which in the end is nothing but Being itself.”

I think that we could interpret this as saying that in order for us to be able to interact with one another and with the world, there’s a prerequisite context of meaning and of what matters to us, already in play in order for those interactions to be possible in the first place.  In other words, whether we’re talking to somebody or simply trying to cook our breakfast, all of our actions presuppose some background of understanding and a sense of what’s significant to us.

3. Death, Anxiety, Finitude

Heidegger views the concept of death as an important one especially as it relates to a proper understanding of the concept of Being, and he also sees the most common understanding of death as fundamentally misguided:

“The authentic meaning of death-“I am to die”-is not as an external and public fact within the world, but as an internal possibility of my own Being.  Nor is it a possibility like a point at the end of a road, which I will in time reach.  So long as I think in this way, I still hold death at a distance outside myself.  The point is that I may die at any moment, and therefore death is my possibility now…Hence, death is the most personal and intimate of possibilities, since it is what I must suffer for myself: nobody else can die for me.”

I definitely see the merit in viewing death as an imminent possibility in order to fully grasp the implications it has for how we should be living our lives.  But most people tend to consider death in very abstract terms, where it’s only thought about in the periphery as something that will happen “some day” in the future; and I think we owe this abstraction and partial repression of death to our own fear of it.  In many other cases, this fear of death manifests itself in supernatural beliefs that posit life after death, the resurrection of the dead, immortality, and other forms of magic; and importantly, all of these psychologically motivated tactics prevent one from actually accepting the truth about death as an integral part of our existence as human beings.  By denying the truth about death, one is denying the true value of the life they actually have.  On the other hand, by accepting it as an extremely personal and intimate attribute of ourselves, we can live more authentically.

We can also see that by viewing death in more personal terms, we are pulled out of the world of “Them”, the collective world of social norms and externalized identities, and are better able to connect with our sense of being a Self:

“Only by taking my death into myself, according to Heidegger, does an authentic existence become possible for me.  Touched by this interior angel of death, I cease to be the impersonal and social One among many…and I am free to become myself.”

Thinking about death may make us uncomfortable, but if I know I’m going to die, and could die at any moment, then shouldn’t I prioritize my personal endeavors to reflect this radical contingency?  It’s hard to argue with that conclusion, but even granting it, it’s understandably easy to get lost in our mindless day-to-day routines and superficial concerns, and to lose sight of what we ought to be doing with our limited time.  It’s important to break out of this pattern, or at least to strive to; in doing so we can begin to center our lives around truly personal goals and projects:

“Though terrifying, the taking of death into ourselves is also liberating: It frees us from servitude to the petty cares that threaten to engulf our daily life and thereby opens us to the essential projects by which we can make our lives personally and significantly our own.  Heidegger calls this the condition of “freedom-toward-death” or “resoluteness.”

The fear of death is also unique and telling in the sense that it isn’t really directed at any object but rather it is directed at Nothingness itself.  We fear that our lives will come to an end and that we’ll one day cease to be.  Heidegger seems to refer to this disposition as anxiety rather than fear, since it isn’t really the fear of anything but rather the fear of nothing at all; we just treat this nothing as if it were an object:

“Anxiety is not fear, being afraid of this or that definite object, but the uncanny feeling of being afraid of nothing at all.  It is precisely Nothingness that makes itself present and felt as the object of our dread.”

Accordingly, the concept of Nothingness is integral to the concept of Being, since it’s interwoven in our existence:

“In Heidegger Nothingness is a presence within our own Being, always there, in the inner quaking that goes on beneath the calm surface of our preoccupation with things.  Anxiety before Nothingness has many modalities and guises: now trembling and creative, now panicky and destructive; but always it is as inseparable from ourselves as our own breathing because anxiety is our existence itself in its radical insecurity.  In anxiety we both are and are not, at one and the same time, and this is our dread.”

What’s interesting about this perspective is the apparent asymmetry between existence and non-existence, at least insofar as it relates to human beings.  Non-existence becomes a principal concern for us as a result of our kind of existence, but non-existence itself carries with it no concerns at all.  In other words, non-existence doesn’t refer to anything at all whereas (human) existence refers to everything as well as to nothing (non-existence).  And it seems to me that anxiety and our relation to Nothingness is entirely dependent on our inherent capacity of imagination, where we use imagination to simulate possibilities, and if this imagination is capable of rendering the possibility of our own existence coming to an end, then that possibility may forcefully reorganize the relative significance of all other products of our imagination.

As confusing as it may sound, death is the possibility of ending all possibilities for any extant being.  If imagining such a possibility were not enough to fundamentally change one’s perspective of their own life, then I think that combining it with the knowledge of its inevitability, that this possibility is also a certainty, ought to.  Another interesting feature of our existence is the fact that death isn’t the only possibility that relates to Nothingness, for anytime we imagine a possibility that has not yet been realized or even if we conceive of an actuality that has been realized, we are making reference to that which is not; we are referring to what we are not, to what some thing or other is not and ultimately to that which does not exist (at a particular time and place).  I only know how to identify myself, or any object in the world for that matter, by distinguishing it from what it is not.

“Man is finite because the “not”-negation-penetrates the very core of his existence.  And whence is this “not” derived?  From Being itself.  Man is finite because he lives and moves within a finite understanding of Being.”

As we can see, Barrett describes how, within Heidegger’s philosophy, negation is treated as something that we live and move within, and this makes sense when we consider what identity itself depends on.  The mention of finitude is also interesting, because it’s not simply a limitation in the sense of what we aren’t capable of (e.g. omnipotence, immortality, perfection, etc.); rather, our mode of existence entails making discriminations between what is and what isn’t, between what is possible and what is not, between what is actual and what is not.

4. Time and Temporality; History

The last section on Heidegger concerns the nature of time itself, and it begins by pointing out the relation between negation and our experience of time:

“Our finitude discloses itself essentially in time.  In existing…we stand outside ourselves at once open to Being and in the open clearing of Being; and this happens temporally as well as spatially.  Man, Heidegger says, is a creature of distance: he is perpetually beyond himself, his existence at every moment opening out toward the future.  The future is the not-yet, and the past is the no-longer; and these two negatives-the not-yet and the no-longer-penetrate his existence.  They are his finitude in its temporal manifestation.”

I suppose we could call this projection toward the future the teleological aspect of our Being.  In order to do anything within our existence, we must have a goal or a number of them, and these goals refer to possible future states of existence that can only be confirmed or disconfirmed once our growing past subsumes them as actualities or lost opportunities.  We might even say that the present is somewhat of an illusion (though as far as I know Heidegger doesn’t make this claim), because in a sense we’re always living in the future and in the past: the person we see ourselves as and the person we want to be are a manifestation of both, whereas the present itself seems to be nothing more than a fleeting moment that only references what’s in front of or behind it.

In addition, if we look at how our brain functions from a predictive coding perspective, we can see that it generates predictive models based on our past experiences, and the models it generates are pure attempts to predict the future; nowhere does the present come into play.  The present as we experience it is nothing but a prediction of what is yet to come.  I think this is important to take into account when considering how we experience time and how time should be incorporated into a concept of Being.

Heidegger’s concept of temporality is also tied to the concept of death that we explored earlier.  He sees death as necessary to put our temporal existence into a true human perspective:

“We really know time, says Heidegger, because we know we are going to die.  Without this passionate realization of our mortality, time would be simply a movement of the clock that we watch passively, calculating its advance-a movement devoid of human reasoning…Everything that makes up human existence has to be understood in the light of man’s temporality: of the not-yet, the no-longer, the here-and-now.”

It’s also interesting to consider what I’ve previously referred to as the illusion of persistent identity, an idea that’s been explored by a number of philosophers for centuries: we think of our past self as being the same person as our present or future self.  Now, whether the present is actually an illusion or not is irrelevant to this point; the point is that we think of ourselves as always being there in our memories and at any moment of time.  The fact that we are all born into a society that gives us a discrete and fixed name adds to this illusion of our having an identity that is fixed as well.  But we are not the same person that we were ten years ago, let alone the same person we were as a five-year-old child.  We may share some memories with those previous selves, but even those change over time and are often altered and reconstructed upon recall.  Our values, our ontology, our language, and our primary goals in life are all changing to varying degrees over time, and we mustn’t lose sight of this fact either.  I think this “dynamic identity” is a fundamental aspect of our Being.

We might explain how we’re taken in by the illusion of a persistent identity by noting once again that our identity is fundamentally projected toward the future.  Since that projection changes over time (as our goals change), and since it is a process that relies on a kind of psychological continuity, we might expect to simply focus on the projection itself rather than on what that projection used to be.  Heidegger stresses the importance of the future in our concept of Being:

“Heidegger’s theory of time is novel, in that, unlike earlier philosophers with their “nows,” he gives priority to the future tense.  The future, according to him, is primary because it is the region toward which man projects and in which he defines his own being.  “Man never is, but always is to be,” to alter slightly the famous line of Pope.”

But, since we’re always relying on past experiences in order to determine our future projection, to give it a frame of reference with which we can evaluate that projected future, we end up viewing our trajectory in terms of human history:

“All these things derive their significance from a more basic fact: namely, that man is the being who, however dimly and half-consciously, always understands, and must understand, his own being historically.”

And human history, in a collective sense, also plays a role, since our view of ourselves and of the world is unavoidably influenced by the cultural transmission of ideas stemming from our recent past all the way back to thousands of years ago.  A lot of that influence has emanated from the Greeks especially (as Barrett has mentioned throughout Irrational Man), and it’s had a substantial impact on our conceptualization of Being as well:

“By detaching the figure from the ground the object could be made to emerge into the daylight of human consciousness; but the sense of the ground, the environing background, could also be lost.  The figure comes into sharper focus, that is, but the ground recedes, becomes invisible, is forgotten.  The Greeks detached beings from the vast environing ground of Being.  This act of detachment was accompanied by a momentous shift in the meaning of truth for the Greeks, a shift which Heidegger pinpoints as taking place in a single passage in Plato’s Republic, the celebrated allegory of the cave.”

Through the advent of reason and forced abstraction, the Greeks effectively separated the object from the context it’s normally embedded within.  But if a true understanding of our Being in the world can only be had contextually, then we can see why Heidegger finds a crucial problem with this school of thought, which has propagated itself in Western philosophy ever since Plato.  Even the concept of truth itself underwent a dramatic change as a result of the Greeks discovering and developing the process of reason:

“The quality of unhiddenness had been considered the mark of truth; but with Plato in that passage truth came to be defined, rather, as the correctness of an intellectual judgment.”

And this concept of unhiddenness seems to be a variation of “subjective truth” or “subjective understanding”, although it shouldn’t be confused with modern subjectivism; in this case the unhiddenness would naturally precipitate from whatever coherency and meaning is immediately revealed to us through our conscious experience.  I think the redefining of truth had less of an impact on philosophy than Heidegger may have thought; the problem wasn’t the redefinition per se, but rather that contextual understanding and subjective experience itself lost the level of importance they once had.  And I’m sure Heidegger would agree that this loss of importance, for both subjectivity and a holistic understanding of human existence, has led to a major shift in our view of the world, one that has likely contributed to some of the psychological pathologies sprouting up in our age of modernity.

Heidegger thought that allowing Being to reveal itself to us was analogous to an artist letting the truth reveal itself naturally:

“…the artist, as well as the spectator, must submit patiently and passively to the artistic process, that he must lie in wait for the image to produce itself; that he produces false notes as soon as he tries to force anything; that, in short, he must let the truth of his art happen to him?  All of these points are part of what Heidegger means by our letting Being be.  Letting it be, the artist lets it speak to him and through him; and so too the thinker must let it be thought.”

He seems to be saying that modern ways of thinking are too forceful in the sense of our always prioritizing the subject-object distinction in our way of life.  We’ve let the obsessive organization of our lives and the various beings within those lives drown out our appreciation of, and ability to connect to, Being itself.  I think that another way we could put this is to say that we’ve split ourselves off from the sense of self that has a strong connection to nature and this has disrupted our ability to exist in a mode that’s more in tune with our evolutionary psychology, a kind of harmonious path of least resistance.  We’ve become masters over manipulating our environment while simultaneously losing our innate connection to that environment.  There’s certainly a lesson to be learned here even if the problem is difficult to articulate.

•      •      •      •      •

This concludes William Barrett’s chapter on Martin Heidegger.  I’ve gotta say that I find Heidegger’s views fascinating, as they illustrate a radically different way of viewing the world and human existence; and for those who take his philosophy seriously, they force a re-evaluation of much of the Western philosophical thought that we’ve been taking for granted, and which has constrained a lot of our thinking.  I think it’s both refreshing and worthwhile to explore some of these novel ideas, as they help us see life through an entirely new lens.  We need a shift in our frame of mind, a willingness to think outside the box, so that we have a better chance of finding new solutions to many of the problems we face in today’s world.  In the next post for this series on William Barrett’s Irrational Man, I’ll be looking at the philosophy of Jean-Paul Sartre.

Irrational Man: An Analysis (Part 3, Chapter 7: Kierkegaard)

Part III – The Existentialists

In the last post in this series on William Barrett’s Irrational Man, we examined how existentialism was influenced by a number of poets and novelists including several of the Romantics and the two most famous Russian authors, namely Dostoyevsky and Tolstoy.  In this post, we’ll be entering part 3 of Barrett’s book, and taking a look at Kierkegaard specifically.

Chapter 7 – Kierkegaard

Søren Kierkegaard is often considered to be the first existentialist philosopher, and so naturally Barrett begins his exploration of individual existentialists here.  Kierkegaard was a brilliant man with a broad range of interests; he was a poet as well as a philosopher, exploring theology and religion (Christianity in particular), morality and ethics, and various aspects of human psychology.  The fact that he was also a devout Christian can be seen throughout his writings, where he attempted to delineate his own philosophy of religion with a focus on the concepts of faith and doubt, and on his disdain for organized forms of religion.  Perhaps the most influential core of his work is the focus on individuality, subjectivity, and the lifelong search to truly know oneself.  In many ways, Kierkegaard paved the way for modern existentialism, and he did so with a kind of poetic brilliance that made clever use of both irony and metaphor.

Barrett describes Kierkegaard’s arrival in human history as the onset of an ironic form of intelligence intent on undermining itself:

“Kierkegaard does not disparage intelligence; quite the contrary, he speaks of it with respect and even reverence.  But nonetheless, at a certain moment in history this intelligence had to be opposed, and opposed with all the resources and powers of a man of brilliant intelligence.”

Kierkegaard did in fact value science as a methodology and as an enterprise, and he also saw the importance of objective knowledge; but he strongly believed that the most important kind of knowledge or truth was that which was derived from subjectivity; from the individual and their own concrete existence, through their feelings, their freedom of choice, and their understanding of who they are and who they want to become as an individual.  Since he was also a man of faith, this meant that he had to work harder than most to manage his own intellect in order to prevent it from enveloping the religious sphere of his life.  He felt that he had to suppress the intellect at least enough to maintain his own faith, for he couldn’t imagine a life without it:

“His intellectual power, he knew, was also his cross.  Without faith, which the intelligence can never supply, he would have died inside his mind, a sickly and paralyzed Hamlet.”

Kierkegaard saw faith as of the utmost importance to our particular period in history as well, since Western civilization had effectively become disconnected from Christianity, unbeknownst to the majority of those living in Kierkegaard’s time:

“The central fact for the nineteenth century, as Kierkegaard (and after him Nietzsche, from a diametrically opposite point of view) saw it, was that this civilization that had once been Christian was so no longer.  It had been a civilization that revolved around the figure of Christ, and was now, in Nietzsche’s image, like a planet detaching itself from its sun; and of this the civilization was not yet aware.”

Indeed, in Nietzsche’s The Gay Science, we hear of a madman roaming around one morning who makes such a pronouncement to a group of non-believers in a marketplace:

” ‘Where is God gone?’ he called out.  ‘I mean to tell you!  We have killed him, – you and I!  We are all his murderers…What did we do when we loosened this earth from its sun?…God is dead!  God remains dead!  And we have killed him!  How shall we console ourselves, the most murderous of all murderers?…’  Here the madman was silent and looked again at his hearers; they also were silent and looked at him in surprise.  At last he threw his lantern on the ground, so that it broke in pieces and was extinguished.  ‘I come too early,’ he then said ‘I am not yet at the right time.  This prodigious event is still on its way, and is traveling, – it has not yet reached men’s ears.’ “

Although Nietzsche will be looked at in more detail in part 8 of this post-series, it’s worth briefly mentioning what he was pointing out with this passage.  Nietzsche was highlighting the fact that at this point in our history, after the Enlightenment, we had made a number of scientific discoveries about our world and our place in it, and this had made the concept of God somewhat superfluous.  As a result of the drastic rise in secularization, the proportion of people who didn’t believe in God rose substantially; and because Christianity no longer had the theistic foundation it relied upon, all of the moral systems, traditions, and ultimate meaning derived from Christianity had to be re-established, if not abandoned altogether.

But many people, including a number of atheists, didn’t fully appreciate this fact and simply took for granted many of the cultural constructs that arose from Christianity.  Nietzsche had a feeling that most people weren’t intellectually fit for the task of re-grounding their values and finding meaning in a Godless world, and so he feared that nihilism would begin to dominate Western civilization.  Kierkegaard had similar fears, but as a man who refused to shake his own belief in God and in Christianity, he felt it was his imperative to try and revive the Christian faith that he saw in severe decline.

1.  The Man Himself

“The ultimate source of Kierkegaard’s power over us today lies neither in his own intelligence nor in his battle against the imperialism of intelligence-to use the formula with which we began-but in the religious and human passion of the man himself, from which the intelligence takes fire and acquires all its meaning.”

Aside from the fact that Kierkegaard’s own intelligence was primarily shaped and directed by his passion for love and for God, I tend to believe that intelligence is in some sense always subservient to the aims of one’s desires.  And if desires themselves are derivative of, or at least intimately connected to, feeling and passion, then a person’s intelligence is going to be heavily guided by subjectivity; not only in terms of the basic drives that attempt to lead us to a feeling of homeostasis but also in terms of the psychological contentment resulting from that which gives our lives meaning and purpose.

For Kierkegaard, the only way to truly discover the meaning of one’s own life, or to truly know oneself, is to endure the painful burden of choice, a burden that eventually eliminates a number of possibilities and creates a number of actualities.  One must make certain significant choices in life such that, once those choices are made, they cannot be unmade; and every time this is done, one is increasingly committing oneself to being (or to becoming) a very specific self.  And Kierkegaard thought that renewing one’s choices daily, in the sense of freely maintaining a commitment to them for the rest of one’s life (rather than simply forgetting that those choices were made and moving on to the next one), was the only way to give those choices any meaning, let alone any prolonged meaning.  Barrett mentions the relation of choice, possibility, and reality as it was experienced by Kierkegaard:

“The man who has chosen irrevocably, whose choice has once and for all sundered him from a certain possibility for himself and his life, is thereby thrown back on the reality of that self in all its mortality and finitude.  He is no longer a spectator of himself as a mere possibility; he is that self in its reality.”

As can be seen with Kierkegaard’s own life, some of these choices (such as breaking off his engagement with Regine Olsen) can be hard to live with, but the pain and suffering experienced in our lives is still ours; it is still a part of who we are as an individual and further affirms the reality of our choices, often adding an inner depth to our lives that we may otherwise never attain.

“The cosmic rationalism of Hegel would have told him his loss was not a real loss but only the appearance of loss, but this would have been an abominable insult to his suffering.”

I have to agree with Kierkegaard here that the felt experience one has is as real as anything ever could be.  To say otherwise, that is, to negate the reality of this or that felt experience, is to deny the reality of any felt experience whatsoever; for they all precipitate from the same subjective currency of our own individual consciousness.  We can certainly distinguish between subjective reality and objective reality, for example, by evaluating which aspects of our experience can and cannot be verified through a third party or through some kind of external instrumentation and measurement.  But this distinction only helps us in terms of fine-tuning our ability to make successful predictions about our world by better understanding its causal structure; it does not help us determine what is real and what is not real in the broadest sense of the term.  Reality is quite simply what we experience; nothing more and nothing less.

2. Socrates and Hegel; Existence and Reason

Barrett gives us an apt comparison between Kierkegaard and Socrates:

“As the ancient Socrates played the gadfly for his fellow Athenians stinging them into awareness of their own ignorance, so Kierkegaard would find his task, he told himself, in raising difficulties for the easy conscience of an age that was smug in the conviction of its own material progress and intellectual enlightenment.”

And both philosophers certainly made a lasting impact with their use of the Socratic method; Socrates having first promoted this method of teaching and argumentation, and Kierkegaard making heavy use of it in his first major work, Either/Or, and in other works.

“He could teach only by example, and what Kierkegaard learned from the example of Socrates became fundamental for his own thinking: namely, that existence and a theory about existence are not one and the same, any more than a printed menu is as effective a form of nourishment as an actual meal.  More than that: the possession of a theory about existence may intoxicate the possessor to such a degree that he forgets the need of existence altogether.”

And this problem, of not being able to see the forest for the trees, plagues many people who become distracted by the details uncovered while they’re simply trying to better understand the world.  But living a life of contentment and deeply understanding life in general are two very different things, and you can’t invest more in one without withdrawing time and effort from the other.  Having said that, there’s still some degree of overlap between these two goals; contentment isn’t likely to be maximally realized without some degree of in-depth understanding of the life you’re living and the experiences you’ve had.  As Socrates famously put it: “the unexamined life is not worth living.”  You just don’t want to sacrifice too many of the experiences merely to formulate a theory about them.

How do reason, rationality, and thought relate to any theories that address what is actually real?  Well, the belief in a rational cosmos, such as that held by Hegel, can severely restrict one’s ontology when it comes to the concept of existence itself:

“When Hegel says, “The Real is rational, and the rational is real,” we might at first think that only a German idealist with his head in the clouds, forgetful of our earthly existence, could so far forget all the discords, gaps, and imperfections in our ordinary experience.  But the belief in a completely rational cosmos lies behind the Western philosophic tradition; at the very dawn of this tradition Parmenides stated it in his famous verse, “It is the same thing that can be thought and that can be.”  What cannot be thought, Parmenides held, cannot be real.  If existence cannot be thought, but only lived, then reason has no other recourse than to leave existence out of its picture of reality.”

I think what Parmenides said would have been more accurate (if not correct) had he swapped a single term, replacing “thought” with “consciously experienced”: “It is the same thing that can be consciously experienced and that can be.”  Rephrasing it as such allows for a much more inclusive conception of what is considered real, since anything that is within the possibility of conscious experience is given an equal claim to being real in some way or another; which means that all the irrational thoughts or feelings, and the gaps and lack of coherency in some of our experiences, are all taken into account as a part of reality.  What else could we mean by saying that existence must be lived, other than the fact that existence must be consciously experienced?  If we mean to include the actions we take in the world, and thus the bodily behavior we enact, this is still included in our conscious experience and in the experiences of other conscious agents.  So once the entire experience of every conscious agent is taken into account, what else could there possibly be aside from this that we can truly say is necessary in order for existence to be lived?

It may be that existence and conscious experience are not identical, even if they’re always coincident with one another; but this is in part contingent on whether all matter and energy in the universe is conscious.  If consciousness is actually universal, then perhaps existence and some kind of experientiality or other (whether primitive or highly complex) are in fact identical with one another.  And if not, then it is still the existence of those entities which are conscious that should dominate our consideration, since that is the only kind of existence that can actually be lived at all.

If we reduce our window of consideration from all conscious experience down to merely the subset of reason or rationality within that experience, then of course there’s going to be a problem with structuring theories about the world within such a limitation; for anything that doesn’t fit within its purview simply disappears:

“As the French scientist and philosopher Emile Meyerson says, reason has only one means of accounting for what does not come from itself, and that is to reduce it to nothingness…The process is still going on today, in somewhat more subtle fashion, under the names of science and Positivism, and without invoking the blessing of Hegel at all.”

Although, in order to be charitable to science, we ought to consider the upsurge of interest and development in the scientific fields of psychology and cognitive science in the decades following the writing of Barrett’s book; for consciousness and the many aspects of our psyche and our experience have taken on a much more important role in terms of what we value in science and the kinds of phenomena we’re trying so hard to understand better.  If psychology and the cognitive and neurosciences include the entire goings-on of our brain and overall subjective experience, and if science is attempting to account for all the information that is processed therein, then science should no longer be considered cut off from, or exclusive of, that which lies outside of reason, rationality, or logic.

We mustn’t confuse or conflate reason with science even if science includes the use of reason, for science can investigate the unreasonable; it can provide us with ways of making more and more successful predictions about the causal structure of our experience even as they relate to emotions, intuition, and altered or transcendent states of consciousness like those stemming from meditation, religious experiences or psycho-pharmacological substances.  And scientific fields like moral and positive psychology are also better informing us of what kinds of lifestyles, behaviors, character traits and virtues lead to maximal flourishing and to the most fulfilling lives.  So one could even say that science has been serving as a kind of bridge between reason and existence; between rationality and the other aspects of our psyche that make life worth living.

Going back to the relation between reason and existence, Barrett mentions the conceptual closure of the former to the latter as argued by Kant:

“Kant declared, in effect, that existence can never be conceived by reason-though the conclusions he drew from this fact were very different from Kierkegaard’s.  “Being”, says Kant, “is evidently not a real predicate, or concept of something that can be added to the concept of a thing.”  That is, if I think of a thing, and then think of that thing as existing, my second concept does not add any determinate characteristic to the first…So far as thinking is concerned, there is no definite note or characteristic by which, in a concept, I can represent existence as such.”

And yet, we somehow manage to distinguish between the world of imaginary objects and that of non-imaginary objects (at least most of the time); and we do this using the same physical means of perception in our brain.  I think it’s true to say that both imaginary and non-imaginary objects exist in some sense; for both exist physically as representations in our brains such that we can know them at all, even if they differ in terms of whether the representations are likely to be shared by others (i.e. how they map onto what we might call our external reality).

If I conceive of an apple sitting on a table in front of me, and then I conceive of an apple sitting on a table in front of me that you would also agree is in fact sitting on the table in front of me, then I’ve distinguished conceptually between an imaginary apple and one that exists in our external reality.  And since I can’t ever be certain whether or not I’m hallucinating that there’s an actual apple on the table in front of me (or any other aspect of my experienced existence), I must accept that the common thread of existence, in terms of what it really means for something to exist or not, is entirely grounded on its relation to (my own) conscious experience.  It is entirely grounded on our individual perception; on the way our own brains make predictions about the causes of our sensory input and so forth.

“If existence cannot be represented in a concept, he says (Kierkegaard), it is not because it is too general, remote, and tenuous a thing to be conceived of but rather because it is too dense, concrete, and rich.  I am; and this fact that I exist is so compelling and enveloping a reality that it cannot be reproduced thinly in any of my mental concepts, though it is clearly the life-and-death fact without which all my concepts would be void.”

I actually think Kierkegaard was closer to the mark than Kant was, for he claimed that it was not so much that reason reduces existence to nothingness, but rather that existence is so tangible, rich and complex that reason can’t fully encompass it.  This makes sense insofar as reason operates through the principle of reduction, abstraction, and the dissecting of a holistic experience into parts that relate to one another in a certain way in order to make sense of that experience.  If the holistic experience is needed to fully appreciate existence, then reason alone isn’t going to be up to the task.  But reason also seems to unify our experiences, and if this unification presupposes existence in order to make sense of that experience then we can’t fully appreciate existence without reason either.

3. Aesthetic, Ethical, Religious

Kierkegaard lays out three primary stages of living in his philosophy: namely, the aesthetic, the ethical, and the religious.  While there are different ways to interpret this “stage theory”, the most common interpretation treats these stages like a set of concentric circles or spheres where the aesthetic is in the very center, the ethical contains the aesthetic, and the religious subsumes both the ethical and the aesthetic.  Thus, for Kierkegaard, the religious is the most important stage that, in effect, supersedes the others; even though the religious doesn’t eliminate the other spheres, since a religious person is still capable of being ethical or having aesthetic enjoyment, and since an ethical person is still capable of aesthetic enjoyment, etc.

In a previous post where I analyzed Kierkegaard’s Fear and Trembling, I summarized these three stages as follows:

“…The aesthetic life is that of sensuous or felt experience, infinite potentiality through imagination, hiddenness or privacy, and an overarching egotism focused on the individual.  The ethical life supersedes or transcends this aesthetic way of life by relating one to “the universal”, that is, to the common good of all people, to social contracts, and to the betterment of others over oneself.  The ethical life, according to Kierkegaard, also consists of public disclosure or transparency.  Finally, the religious life supersedes the ethical (and thus also supersedes the aesthetic) but shares some characteristics of both the aesthetic and the ethical.

The religious, like the aesthetic, operates on the level of the individual, but with the added component of the individual having a direct relation to God.  And just like the ethical, the religious appeals to a conception of good and evil behavior, but God is the arbiter in this way of life rather than human beings or their nature.  Thus the sphere of ethics that Abraham might normally commit himself to in other cases is thought to be superseded by the religious sphere, the sphere of faith.  Within this sphere of faith, Abraham assumes that anything that God commands is Abraham’s absolute duty to uphold, and he also has faith that this will lead to the best ends…”

The aesthete generally tends to live life in search of pleasure, always fleeing from boredom in an ever-increasing fit of desperation to find more moments of pleasure, despite the futility of such an unsustainable goal.  But Kierkegaard also claims that the aesthete includes the intellectual who tries to stand outside of life, detached from it, viewing it only as a spectator rather than a participant, and categorizing each experience as either interesting or boring, and nothing more.  And it is this speculative detachment from life, long an underpinning of Western thought, that Kierkegaard objected to; an objection that would be maintained within the rest of existential philosophy that was soon to come.

If the aesthetic is unsustainable, or if someone (such as Kierkegaard) has given up such a life of pleasure, then all that remain are the other spheres of life.  For Kierkegaard, the ethical life, at least on its own, seemed to be insufficient for making up what was lost in the aesthetic:

“For a really passionate temperament that has renounced the life of pleasure, the consolations of the ethical are a warmed-over substitute at best.  Why burden ourselves with conscience and responsibility when we are going to die, and that will be the end of it?  Kierkegaard would have approved of the feeling behind Nietzsche’s saying, ‘God is dead, everything is permitted.’ ” 

One conclusion we might arrive at, after considering Kierkegaard’s attitude toward the ethical life, is that he may have made a mistake when he renounced the life of pleasure entirely.  Another conclusion we might draw is that Kierkegaard’s conception of the ethical is incomplete or misguided.  If he honestly asks, in the hypothetical absence of God or any option for a religious life, why we ought to burden ourselves with conscience and responsibility, this seems to betray a fundamental flaw in his moral and ethical reasoning.  The same goes for Nietzsche, and for Dostoyevsky for that matter, both of whom echoed similar sentiments: “If God does not exist, then everything is permissible.”

As I’ve argued elsewhere (here, and here), the best form any moral theory can take (such that it’s also sufficiently motivating to follow) is going to be centered around the individual, and it will be grounded on the hypothetical imperative that maximizes their personal satisfaction and life fulfillment.  If some behaviors serve toward best achieving this goal and other behaviors detract from it, as a result of our human psychology, sociology, and thus as a result of the finite range of conditions that we thrive within as human beings, then regardless of whether a God exists or not, everything is most certainly not permitted.  Nietzsche was right, however, when he claimed that the “death of God” (so to speak) would require a means of re-establishing a ground for our morals and values; but this doesn’t mean that all possible grounds for doing so have an equal claim to being true, nor will they all be equally efficacious in achieving one’s primary moral objective.

Part of the problem with Kierkegaard’s moral theorizing is his adoption of Kant’s universal maxim for ethical duty: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”  The problem is that this maxim or “categorical imperative” (the way it’s generally interpreted at least) doesn’t take individual psychological and circumstantial idiosyncrasies into account.  One can certainly insert these idiosyncrasies into Kant’s formulation, but that’s not how Kant intended it to be used nor how Kierkegaard interpreted it.  And yet, Kierkegaard seems to smuggle in such exceptions anyway, by having incorporated it into his conception of the religious way of life:

“An ethical rule, he says, expresses itself as a universal: all men under such-and-such circumstances ought to do such and such.  But the religious personality may be called upon to do something that goes against the universal norm.”

And if something goes against a universal norm, and one feels that they ought to do it anyway (above all else), then they are implicitly denying that a complete theory of ethics involves exclusive universality (a categorical imperative); rather, it must require taking some kind of individual exceptions into account.  Kierkegaard seems to be prioritizing the individual in all of his philosophy, and yet he didn’t think that a theory of ethics could plausibly account for such prioritization.

“The validity of this break with the ethical is guaranteed, if it ever is, by only one principle, which is central to Kierkegaard’s existential philosophy as well as to his Christian faith-the principle, namely, that the individual is higher than the universal. (This means also that the individual is always of higher value than the collective).”

I completely agree with Kierkegaard that the individual is always of higher value than the collective; and this can be shown by the failure of any utilitarian moral theory that forgets to take into account the psychological states of the individual actors and moral agents carrying out some plan to maximize happiness or utility for the greatest number of people.  If the imperative comes down to my having to do something such that I can no longer live with myself afterward, then the moral theory has failed miserably.  Instead, I should feel that I did the right thing in any moral dilemma (when thinking clearly and while maximally informed of the facts), even when every available option is less than optimal.  We have to be able to live with our own actions, and ultimately with the kind of person that those actions have made us become.  The concrete self that we alone have conscious access to takes priority over any abstract conception of other selves, or any kind of universality that references only a part of our being.

“Where then as an abstract rule it commands something that goes against my deepest self (but it has to be my deepest self, and herein the fear and trembling of the choice reside), then I feel compelled out of conscience-a religious conscience superior to the ethical-to transcend that rule.  I am compelled to make an exception because I myself am an exception; that is, a concrete being whose existence can never be completely subsumed under any universal or even system of universals.”

Although I agree with Kierkegaard here for the most part, in terms of giving the utmost importance to the individual self, I don’t think that a religious conscience is something one can distinguish from one’s conscience generally; it is simply one’s conscience, altered by a set of religious beliefs.  Those beliefs may change how one’s conscience operates, but it remains one’s conscience nevertheless.

For example, if two people differ with respect to their belief in souls, where one has a religious belief that a fertilized egg has a soul and the other believes that only beings with a capacity for consciousness or a personality have souls (or that there are no souls at all), then that difference in belief may affect their consciences differently if both parties were to, for example, donate a fertilized egg to a group of stem-cell researchers.  The former may feel guilty afterward (knowing that the egg they believe to be inhabited by a soul may be destroyed), whereas the latter may feel really good about themselves for aiding important life-saving medical research.  This is why one can only make a proper moral assessment when taking seriously the distinction between what is truly factual about the world (what is supported by evidence) and what is a cherished religious or otherwise supernatural belief.  If one’s conscience is primarily operating on true, evidenced facts about the world (along with their intuition), then their conscience and what some may call their religious conscience should refer to the exact same thing.

Despite the fact that viable moral theories should be centered around the individual rather than the collective, this doesn’t mean that we shouldn’t try and come up with sets of universal moral rules that ought to be followed most of the time.  For example, a rule like “don’t steal from others” is a good rule to follow most of the time, and if everyone in a society strives to obey such a rule, that society will be better off overall as a result; but if my child is starving and nobody is willing to help in any way, then my only option may be to steal a loaf of bread in order to prevent the greater moral crime of letting a child starve.

This example should highlight a fundamental oversight regarding Kant’s categorical imperative as well: you can build in any number of exceptions within a universal law to make it account for personal circumstances and even psychological idiosyncrasies, thus eliminating the kind of universality that Kant sought to maintain.  For example, if I willed it to be a universal law that “you shouldn’t take your own life,” I could make this better by adding an exception to it so that the law becomes “you shouldn’t take your own life unless it is to save the life of another,” or even more individually tailored to “you shouldn’t take your own life unless not doing so will cause you to suffer immensely or inhibit your overall life satisfaction and life fulfillment.”  If someone has a unique life history or psychological predisposition whereby taking their own life is the only option available to them lest they suffer needlessly, then they ought to take their own life regardless of whatever categorical imperative they are striving to uphold.

There is, however, still an important reason for adopting Kant’s universal maxim (at least generally speaking): universal laws like the Golden Rule provide people with an easy heuristic to quickly ascertain the likely moral status of any particular action or behavior.  If we try to tailor in all sorts of idiosyncratic exceptions (with respect to ourselves as well as others), the rules become much more complicated and harder to remember; instead, one should use the universal rules most of the time, and only when one sees a legitimate reason to question a rule should one consider whether an exception ought to be made.

Another important point regarding ethical behavior or moral dilemmas is the factor of uncertainty in our knowledge of a situation:

“But even the most ordinary people are required from time to time to make decisions crucial for their own lives, and in such crises they know something of the “suspension of the ethical” of which Kierkegaard writes.  For the choice in such human situations is almost never between a good and an evil, where both are plainly as such and the choice therefore made in all the certitude of reason; rather it is between rival goods, where one is bound to do some evil either way, and where the ultimate outcome and even-or most of all-our own motives are unclear to us.  The terror of confronting oneself in such a situation is so great that most people panic and try to take cover under any universal rule that will apply, if only it will save them from the task of choosing themselves.”

And rather than making these tough decisions themselves, a lot of people would prefer for others to tell them what’s right and wrong, getting these answers from a religion, from their parents, from a community, and so on.  But this negates the intimate consideration of the individual, for whom each of these difficult choices will lead down a particular path in life and shape who they become as a person.  The same fear drives a lot of people away from critical thinking; many would prefer to be told what’s true and false rather than think about these things for themselves, and so they gravitate toward institutions that claim to “have all the answers” (even if many who fear critical thinking wouldn’t explicitly admit this, since it is primarily a manifestation of the unconscious).  Kierkegaard highly valued these difficult moments of choice and thought they were fundamental to being a true self living an authentic life.

But despite the fact that universal ethical rules are convenient and fairly effective in most cases, they are still far from perfect; so one will eventually find oneself in a situation where one simply doesn’t know which rule to use or what to do, and one will just have to make the decision that seems easiest to live with:

“Life seems to have intended it this way, for no moral blueprint has ever been drawn up that covers all the situations for us beforehand so that we can be absolutely certain under which rule the situation comes.  Such is the concreteness of existence that a situation may come under several rules at once, forcing us to choose outside any rule, and from inside ourselves…Most people, of course, do not want to recognize that in certain crises they are being brought face to face with the religious center of their existence.”

Now I wouldn’t call this the religious center of one’s existence but rather the moral center of one’s existence; the point is simply that we’re distinguishing between universal moral prescriptions (which Kierkegaard labels “the ethical”) and those that are non-universal or dynamic (which Kierkegaard labels “the religious”).  In any case, one can call this whatever one wishes, as long as one understands the distinction being made here, which is an important one.  And along with acknowledging this distinction between universal and individual moral consideration, it’s also important that we engage with the world in a way where our individual emotional “palette” is faced head on rather than denied or suppressed by society or its universal conventions.

Barrett mentions how the denial of our true emotions (brought about by modernity) has inhibited our connection to the transcendent, or at least, inhibited our appreciation or respect for the causal forces that we find to be greater than ourselves:

“Modern man is farther from the truth of his own emotions than the primitive.  When we banish the shudder of fear, the rising of the hair of the flesh in dread, or the shiver of awe, we shall have lost the emotion of the holy altogether.”

But beyond acknowledging our emotions, Kierkegaard has something to say about how we choose to cope with them, most especially that of anxiety and despair:

“We are all in despair, consciously or unconsciously, according to Kierkegaard, and every means we have of coping with this despair, short of religion, is either unsuccessful or demoniacal.”

And here is another place where I have to part ways with Kierkegaard, despite his brilliance in examining the human condition and the many complicated aspects of our psychology; for relying on religion (or more specifically, relying on religious belief) to cope with despair is but another distraction from the truth of our own existence.  It is an inauthentic way of living life, since one is avoiding the way the world really is.  I think it’s far more effective and authentic for people to work on changing their attitude toward life and the circumstances they find themselves in, without sacrificing a reliable epistemology in the process.  We need to provide ourselves with avenues for emotional expression, work to increase our mindfulness and positivity, and constantly strive to become better versions of ourselves by finding things we’re passionate about and by living a philosophically examined life.  This doesn’t mean that we have to rid ourselves of the rituals, fellowship, and meditative benefits that religion offers; rather, we should merely dispense with the supernatural component and the dogma and irrationality that are typically attached to and promoted by religion.

4. Subjective and Objective Truth

Kierkegaard ties his conception of the religious life to the meaning of truth itself, and he distinguishes this mode of living from the concept of religious belief.  While one can assimilate a number of religious beliefs just as one can with non-religious beliefs, for Kierkegaard, religion itself is something else entirely:

“If the religious level of existence is understood as a stage upon life’s way, then quite clearly the truth that religion is concerned with is not at all the same as the objective truth of a creed or belief.  Religion is not a system of intellectual propositions to which the believer assents because he knows it to be true, as a system of geometry is true; existentially, for the individual himself, religion means in the end simply to be religious.  In order to make clear what it means to be religious, Kierkegaard has to reopen the whole question of the meaning of truth.”

And his distinction between objective and subjective truth is paramount to understanding this difference.  One could say perhaps that by subjective truth he is referring to a truth that must be embodied and have an intimate relation to the individual:

“But the truth of religion is not at all like (objective truth): it is a truth that must penetrate my own personal existence, or it is nothing; and I must struggle to renew it in my life every day…Strictly speaking, subjective truth is not a truth that I have, but a truth that I am.”

The struggle for renewal goes back to Kierkegaard’s conception of how the meaning a person finds in their life is in part dependent on some kind of personal commitment; it relies on making certain choices that one remakes day after day, keeping these personal choices in focus so as to not lose sight of the path we’re carving out for ourselves.  And so it seems that he views subjective truth as intimately connected to the meaning we give our lives, and to the kind of person that we are now.

Perhaps another way we can look at this conception, especially as it differs from objective truth, is to examine the relation between language, logic, and conscious reasoning on the one hand, and intuition, emotion, and the unconscious mind on the other.  Objective truth is generally communicable, it makes explicit predictions about the causal structure of reality, and it involves a way of unifying our experiences into some coherent ensemble; but subjective truth involves felt experience, emotional attachment, and a more automated sense of familiarity and relation between ourselves and the world.  And both of these facets are important for our ability to navigate the world effectively while also feeling that we’re psychologically whole or complete.

5. The Attack Upon Christendom

Kierkegaard points out an important shift in modern society, one that Barrett mentioned early on in this book; namely, the move toward mass society, which has effectively eaten away at our individuality:

“The chief movement of modernity, Kierkegaard holds, is a drift toward mass society, which means the death of the individual as life becomes ever more collectivized and externalized.  The social thinking of the present age is determined, he says, by what might be called the Law of Large Numbers: it does not matter what quality each individual has, so long as we have enough individuals to add up to a large number-that is, to a crowd or mass.”

And false metrics of success like economic growth and population growth have definitely detracted from the quality of life each of us is capable of achieving.  Because of our inclinations as a social species, we are (perhaps unconsciously) drawn toward the potential survival benefits brought about by joining progressively larger and larger groups.  In terms of industrialization, we’ve used technology primarily to support more people on the globe and to increase the output of each individual worker (to benefit the wealthiest) rather than to substantially reduce the number of hours worked per week or to eliminate poverty outright.  This has got to be the biggest failure of the industrial revolution and of capitalism (when not regulated properly), and one that’s so often taken for granted.

Because of the greed that’s consumed the moral compass of those at the top of our sociopolitical hierarchy, our lives have been funneled into a military-industrial complex that will only release us from our servitude when the rich eventually stop demanding more and more of the fruits of our labor.  And through the push of marketing and social pressure, we’re tricked into wanting to maintain society the way it is; to continue buying consumable garbage that we don’t really need, and to be complacent with such a lifestyle.  The massive externalization of our psyche has led to a kind of, as Barrett put it earlier, spiritual poverty.  And Kierkegaard was well aware of this psychological degradation brought on by modernity’s unchecked collectivization.

Both Kierkegaard and Nietzsche also saw a problem with how modernity had effectively killed God; though Kierkegaard and Nietzsche differed in their attitudes toward organized religion, Christianity in particular:

“The Grand Inquisitor, the Pope of Popes, relieves men of the burden of being Christian, but at the same time leaves them the peace of believing they are Christians…Nietzsche, the passionate and religious atheist, insisted on the necessity of a religious institution, the Church, to keep the sheep in peace, thus putting himself at the opposite extreme from Kierkegaard; Dostoevski in his story of the Grand Inquisitor may be said to embrace dialectically the two extremes of Kierkegaard and Nietzsche.  The truth lies in the eternal tension between Christ and the Grand Inquisitor.  Without Christ the institution of religion is empty and evil, but without the institution as a means of mitigating it the agony in the desert of selfhood is not viable for most men.”

Modernity had helped produce organized religion, thus diminishing the personal, individualistic dimension of spirituality which Kierkegaard saw as indispensable; but modernity also facilitated the “death of God” making even organized religion increasingly difficult to adhere to since the underlying theistic foundation was destroyed for many.  Nietzsche realized the benefits of organized religion since so many people are unable to think critically for themselves, are unable to find an effective moral framework that isn’t grounded on religion, and are unable to find meaning or stability in their lives that isn’t grounded on belief in God or in some religion or other.  In short, most people aren’t able to deal with the burdens realized within existentialism.

Because much of European culture and so many of its institutions had been built around Christianity, the religion became much more collectivized and less personal; but this also made Christianity more vulnerable to being uprooted by new ideas that were given power by the very same collective.  Thus, it was the mass externalization of religion that made religion that much more accessible to the externalization of reason, as exemplified by scientific progress and technology.  Reason and religion could no longer co-exist in the same way they once had, because the externalization drastically reduced our ability to compartmentalize the two.  And this made it that much harder to try and re-establish a more personal form of Christianity, as Kierkegaard had been longing for.

Reason also led us to the realization of our own finitude, and once this view was taken more seriously, it created yet another hurdle for the masses to maintain their religiosity; for once death is seen as inevitable, and immortality accepted as an impossibility, one of the most important uses for God becomes null and void:

“The question of death is thus central to the whole of religious thought, is that to which everything else in the religious striving is an accessory: ‘If there is no immortality, what use is God?’ “

The fear of death is a powerful motivator for adopting any number of beliefs that might help to manage the cognitive dissonance that results from it, and the desire for eternal happiness makes death that much more frightening.  But once death is truly accepted, many of the other beliefs that were meant to provide comfort or consolation are no longer necessary or meaningful.  Kierkegaard didn’t think we could cope with the despair of an inevitable death without a religion that promised some way to overcome it, to transcend our life and existence in this world; and he thought Christianity was the best religion to accomplish this goal.

It is on this point in particular, his failure to properly deal with death, that I find Kierkegaard lacking an important strength as an existentialist; for it seems that in order to live an authentic life, a person must accept death, that is, the finitude of our individual human existence.  As Heidegger would later go on to say, buried deep within the idea of death as the possibility of impossibility is a basis for affirming one’s own life, through the acceptance of our mortality and the uncertainty that surrounds it.


Irrational Man: An Analysis (Part 2, Chapter 5: “Christian Sources”)

In the previous post in this series on William Barrett’s Irrational Man, we explored some of the Hebraistic and Hellenistic contributions to the formation and onset of existentialism within Western thought.  In this post, I’m going to look at chapter 5 of Barrett’s book, which takes a look at some of the more specifically Christian sources of existentialism in the Western tradition.

1. Faith and Reason

We should expect there to be some significant overlap between Christianity and Hebraism, since the former was a dissenting sect of the latter.  One concept worth looking at is the dichotomy of faith versus reason, where in Christianity the man of faith is elevated far above the man of reason.  Even though faith is heavily prized within Hebraism as well, there are still some differences between the two belief systems as they relate to faith, with these differences stemming from a number of historical contingencies:

“Ancient Biblical man knew the uncertainties and waverings of faith as a matter of personal experience, but he did not yet know the full conflict of faith with reason because reason itself did not come into historical existence until later, with the Greeks.  Christian faith is therefore more intense than that of the Old Testament, and at the same time paradoxical: it is not only faith beyond reason but, if need be, against reason.”

Once again we are brought back to Plato, and his concept of rational consciousness and the explicit use of reason; a concept that had never before been articulated or developed.  Christianity no longer had reason hiding under the surface or homogeneously mixed-in with the rest of our mental traits; rather, it was now an explicit and specific way of thinking and processing information that was well known and that couldn’t simply be ignored.  Not only did the Christian have to hold faith above this now-identified capacity of thinking logically, but they also had to explicitly reject reason if it contradicted any belief that was grounded on faith.

Barrett also mentions here the problem of trying to describe the concept of faith in a language of reason, and how the opposition between the two is particularly relevant today:

“Faith can no more be described to a thoroughly rational mind than the idea of colors can be conveyed to a blind man.  Fortunately, we are able to recognize it when we see it in others, as in St. Paul, a case where faith had taken over the whole personality.  Thus vital and indescribable, faith partakes of the mystery of life itself.  The opposition between faith and reason is that between the vital and the rational-and stated in these terms, the opposition is a crucial problem today.”

I disagree with Barrett to some degree here, as I don’t think it’s possible to recognize a quality in others (let alone a quality that others can recognize as well) that is entirely incommunicable.  On the contrary, if you and I can both recognize some quality, then we should be able to describe it, even if only in some rudimentary way.  This doesn’t mean that I think a description in words can do justice to every conception imaginable, but rather that, as human beings, we have an inherent ability to communicate fairly well that which is involved in common modes of living and in our shared experiences; even very complex concepts with somewhat fuzzy boundaries.

It may be that a thoroughly rational mind will not appreciate faith in any way, nor think it of any use, nor have had any personal experience with it; but that doesn’t mean such a mind can’t understand the concept, or what a mode of living dominated by faith would look like.  I also don’t think it’s fair to say that the opposition between faith and reason is that between the vital and the rational, even though this may be a real view for some.  If by vital, Barrett is alluding to what is essential to our psyche, then I don’t think he can be talking about faith, at least not in the sense of faith as belief without good reason.  But I’m willing to concede his point if instead by vital he is simply referring to an attitude that is spirited, vibrant, energetic, and thus involving some lively form of emotional expression.

He delves a little deeper into the distinction between faith and reason when he says:

“From the point of view of reason, any faith, including the faith in reason itself, is paradoxical, since faith and reason are fundamentally different functions of the human psyche.”

It seems that Barrett is guilty of a subtle equivocation here between two uses of the word faith.  I’d actually go so far as to say that the two uses of the word are best described in the way that Barrett himself alluded to in the last chapter: faith as trust and faith as belief.  On the one hand, we have faith as belief, where some belief or set of beliefs is maintained without good reason; and on the other hand, we have faith as trust where, in the case of reason, our trust in reason is grounded on the fact that its use leads to successful predictions about the causal structure of our experience.  So there isn’t really any paradox at all when it comes to “faith” in reason, because such a faith need not involve adopting some belief without good reason; rather there is, at most, a trust in reason based on the demonstrable and replicable success it has provided us in the past.

Barrett is right, however, when he says that faith and reason are fundamentally different functions of the human psyche.  But this doesn’t mean that they are isolated from one another: for example, if one believes that having faith in God will help them to secure some afterlife in heaven, and if one also desires an afterlife in heaven, then the use of reason can actually be employed to reinforce a faith in God (as it did for me, back when I was a Christian).  This doesn’t mean that faith in God is reasonable, but rather that when certain beliefs are isolated or compartmentalized in the right way (even in the case of imagining hypothetical scenarios), reason can be used to process a logical relation between them, even if those beliefs are themselves derived from illogical cognitive biases.  To see that this is true, one need only realize that an argument can be logically valid even if its premises are false, and the argument therefore unsound.
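
To make this concrete, here is a toy example of my own (not Barrett’s), built from the hypothetical just described, showing how reason can operate on faith-based premises.  The following argument is valid in form, an instance of modus ponens, regardless of whether its premises are actually true:

P1.  If having faith in God secures an afterlife in heaven, then it is in my interest to have faith in God.
P2.  Having faith in God secures an afterlife in heaven.
C.  Therefore, it is in my interest to have faith in God.

The inference from P1 and P2 to C is logically impeccable; but whether the argument is sound depends entirely on whether P2, a belief typically held on faith rather than evidence, is in fact true.  This is exactly the sense in which reason can reinforce a faith without thereby making the faith itself reasonable.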

To be as charitable as possible with the concept of faith (at least, one that is more broadly construed), I will make one more point that I think is relevant here: having faith as a form of positivity or optimism in one’s outlook on life, given the social and personal circumstances that one finds themselves in, is perfectly rational and reasonable.  It is well known that people tend to be happier and have more fulfilling lives if they steer clear of pessimism and simply try and stay as positive as possible.  One primary reason for this has to do with what I have previously called socio-psychological feedback loops (what I would also call an evidence-based form of Karma).  Thus, one can have an empirically demonstrable good reason to have an attitude that is reminiscent of faith, yet without having to sacrifice an epistemology that is reliable and which consistently tracks on to reality.

When it comes to the motivating factors that lead people to faith, existentialist thought can shed some light on what some of these factors may be.  If we consider Paul the Apostle, for example, whom Barrett mentioned earlier, Barrett also says:

“The central fact for his faith is that Jesus did actually rise from the dead, and so that death itself is conquered-which is what in the end man most ardently longs for.”

The finite nature of man, not only with regard to the intellect but also with respect to our lives in general, is something that existentialism both recognizes and sees as an integral starting point to philosophically build off of.  And I think it should come as no surprise that the fear of death in particular is likely to be a motivating factor for most if not all supernatural beliefs found within any number of religions, including the belief in possibly resurrecting one from the dead (as in the case of St. Paul’s belief in Jesus), the belief in spirits, souls, or other disembodied minds, or any belief in an afterlife (including reincarnation).

We can see that faith is, largely anyway, a means of reducing the cognitive dissonance that results from existential elements of our experience; it is a means of countering or rejecting a number of facts surrounding the contingencies and uncertainties of our existence.  From this fact, we can then see that by examining what faith is compensating for, one can begin to extract what the existentialists eventually elaborated on.  In other words, within existentialism, it is fully accepted that we can’t escape death (for example), and that we have to face death head-on (among other things) in order to live authentically; therefore having faith in overcoming death in any way is simply a denial of at least one burden willingly taken on by the existentialist.

Within early Christianity, we can also see some parallels between an existentialist like Kierkegaard and an early Christian author like Tertullian.  Barrett gives us an excerpt from Tertullian’s De Carne Christi:

“The Son of God was crucified; I am unashamed of it because men must needs be ashamed of it.  And the Son of God died; it is by all means to be believed, because it is absurd.  And He was buried and rose again; the fact is certain because it is impossible.”

This sounds a lot like Kierkegaard, despite the sixteen centuries separating these two figures, and despite the fact that Tertullian was writing when Christianity was first expanding and most aggressive, while Kierkegaard was writing closer to the twilight of Christianity, when it had already drastically receded under the pressure of secularization and modernity.  In this quote of Tertullian’s, we can see a kind of submission to paradox, where it is because a particular proposition is irrational or unreasonable that it becomes an article of faith.  It’s not enough to say that faith is simply used to arrive at a belief that is unreasonable or irrational; rather, its very violation of reason is taken as a reason to employ it and to believe that the claims of faith are in fact true.  So rather than merely seeing this as an example of irrationality, this actually makes for a good example of the kind of anti-rational attitudes that contributed to the existentialist revolt against some dominant forms of positivism; a revolt that didn’t really begin to precipitate until Kierkegaard’s time.

Although anti-rationalism is often associated with existentialism, I think one would be making an error if they equated the two or assumed that anti-rationalism was integral or necessary to existentialism; though it should be said that this is contingent on how exactly one defines both anti-rationalism (and by extension rationalism) and existentialism.  The two terms have a variety of definitions (as does positivism, with Barrett giving us one particularly narrow definition back in chapter one).  If by rationalism one is referring to the very narrow view that rationality is the only means of acquiring knowledge, or a view that the only thing that matters in human thought or in human life is rationality, or even a view that humans are fundamentally or primarily rational beings, then I think it is fair to say that this is largely incompatible with existentialism, since the latter is often defined as a philosophy positing that humans are not primarily rational, and that subjective meaning (rather than rationality) plays a much more primary role in our lives.

However, if rationalism is simply taken as the view that reason and rational thought are integral components (even if not the only components) for discovering much of the knowledge that exists about ourselves and the universe at large, then it is by all means perfectly compatible with most existentialist schools of thought.  But in order to remain compatible, some priority needs to be given to subjective experience in terms of where meaning is derived from, how our individual identity is established, and how we are to direct our lives and inform our system of ethics.

The importance of subjective experience, which became a primary assumption motivating existentialist thought, was also appreciated by early Christian thinkers such as St. Augustine.  In fact, Barrett goes so far as to say that the interiorization of experience and the primary focus on the individual over the collective was unknown to the Greeks and didn’t come about until Christianity had begun to take hold in Western culture:

“Where Plato and Aristotle had asked the question, What is man?, St. Augustine (in the Confessions) asks, Who am I?-and this shift is decisive.  The first question presupposed a world of objects, a fixed natural and zoological order, in which man was included; and when man’s precise place in that order had been found, the specifically differentiating characteristic of reason was added.  Augustine’s question, on the other hand, stems from an altogether different, more obscure and vital center within the questioner himself: from an acutely personal sense of dereliction and loss, rather than from the detachment with which reason surveys the world of objects in order to locate its bearer, man, zoologically within it.”

So rather than looking at man as some kind of well-defined abstraction within a categorical hierarchy of animals and objects, and one bestowed with a unique faculty of rational consciousness, man is looked at from the inside, from the perspective of, and identified as, an embodied individual with an experience that is deeply rooted and inherently emotional.  And in Augustine’s question, we also see an implication that man can’t be defined in a way that the Greeks attempted to do because in asking the question, Who am I?, man became separated from the kind of zoological order that all other animals adhered to.  It is the unique sense of self and the (often foreboding) awareness of this self then, as opposed to the faculty of reason, that Augustine highlighted during his moment of reflection.

Though Augustine was cognizant of the importance of exploring the nature of our personal, human existence (albeit for him, with a focus on religious experience in particular), he also explored aspects of human existence on the cosmic scale.  But when he went down this cosmic road of inquiry, he left the nature of personal lived existence by the wayside and instead followed a Neo-Platonist path of theodicy.  Augustine was after all a formal theologian, and one who tried to come to grips with the problem of evil in the world.  But he employed Greek metaphysics for this task, and tried to use logic mixed with irrational faith-based claims about the benevolence of God, in order to show that God’s world wasn’t evil at all.  Augustine did this by eliminating evil from his conception of existence, such that all evil was simply a lack of being, hence a form of non-being, and therefore non-existent.  This was supposed to be our consolation: ignore any apparent evil in the world because we’re told that it doesn’t really exist; all in the name of trying to justify the cosmos as good and thus to maintain a view that God is good.

This is such a shame in the end, for in Augustine’s attempt at theodicy, he simply downplayed and ignored the abominable and precarious aspects of our existence, which are as real a part of our experience as anything could be.  Perhaps Augustine was motivated to use rationalism here in order to feel comfort and safety in a world where many feel alienated and alone.  But as Barrett says:

“…reason cannot give that security; if it could, faith would be neither necessary nor so difficult.  In the age-old struggle between the rational and the vital, the modern revolt against theodicy (or, equally, the modern recognition of its impossibility) is on the side of the vital…”

And so once again we see that a primary motivation for faith is that many cannot get the consolation they long for through the use of reason.  They can’t psychologically deal with the way the world really is, and so they either use faith to believe the world is some other way or they try to use reason, along with some number of faith-based premises, in evaluating our apparent place in the world.  St. Augustine actually thought that he could harmonize faith and reason (or what Barrett referred to as the vital and the rational) and this set the stage for the next thousand years of Christian thought.  Once faith claims became more explicit in the Church’s various articles of dogma, one was left to exercise rationality as they saw fit, as long as it was within the confines of that faith and dogma.  And this of course led many medieval thinkers to mistake faith for reason (in many cases at least), reinforced by the fact that the use of reason was operating under faith-based premises.

The problematic relation between the vital and the rational didn’t disappear even after many a philosopher assumed the two forces were in complete harmony and agreement; instead the clash resurfaced in a debate between Voluntarism and Intellectualism, with our attention now turned toward St. Thomas Aquinas.  As an intellectualist, St. Thomas argued that the intellect in man is prior to the will of man because the intellect determines the will, since we can only desire what we know.  Duns Scotus, on the other hand, a voluntarist, responded to this claim and said that the will determines what ideas the intellect turns toward, and thus the will ends up determining what the intellect comes to know.

As an aside, I think that Scotus was more likely correct here, because while the intellect may inform the will (and thus help to direct the will), there is something far more fundamental and unconscious at play that drives our engagement with the world as organisms evolved to survive.  For example, we have a curiosity for both novelty (seeking new information) and familiarity (a maximal understanding of incoming information), and basic biologically-driven desires for satisfaction and homeostasis.  These drives simply don’t depend on the intellect even though they can make use of it, and they will continue to operate even if the intellect does not.  I also disagree with St. Thomas’ claim that one can only desire that which one already knows, for one can desire knowledge for its own sake without knowing exactly what knowledge is yet to come (again from an independent drive, which we could call the will), and one can also desire a change in the world from the way one currently knows it to be to some other as yet unspecified way (e.g. to desire a world without a specific kind of suffering, even though one may not know exactly what that would be like, having never lived such a life before).

The crux of the debate between the primacy of the will versus the intellect can perhaps be re-framed accordingly:

“…not in terms of whether will is to be given primacy over the intellect, or the intellect over the will-these functions being after all but abstract fragments of the total man-but rather in terms of the primacy of the thinker over his thoughts, of the concrete and total man himself who is doing the thinking…the fact remains that Voluntarism has always been, in intention at least, an effort to go beyond the thought to the concrete existence of the thinker who is thinking that thought.”

Here again I see the concept of an embodied and evolved organism and a reflective self at play, where the contents of consciousness cannot be taken on their own, nor as primary, but rather they require a conscious agent to become manifest, let alone to have any causal power in the world.

2. Existence vs. Essence

There is a deeper root to the problem underlying the debate between St. Thomas and Scotus, and it has to do with the relation between essence and existence; a relation that Sartre (among others) emphasized as critical to existentialism.  Whereas the essence of a thing is what the thing is (its essential properties, for example), the existence of a thing refers to the brute fact that the thing is.  A very common theme found throughout Sartre’s writings is the basic contention that existence precedes essence (though, for him, this only applies to the case of man).  Barrett gives a very simplified explanation of such a thesis:

“In the case of man, its meaning is not difficult to grasp.  Man exists and makes himself to be what he is; his individual essence or nature comes to be out of his existence; and in this sense it is proper to say that existence precedes essence.  Man does not have a fixed essence that is handed to him ready-made; rather, he makes his own nature out of his freedom and the historical conditions in which he is placed.”

I think this concept also makes sense when we put it into the historical context going back to ancient Greece.  Whereas the Greeks tried to place humanity within a zoological order and categorize us by some essential qualities, the existentialists who later came on the scene rejected such an idea almost as a kind of categorical error.  It was simply not possible to classify human beings as one could classify inanimate objects or other animals, which lack the behavioral complexity and the will to define themselves that we possess.

Barrett also explains that the essence versus existence relation breaks down into two different but related questions:

1) Does existence have primacy over essence, or the reverse?

2) In actual existing things, is there a real distinction between the two?  Or are they merely different points of view that the mind takes toward the same existing thing?

In an attempt at answering these questions, I think it’s reasonable to say that essence has more to do with potentiality (a logically possible set of abstractions or properties) and existence has more to do with actuality (a logically and physically possible past or present state of being).  I also think that existence is necessary in order for any essence to be real in any way, and not because an actual object needs to exist in order to instantiate a physical instance of some specific essence, but because any essence is merely an abstraction created by a conscious agent who inferred it in the first place.  Quite simply, there is no Platonic realm for the essence to “exist” within and so it needs an existing consciousness to produce it.  So I would say that existence has primacy over essence since an existent is needed for an essence to be conceived, let alone instantiated at all.

As for the second question, I think there is a real distinction between the two and that they are different points of view that the mind takes toward the same existing thing.  I don’t see this as an either/or dichotomy.  The mind can certainly acknowledge the existence of some object, animal, or person, without giving much thought to what its essence is (aside from its being distinct enough to consider it a thing), but it can never acknowledge the essence of an actual object, animal, or person without also acknowledging its existence.  There’s also something relevant to be said about how our minds differentiate between an actual perception and an imagined one, even between an actual life and an imagined one, or an actual person and an imagined one; something that we do all the time.

On the other side of this issue, within the debate between essentialism and existentialism, are the questions of whether or not there are any essential qualities of human beings that make them human, and if so, whether or not any of these essential qualities are fixed for human beings.  Most people are familiar with the idea of human nature, whereby there are some biologically-grounded innate predispositions that affect our behavior in some ways.  We are evolved animals after all, and the only way for evolution to work is to have genes providing different phenotypic effects on the body and behavior of a population of organisms.  Since about half of our own genes are used to code for our brain’s development, its basic structure, and functionality, it should seem obvious to anyone that we’re bound to have some set of innate behavioral tendencies as well as some biological limitations on the space of possibilities for our behavior.

Thus, any set of innate behavioral tendencies that are shared by healthy human beings would constitute our human nature.  And furthermore, this behavioral overlap would be a fixed attribute of human nature insofar as the genes coding for that innate behavioral overlap don’t evolve beyond a certain degree.  Human nature is simply an undeniable fact about us grounded in our biology.  If our biology changes enough, then we will eventually cease to be human, or if we still call ourselves “human” at that point in time then we will cease to have the same nature since we will no longer be the same species.  But as long as that biological evolution remains under a certain threshold, we will have some degree of fixity in our human nature.

But unlike most other organisms on earth, human beings have an incredibly complex brain as well as a number of culturally inherited cognitive programs that provide us with a seemingly infinite number of possible behaviors and subsequent life trajectories.  And I think this fact has fueled some of the disagreement between the existentialists and the essentialists.  The essentialists (or many of them at least) have maintained that we have a particular human nature, which I think is certainly true.  But, as per the existentialists, we must also acknowledge just how vast our space of possibilities really is and not let our human nature be the ultimate arbiter of how we define ourselves.  We have far more freedom to choose a particular life project and way of life than one would be led to believe given some preconceived notion of what is essentially human.

3. The Case of Pascal

Science brought about a dramatic change to Western society, effectively carrying us out of the Middle Ages within a century; but this change also created the very environment that made modern Existentialism possible.  Cosmology, for example, led the 17th-century mathematician and theologian Blaise Pascal to say that “The silence of these infinite spaces [outer space] frightens me.”  In this statement he was expressing the general reaction of humanity to the world that science had discovered: a world that made man feel homeless and insignificant.

As a religious man, Pascal was also seeking to reconcile this world discovered by science with his own faith.  This was a difficult task, but he found some direction in looking at the wretchedness of the human condition:

“In Pascal’s universe one has to search much more desperately to find any signposts that would lead the mind in the direction of faith.  And where Pascal finds such a signpost, significantly enough, is in the radically miserable condition of man himself.  How is it that this creature who shows everywhere, in comparison with other animals and with nature itself, such evident marks of grandeur and power is at the same time so feeble and miserable?  We can only conclude, Pascal says, that man is rather like a ruined or disinherited nobleman cast out from the kingdom which ought to have been his (his fundamental premise is the image of man as a disinherited being).”

Personally I see this as indicative of our being an evolved species, albeit one with an incredibly rich level of consciousness and intelligence.  In some sense our minds’ inherent capacities have a potential far beyond what most humans ever come close to actualizing, and this may create a feeling of our having lost something that we deserve; some possible yet unrealized world that would make more sense for us to have, given the level of power and potential we possess.  And perhaps this lends itself to a feeling of angst or a lack of fulfillment as well.

As Pascal himself said:

“The natural misfortune of our mortal and feeble condition is so wretched that when we consider it closely, nothing can console us.” 

Furthermore, Barrett reminds us that Pascal identified the human tendency to escape this close consideration of our condition through the two “sovereign anodynes” of habit and diversion:

“Both habit and diversion, so long as they work, conceal from man “his nothingness, his forlornness, his inadequacy, his impotence and his emptiness.”  Religion is the only possible cure for this desperate malady that is nothing other than our ordinary mortal existence itself.”

Barrett describes our state of cultivating various habits and distractions as a kind of concealment of the ever-narrowing space of possibilities in our lives, as each day passes us by with some set of hopes and dreams forever lost in the past.  I think it would be fair to say, however, that the psychological benefit we gain from at least many of our habits and distractions warrants our having them in the first place.  As an analogy, if my doctor informs me one day that I’m going to die from a terminal illness in a few months (thus drastically narrowing the future space of possibilities in my life), is there really a problem with my trying to keep my mind off of such a morbid fate, so that I can make the most of the time that I have left?  I certainly wouldn’t advocate burying one’s head in the sand either, nor having irrational expectations, nor living in denial; so I think the time will be used best if I don’t completely forget that my timeline is coming to an end, because then I can more adequately appreciate the time that remains.

Moreover, I don’t agree that religion is the only possible cure for dealing with our “ordinary mortal existence”.  I think that psychological and moral development are the ultimate answers here, where a critical look at oneself and an ongoing attempt to become a better version of oneself in order to maximize one’s life fulfillment is key.  The world can be any way that it is but that doesn’t change the fact that certain attitudes toward that world and certain behaviors can make our experience of the world, as it truly is, better or worse.  Falling back on religion to console us is, to use Pascal’s own words, nothing but another habit and form of distraction, and one where we sacrifice knowing and facing the truth of our condition for some kind of ignorant bliss that relies on false beliefs about the world.  And I don’t think we can live authentically if we don’t face the condition we’re in as a species, and more importantly as individuals; by trying to believe as many true things and as few false things as possible.

When it comes to the human mind and how we deal with or process information about our everyday lives or aspects of our overall condition, it can be a rather messy process involving very complicated concepts, many of which have fuzzy boundaries and so can’t be dealt with in the same way that we deal with analyses relying strictly on logic.  In some cases of analyzing our lives, we don’t know exactly what the premises could even be, let alone how to arrange them within a logical structure to arrive at some kind of certain conclusion.  And this differs a lot from other areas of inquiry like, say, mathematics.  As Pascal delved deeper into the study of our human condition, he realized this difference and posited an interesting distinction in terms of what the mind can comprehend in these different contexts: the mathematical mind and the intuitive mind.  Within a field of study such as mathematics, it is our mind’s ability to logically comprehend various distinct and well-defined ideas that we make use of, whereas in other more concrete domains of our experience, we rely quite heavily on intuition.

Not only is logic not very useful in the more concrete and complex domains of our lives, but in many cases it may actually get in the way of seeing things clearly since our intuition, though less reliable than logic, is more up to the task of dealing with complexities that haven’t yet been parsed out into well-defined ideas and concepts.  Barrett describes this a little differently than I would when he says:

“What Pascal had really seen, then, in order to have arrived at this distinction was this: that man himself is a creature of contradictions and ambivalences such as pure logic can never grasp…By delimiting a sphere of intuition over against that of logic, Pascal had of course set limits to human reason. “

In the case of logic, one is certainly limited in its use when the concepts involved are probabilistic (rather than being well-defined or “black and white”) and there are significant limitations when our attitudes toward those concepts vary contextually.  We are after all very complex creatures with a number of competing goals and with a prioritization of those goals that varies over time.  And of course we have a number of value judgments and emotional dispositions that motivate our actions in often unpredictable ways.  So it seems perfectly reasonable to me that one should acknowledge the limits we have in our use of logic and reason.

Barrett makes another interesting claim about the limits of reason when he says:

“Three centuries before Heidegger showed, through a learned and laborious exegesis, that Kant’s doctrine of the limitations of human reason really rests on the finitude of our human existence, Pascal clearly saw that the feebleness of our reason is part and parcel of the feebleness of our human condition generally.  Above all, reason does not get at the heart of religious experience.”

And I think the last sentence is most important here: the limitations of reason include an inability to examine all that matters and all that is meaningful within religious experience.  Even though I am a kind of champion for reason, in the sense that I find it to be in short supply when it matters most and in the sense that I often advertise the dire need for critical thinking skills and for more education to combat our cognitive biases, I for one have never assumed that reason could accomplish the arduous task of dissecting religious experience in some reductionist way while retaining or accounting for everything of value within it.  I have, however, used reason to analyze various religious beliefs and claims about the world in order to discover their consequences on, or incompatibilities with, a number of other beliefs.

Theologians do something similar in their attempts to prove the existence of God, by beginning with some set of presuppositions (about God or the supernatural), and then applying reason to those faith-based premises to see what would result if they were in fact true.  And Pascal saw this mental exercise as futile and misguided as well because it misses the entire point of religion:

“In any case, God as the object of a rigorous demonstration, even supposing such a demonstration were forthcoming, would have nothing to do with the living needs of religion.”

I admire the fact that Pascal was honest here about what really matters most when it comes to religion and religious beliefs.  It doesn’t matter whether or not God exists, or whether any particular religious claim is actually true or false; rather, it is the living need that religion fulfills, as he perceives it, that ultimately motivates one to adhere to it.  Thus, theology doesn’t really matter to him, because those who want to be convinced by the arguments for God’s existence will be convinced simply because they desire the conclusion, insofar as they believe it will give them consolation from the human condition they find themselves in.

To add something to the bleak picture of just how contingent our existence is, Barrett mentions a personal story that Pascal shared about his having had a brush with death:

“While he was driving by the Seine one day, his carriage suddenly swerved, the door was flung open, and Pascal almost catapulted down the embankment to his death.  The arbitrariness and suddenness of this near accident became for him another lightning flash of revelation.  Thereafter he saw Nothingness as a possibility that lurked, so to speak, beneath our feet, a gulf and an abyss into which we might tumble at any moment.  No other writer has expressed more powerfully than Pascal the radical contingency that lies at the heart of human existence-a contingency that may at any moment hurl us all unsuspecting into non-being…The idea of Nothingness or Nothing had up to this time played no role at all in Western philosophy…Nothingness had suddenly and drastically revealed itself to him.”

And here it is: the concept of Nothingness which has truly grounded a bulk of the philosophical constructs found within existentialism.  To add further to this, Pascal also saw that one’s inevitable death wasn’t the whole picture of our contingency, but only a part of it; it was our having been born at a particular time, place, and within a certain culture that vastly reduces the number of possible life trajectories one can take, and thus vastly reduces our freedom in establishing our own identity.  Our life begins with contingency and ends with it also.

Pascal also acknowledged our existence as lying in the middle of the universe, in terms of being between the infinitesimal and microscopic on one side and the seemingly infinite at the cosmological scale on the other.  Within this middle position, he saw man as being “an All in relation to Nothingness, a Nothingness in relation to the All.”  I think it’s also worth pointing out that it is because of our evolutionary history, and the “middle position” we evolved within, that there is such a drastic misalignment between the world we were in some sense meant to understand and the micro and cosmic scales we’ve discovered in the last several hundred years.

Aside from any feelings of alienation and homelessness that these different scales have precipitated, our intuitions simply never evolved to grasp existence at the infinitesimal quantum level or at the cosmic relativistic level, which is why these discoveries in physics are difficult to understand even by experts in the field.  We should expect these discoveries to be vastly perplexing and counter-intuitive, for our consciousness evolved to deal with life at a very finite scale; yet another example of our finitude as human beings.

It was this very same success, the fact that we gained so much information about our world through these kinds of discoveries (including major advancements in mathematics), that also promoted the assumption that human nature could be perfected through the universal application of reason.

I’ll finish the post on this chapter with one final quote from Barrett that I found insightful:

“Poets are witnesses to Being before the philosophers are able to bring it into thought.  And what these particular poets were struggling to reveal, in this case, were the very conditions of Being that are ours historically today.  They were sounding, in poetic terms, the premonitory chords of our own era.”

It is the voice of the poets that we first hear lamenting the Enlightenment and what reason seems to have done, or at least what it had begun to do.  They were some of the most important precursors to the later existentialists and philosophers who would soon begin to process (in far more detail and with far more precision) the full effects of modernity on the human psyche.

The Experientiality of Matter

If there’s one thing that Nietzsche advocated for, it was to eliminate dogmatism and absolutism from one’s thinking.  And although I generally agree with him here, I do think that there is one exception to this rule.  One thing that we can be absolutely certain of is our own conscious experience.  Nietzsche actually had something to say about this (in Beyond Good and Evil, which I explored in a previous post), where he responds to Descartes’ famous adage “cogito ergo sum” (the very adage associated with my blog!), and he basically says that Descartes’ conclusion (“I think therefore I am”) shows a lack of reflection concerning the meaning of “I think”.  He wonders how he (and by extension, Descartes) could possibly know for sure that he was doing the thinking rather than the thought doing the thinking, and he even considers the possibility that what is generally described as thinking may actually be something more like willing or feeling or something else entirely.

Despite Nietzsche’s criticism against Descartes in terms of what thought is exactly or who or what actually does the thinking, we still can’t deny that there is thought.  Perhaps if we replace “I think therefore I am” with something more like “I am conscious, therefore (my) conscious experience exists”, then we can retain some core of Descartes’ insight while throwing out the ambiguities or uncertainties associated with how exactly one interprets that conscious experience.

So I’m in agreement with Nietzsche in the sense that we can’t be certain of any particular interpretation of our conscious experience, including whether or not there is an ego or a self (which Nietzsche actually describes as a childish superstition similar to the idea of a soul), nor can we make any certain inferences about the world’s causal relations stemming from that conscious experience.  Regardless of these limitations, we can still be sure that conscious experience exists, even if it can’t be ascribed to an “I” or a “self” or any particular identity (let alone a persistent identity).

Once we’re cognizant of this certainty, and if we’re able to crawl out of the well of solipsism and eventually build up a theory about reality (for pragmatic reasons at the very least), then we must remain aware of the priority of consciousness in any resultant theory we construct about reality, with regard to its structure or any of its other properties.  Personally, I believe that some form of naturalistic physicalism (a realistic physicalism) is the best candidate for an ontological theory that is the most parsimonious and explanatory for all that we experience in our reality.  However, most people that make the move to adopt some brand of physicalism seem to throw the baby out with the bathwater (so to speak), whereby consciousness gets eliminated by assuming it’s an illusion or that it’s not physical (therefore having no room for it in a physicalist theory, aside from its neurophysiological attributes).

Although I used to feel differently about consciousness (and its relationship to physicalism), in that I once thought it plausible for consciousness to be some kind of an illusion, upon further reflection I’ve come to realize that this was a rather ridiculous position to hold.  Consciousness can’t be an illusion in the proper sense of the word, because the experience of consciousness is real.  Even if I’m hallucinating, such that my perceptions don’t directly correspond with the actual incoming sensory information transduced through my body’s sensory receptors, we can only say that the perceptions are illusory insofar as they don’t directly map onto that incoming sensory information.  But I still can’t say that having these experiences is itself an illusion.  And this is because consciousness is self-evident and experiential in that it constitutes whatever is experienced, no matter what that experience consists of.

As for my thoughts on physicalism, I came to realize that positing consciousness as an intrinsic property of at least some kinds of physical material (analogous to a property like mass) allows us to avoid having to call consciousness non-physical.  If it is simply an experiential property of matter, that doesn’t negate its being a physical property of that matter.  It may be that we can’t access this property in such a way as to evaluate it with external instrumentation, like we can for all the other properties of matter that we know of such as mass, charge, spin, or what-have-you, but that doesn’t mean an experiential property should be off limits for any physicalist theory.  It’s just that most physicalists assume that everything can or has to be reducible to the externally accessible properties that our instrumentation can measure.  And this suggests that they’ve simply assumed that the physical can only include externally accessible properties of matter, rather than both internally and externally accessible properties of matter.

Now it’s easy to see why science might push philosophy in this direction because its methodology is largely grounded on third-party verification and a form of objectivity involving the ability to accurately quantify everything about a phenomenon with little or no regard for introspection or subjectivity.  And I think that this has caused many a philosopher to paint themselves into a corner by assuming that any ontological theory underlying the totality of our reality must be constrained in the same way that the physical sciences are.  To see why this is an unwarranted assumption, let’s consider a “black box” that can only be evaluated by a certain method externally.  It would be fallacious to conclude that just because we are unable to access the inside of the box, that the box must therefore be empty inside or that there can’t be anything substantially different inside the box compared to what is outside the box.

We can analogize this limitation of studying consciousness with our ability to study black holes within the field of astrophysics, where we’ve come to realize that accessing any information about their interior (aside from how much mass there is) is impossible to do from the outside.  And if we managed to access this information (if there is any) from the inside by leaping past its outer event horizon, it would be impossible for us to escape and share any of that information.  The best we can do is to learn what we can from the outside behavior of the black hole in terms of its interaction with surrounding matter and light and infer something about the inside, like how much matter it contains (e.g. we can infer the mass of a black hole from its outer surface area).  And we can learn a little bit more by considering what is needed to create or destroy a black hole, thus creating or destroying any interior qualities that may or may not exist.
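Since the mass-from-area inference just mentioned is a concrete calculation, here’s a minimal Python sketch of it (my own illustration, not anything from Barrett; it assumes a non-rotating Schwarzschild black hole, and the constants and helper names are mine):

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
k_B = 1.381e-23    # Boltzmann constant, J/K

def mass_from_horizon_area(area_m2: float) -> float:
    """Infer a non-rotating black hole's mass from its horizon area.

    For a Schwarzschild black hole, A = 4*pi*r_s^2 with r_s = 2*G*M/c^2,
    so M = (c^2 / (2*G)) * sqrt(A / (4*pi)).
    """
    r_s = math.sqrt(area_m2 / (4 * math.pi))
    return r_s * c**2 / (2 * G)

def hawking_temperature(mass_kg: float) -> float:
    """Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B): smaller black
    holes are hotter, which is why a starved black hole eventually
    evaporates away entirely."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

# Sanity check: build the horizon area of a one-solar-mass black hole
# (r_s is roughly 2.95 km) and recover the mass from that area alone.
M_sun = 1.989e30                    # kg
r_s = 2 * G * M_sun / c**2          # Schwarzschild radius, ~2953 m
A = 4 * math.pi * r_s**2            # horizon area, ~1.1e8 m^2
print(mass_from_horizon_area(A) / M_sun)  # ~1.0 solar masses
print(hawking_temperature(M_sun))         # ~6e-8 K, far colder than the CMB
```

The point, for this analogy, is that everything here is computed from the outside: the horizon area and the evaporation temperature are externally accessible, while whatever lies inside remains off-limits.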

A black hole can only form from certain configurations of matter, particularly aggregates that are above a certain mass and density.  And it can only be destroyed by starving it: deprived of any new matter, it will slowly evaporate entirely into Hawking radiation, destroying anything that was on the inside in the process.  So we can infer that any internal qualities it does have, however inaccessible they may be, can be brought into and out of existence through certain physical processes.

Similarly, we can infer some things about consciousness by observing one’s external behavior, including inferring some conditions that can create, modify, or destroy that type of consciousness, but we are unable to know what it’s like to be on the inside of that system once it exists.  We’re only able to know about the inside of our own conscious system, where we are in some sense inside our own black hole, with nobody else able to access this perspective.  And I think it is easy enough to imagine that certain configurations of matter simply have an intrinsic, externally inaccessible experiential property, just as certain configurations of matter lead to the creation of a black hole with its own externally inaccessible and qualitatively unknown internal properties.  Despite the fact that we can’t access the black hole’s interior with a strictly external method to determine its internal properties, this doesn’t mean we should assume that whatever properties may exist inside it are therefore fundamentally non-physical, just as we wouldn’t consider alternate dimensions (such as those predicted in M-theory/string theory) that we can’t physically access to be non-physical.  Perhaps one or more of these inaccessible dimensions (if they exist) is what accounts for an intrinsic experiential property within matter (though this is entirely speculative and need not be true for the previous points to hold, it’s an interesting thought nevertheless).

Here’s a relevant quote from the philosopher Galen Strawson, where he outlines what physicalism actually entails:

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an ‘inference to the best explanation’… Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. ‘And were the inmost essence of things laid open to us’ I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism… So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of ‘substance dualism’… Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?
While I don’t believe that all matter has a mind per se, because a mind is generally conceived as being a complex information processing structure, I think it is likely that all matter has an experiential quality of some kind, even if most instantiations of it are entirely unrecognizable as what we’d generally consider to be “consciousness” or “mentality”.  I believe that the intuitive gap here is born from the fact that the only minds we are confident exist are those instantiated by a brain, which has the ability to make an incredibly large number of experiential discriminations between various causal relations, thus giving it a capacity that is incredibly complex and not observed anywhere else in nature.  On the other hand, a particle of matter on its own would be hypothesized to have the capacity to make only one or two of these kinds of discriminations, making it unintelligent and thus incapable of brain-like activity.  Once a person accepts that an experiential quality can come in varying degrees from say one experiential “bit” to billions or trillions of “bits” (or more), then we can plausibly see how matter could give rise to systems that have a vast range in their causal power, from rocks that don’t appear to do much at all, to living organisms that have the ability to store countless representations of causal relations (memory) allowing them to behave in increasingly complex ways.  And perhaps best of all, this approach solves the mind-body problem by eliminating the mystery of how fundamentally non-experiential stuff could possibly give rise to experientiality and consciousness.

It’s Time For Some Philosophical Investigations

Ludwig Wittgenstein’s Philosophical Investigations is a nice piece of work where he attempts to explain his views on language and the consequences of this view on various subjects like logic, semantics, cognition, and psychology.  I’ve mentioned some of his views very briefly in a couple of earlier posts, but I wanted to delve into his work in a little more depth here and comment on what strikes me as most interesting.  Lately, I’ve been looking back at some of the books I’ve read from various philosophers and have been wanting to revisit them so I can explore them in more detail and share how they connect to some of my own thoughts.  Alright…let’s begin.

Language, Meaning, & Their Probabilistic Attributes

He opens his Philosophical Investigations with a quote from St. Augustine’s Confessions that describes how a person learns a language.  St. Augustine believed that this process involved simply learning the names of objects (for example, by someone else pointing to the objects that are so named) and then stringing them together into sentences, and Wittgenstein points out that while this is true to some trivial degree, it overlooks a much more fundamental relationship between language and the world.  For Wittgenstein, the meaning of a word cannot simply be attached to an object like a name can.  The meaning of a word or concept has much more of a fuzzy boundary, as it depends on a breadth of context or associations with other concepts.  He analogizes this plurality in the meanings of words with the resemblances between members of a family.  While there may be some resemblance between different uses of a word or concept, we aren’t able to formulate a strict definition to fully describe this resemblance.

One problem then, especially within philosophy, is that many people assume that the meaning of a word or concept is fixed with sharp boundaries (just like the fixed structure of words themselves).  Wittgenstein wants to disabuse people of this false notion (much as Nietzsche tried to do before him) so that they can stop misusing language, as he believed that this misuse was the cause of many (if not all) of the major problems that had cropped up in philosophy over the centuries, particularly in metaphysics.  Since meaning is actually somewhat fluid and can’t be accounted for by any fixed structure, Wittgenstein thinks that any meaning we can attach to these words is ultimately going to be determined by how those words are used.  Since he ties meaning to use, and since this use is something occurring in our social forms of life, it has an inextricably external character.  Thus, the only way to determine whether someone else has a particular understanding of a word or concept is through their behavior, in response to or in association with the use of the word(s) in question.  This is especially important in the case of ambiguous sentences, which Wittgenstein explores to some degree.

Probabilistic Shared Understanding

Some of what Wittgenstein is trying to point out here are what I like to refer to as the inherently probabilistic attributes of language.  And it seems to me to be probabilistic for a few different reasons, beyond what Wittgenstein seems to address.  First, there is no guarantee that what one person means by a word or concept exactly matches the meaning from another person’s point of view, but at the very least there is almost always going to be some amount of semantic overlap (and possibly 100% in some cases) between the two individuals’ intended meanings, and so there is going to be some probability that the speaker and the listener do in fact share a complete understanding.  It seems reasonable to argue that simpler concepts will have a higher probability of complete semantic overlap, whereas more complex concepts are more likely to leave a gap in that shared understanding.  And I think this is true even if we can’t actually calculate what any of these probabilities are.
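Even though we can’t calculate the real probabilities, a toy model can make the idea concrete.  Here’s a minimal Python sketch (entirely my own construction, not anything Wittgenstein proposed) that treats each speaker’s understanding of a concept as a set of associated features and uses the overlap between the two sets as a crude stand-in for the probability of a shared understanding:

```python
# Toy model: a speaker's understanding of a concept is a set of features,
# and the Jaccard index |A ∩ B| / |A ∪ B| stands in for the probability
# that two speakers share a complete understanding.

def semantic_overlap(features_a: set, features_b: set) -> float:
    """Jaccard similarity between two feature sets, in [0, 1]."""
    if not features_a and not features_b:
        return 1.0
    return len(features_a & features_b) / len(features_a | features_b)

# A simple concept: near-total overlap between speakers.
four_a = {"quantity", "successor_of_three", "even"}
four_b = {"quantity", "successor_of_three", "even"}

# A complex concept: partial overlap, leaving room for misunderstanding.
love_a = {"affection", "commitment", "attachment", "sacrifice"}
love_b = {"affection", "passion", "attachment", "desire"}

print(semantic_overlap(four_a, four_b))  # 1.0  -- complete overlap
print(semantic_overlap(love_a, love_b))  # 0.33 -- a sizable semantic gap
```

On this toy picture, a simple concept like four yields near-total overlap while a complex concept like love leaves a gap, which mirrors the claim above even though the real feature sets (and hence the real probabilities) are not accessible to us.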

Now my use of the word meaning here differs from Wittgenstein’s because I am referring to something that is not exclusively shared by all parties involved and I am pointing to something that is internal (a subjective understanding of a word) rather than external (the shared use of a word).  But I think this move is necessary if we are to capture all of the attributes that people explicitly or implicitly refer to with a concept like meaning.  It seems better to compromise with Wittgenstein’s thinking and refer to the meaning of a word as a form of understanding that is intimately connected with its use, but which involves elements that are not exclusively external.

We can justify this version of meaning through an example.  If I help teach you how to ride a bike and explain that this activity is called biking or to bike, then we can use Wittgenstein’s conception of meaning and it will likely account for our shared understanding, and so I have no qualms about that, and I’d agree with Wittgenstein that this is perhaps the most important function of language.  But it may be the case that you have an understanding that biking is an activity that can only happen on Tuesdays, because that happened to be the day that I helped teach you how to ride a bike.  Though I never intended for you to understand biking in this way, there was no immediate way for me to infer that you had this misunderstanding on the day I was teaching you this.  I could only learn of this fact if you explicitly explained to me your understanding of the term with enough detail, or if I had asked you additional questions like whether or not you’d like to bike on a Wednesday (for example), with you answering “I don’t know how to do that as that doesn’t make any sense to me.”  Wittgenstein doesn’t account for this gap in understanding in his conception of meaning and I think this makes for a far less useful conception.

Now I think that Wittgenstein is still right in the sense that the only way to determine someone else’s understanding (or lack thereof) of a word is through their behavior, but due to chance as well as where our attention is directed at any given moment, we may never see the right kinds of behavior to rule out any number of possible misunderstandings, and so we’re apt to just assume that these misunderstandings don’t exist because language hides them to varying degrees.  But they can and in some cases do exist, and this is why I prefer a conception of meaning that takes these misunderstandings into account.  So I think it’s useful to see language as probabilistic in the sense that there is some probability of a complete shared understanding underlying the use of a word, and thus there is a converse probability of some degree of misunderstanding.

Language & Meaning as Probabilistic Associations Between Causal Relations

A second way of language being probabilistic is due to the fact that the unique meanings associated with any particular use of a word or concept as understood by an individual are derived from probabilistic associations between various inferred causal relations.  And I believe that this is the underlying cause of most of the problems that Wittgenstein was trying to address in this book.  He may not have been thinking about the problem in this way, but it can account for the fuzzy boundary problem associated with the task of trying to define the meaning of words since this probabilistic structure underlies our experiences, our understanding of the world, and our use of language such that it can’t be represented by static, sharp definitions.  When a person is learning a concept, like redness, they experience a number of observations and infer what is common to all of those experiences, and then they extract a particular subset of what is common and associate it with a word like red or redness (as opposed to another commonality like objectness, or roundness, or what-have-you).  But in most cases, separate experiences of redness are going to be different instantiations of redness with different hues, textures, shapes, etc., which means that redness gets associated with a large range of different qualia.

If you come across a new quale that seems to more closely match previous experiences associated with redness rather than orangeness (for example), I would argue that this is because the brain has assigned a higher probability to that quale being an instance of redness as opposed to, say, orangeness.  And the brain may very well test a hypothesis of the quale matching the concept of redness versus the concept of orangeness, and depending on your previous experiences of both, and the present context of the experience, your brain may assign a higher probability to orangeness instead.  Perhaps if a red-orange colored object is mixed in with a bunch of unambiguously orange-colored objects, it will be perceived as a shade of orange (to match it with the rest of the set), but if the case were reversed and it were mixed in with a bunch of unambiguously red-colored objects, it will be perceived as a shade of red instead.
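To illustrate what such a hypothesis test might look like, here’s a toy Bayesian sketch in Python (my own construction, with made-up likelihoods and an arbitrary hue scale; I’m not claiming this is literally how the brain computes):

```python
import math

def gaussian(x: float, mean: float, sd: float) -> float:
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_red(hue: float, prior_red: float) -> float:
    """P(red | hue), with assumed likelihoods on an arbitrary 0-60 hue
    scale: red hues centered at 10, orange hues centered at 30."""
    like_red = gaussian(hue, mean=10.0, sd=8.0)
    like_orange = gaussian(hue, mean=30.0, sd=8.0)
    evidence = like_red * prior_red + like_orange * (1.0 - prior_red)
    return like_red * prior_red / evidence

ambiguous_hue = 20.0  # a borderline red-orange stimulus

# Surrounded by unambiguously orange objects: context lowers the prior on red.
print(posterior_red(ambiguous_hue, prior_red=0.2))  # 0.2 -> perceived as orange
# Surrounded by unambiguously red objects: context raises the prior on red.
print(posterior_red(ambiguous_hue, prior_red=0.8))  # 0.8 -> perceived as red
```

Because the borderline hue fits both color categories equally well, the posterior simply follows the context-driven prior, which is exactly the behavior the red-orange example above describes.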

Since our perception of the world depends on context, then the meanings we assign to words or concepts also depends on context, but not only in the sense of choosing a different use of a word (like Wittgenstein argues) in some language game or other, but also by perceiving the same incoming sensory information as conceptually different qualia (like in the aforementioned case of a red-orange colored object).  In that case, we weren’t intentionally using red or orange in a different way but rather were assigning one word or the other to the exact same sensory information (with respect to the red-orange object) which depended on what else was happening in the scene that surrounded that subset of sensory information.  To me, this highlights how meaning can be fluid in multiple ways, some of that fluidity stemming from our conscious intentions and some of it from unintentional forces at play involving our prior expectations within some context which directly modify our perceived experience.

This can also be seen through Wittgenstein’s example of what he calls a duckrabbit, an ambiguous image that can be perceived as a duck or a rabbit.  I’ve taken the liberty of inserting this image here along with a copy of it which has been rotated in order to more strongly invoke the perception of a rabbit.  The first image no doubt looks more like a duck and the second image, more like a rabbit.

Now Wittgenstein says that when one is looking at the duckrabbit and sees a rabbit, they aren’t interpreting the picture as a rabbit but are simply reporting what they see.  But in the case where a person sees a duck first and then later sees a rabbit, Wittgenstein isn’t sure what to make of this.  However, he claims to be sure that whatever it is, it can’t be the case that the external world stays the same while an internal cognitive change takes place.  Wittgenstein was incorrect on this point because the external world doesn’t change (in any relevant sense) despite our seeing the duck or seeing the rabbit.  Furthermore, he never demonstrates why two different perceptions would require a change in the external world.  The fact of the matter is, you can stare at this static picture and ask yourself to see a duck or to see a rabbit and it will affect your perception accordingly.  This is partially accomplished by you mentally rotating the image in your imagination and seeing if that changes how well it matches one conception or the other, and since it matches both conceptions to a high degree, you can easily perceive it one way or the other.  Your brain is simply processing competing hypotheses to account for the incoming sensory information, and the top-down predictions of rabbitness or duckness (which you’ve acquired over past experiences) actually changes the way you perceive it with no change required in the external world (despite Wittgenstein’s assertion to the contrary).

To give yet another illustration of the probabilistic nature of language, just imagine the head of a bald man and ask yourself: if you were to add one hair at a time to this bald man’s head, at what point does he lose the property of baldness?  If hairs were slowly added at random, and you could simply say “Stop!  Now he’s no longer bald!” at some particular time, there’s no doubt in my mind that if this procedure were repeated (even if the hairs were added non-randomly), you would say “Stop!  Now he’s no longer bald!” at a different point in the transition.  Similarly, if you were looking at a light that was changing color from red to orange, and were asked to say when the color had changed to orange, you would pick a point in the transition that is within some margin of error, but it wouldn’t be an exact, repeatable point.  We could run this thought experiment with all sorts of concepts that are attached to words, like cat and dog: for example, use a computer graphics program to seamlessly morph a picture of a cat into a picture of a dog and ask at what point the cat “turned into” a dog.  The answer is going to be based on a probability of coincident features that you detect, which can vary over time.  Here’s a series of pictures showing a chimpanzee morphing into Bill Clinton to better illustrate this point:

At what point do we stop seeing a picture of a chimpanzee and start seeing a picture of something else?  When do we first see Bill Clinton?  What if I expanded this series of 15 images into a series of 1000 images so that this transition happened even more gradually?  It would be highly unlikely to pick the exact same point in the transition two times in a row if the images weren’t numbered or arranged in a grid.  We can analogize this phenomenon with an ongoing problem in science, known as the species problem.  This problem can be described as the inherent difficulty of defining exactly what a species is, which is necessary if one wants to determine if and when one species evolves into another.  This problem occurs because the evolutionary processes giving rise to new species are relatively slow and continuous whereas sorting those organisms into sharply defined categories involves the elimination of that generational continuity and replacing it with discrete steps.

And we can see this effect in the series of images above, where each picture could represent some large number of generations in an evolutionary timeline, such that each picture/organism looks roughly like the "parent" or "child" of the picture/organism adjacent to it.  Despite this continuity, if we look at the first picture and the last one, they look like pictures of distinct species.  So if we want to categorize the first and last picture as distinct species, then we create a problem when trying to account for every picture/organism lying in between that transition.  Similarly, words take on an appearance of strict categorization (of meaning) when in actuality any underlying meaning attached to them is probabilistic and dynamic.  And as Wittgenstein pointed out, this makes it more appropriate to consider meaning as use, so that the probabilistic and dynamic attributes of meaning aren't lost.
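To see how such a fuzzy boundary behaves, here's a small Python simulation (entirely invented for illustration) in which each judgment of "is this still a chimpanzee?" is a noisy probabilistic comparison rather than a fixed cutoff; repeated runs cross the category boundary at different frames, just as a person asked twice would stop at different points:

```python
import random

def boundary_frame(n_frames=1000, noise=0.08, seed=None):
    """Return the first frame judged to no longer match 'chimpanzee'."""
    rng = random.Random(seed)
    for i in range(n_frames):
        chimp_match = 1.0 - i / (n_frames - 1)   # feature match decays smoothly
        # Judge "chimp" as long as the noisy match estimate stays above 0.5.
        if chimp_match + rng.gauss(0.0, noise) < 0.5:
            return i
    return n_frames

print([boundary_frame(seed=s) for s in range(5)])  # a different frame each run
```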

Now you may think you can get around this problem of fluidity or fuzzy boundaries with concepts that are simpler and more abstract, like the concept of a particular quantity (say, a quantity of four objects) or other concepts in mathematics.  But in order to learn these concepts in the first place, like quantity, and then associate particular instances of it with a word, like four, one had to be presented with a number of experiences and infer what was common to all of them (as was the case with redness mentioned earlier).  And this inference (I would argue) involves a probabilistic process as well; it's just that the resulting probability of our inferring particular experiences as an instance of four objects is incredibly high, and therefore repeatable and relatively unambiguous.  That kind of inference is thus likely to be sustained no matter what the context, and it is likely to be shared by two individuals with essentially 100% semantic overlap (i.e. it's almost certain that what I mean by four is exactly what you mean by four, even though this is almost certainly not the case for a concept like love or consciousness).  This makes mathematical concepts qualitatively different from other concepts (especially those that are more complex or that more closely map onto reality), but it doesn't negate their having a probabilistic attribute or foundation.

Looking at the Big Picture

Though this discussion of language and meaning is not an exhaustive analysis of Wittgenstein's Philosophical Investigations, it represents an analysis of the main theme present throughout.  His main point was to shed light on the disparity between how we often think of language and how we actually use it.  When we stray from the way language is actually used in our everyday lives, in one form of social life or another, and instead misuse it (as often happens in philosophy), this creates all sorts of problems and unwarranted conclusions.  He also wants his readers to realize that the ultimate goal of philosophy should not be to construct metaphysical theories and deep explanations underlying everyday phenomena, since these are often born out of unwarranted generalizations and other assumptions stemming from how our grammar is structured.  Instead we ought to subdue our temptations to generalize and to be dogmatic, and use philosophy as a kind of therapeutic tool to keep our abstract thinking in check and to better understand ourselves and the world we live in.

Although I disagree with some of Wittgenstein's claims about cognition (in terms of how intimately it is connected to the external world) and take some issue with his arguably less useful conception of meaning, he makes a lot of sense overall.  Wittgenstein was clearly hitting upon a real difference between the way actual causal relations in our experience are structured and how those relations are represented in language.  Personally, I think that work within philosophy is moving in the right direction if the contributions made therein lead us to make more successful predictions about the causal structure of the world, and I believe this to be so even if that progress includes generalizations that may not be exactly right.  As long as we can structure our thinking to make more successful predictions, then we're moving forward as far as I'm concerned.  In any case, I liked the book overall and thought that the interlocutory style gave the reader a nice break from the typical form seen in philosophical argumentation.  I highly recommend it!

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part II)

In the first post of this series I introduced some of the basic concepts involved in the Predictive Processing (PP) theory of perception and action.  I briefly tied together the notions of belief, desire, emotion, and action from within a PP lens.  In this post, I'd like to discuss the relationship between language and ontology through the same framework.  I'll also start talking about PP in an evolutionary context, though I'll have more to say about that in future posts in this series.

Active (Bayesian) Inference as a Source for Ontology

One of the main themes within PP is the idea of active (Bayesian) inference whereby we physically interact with the world, sampling it and modifying it in order to reduce our level of uncertainty in our predictions about the causes of the brain’s inputs.  Within an evolutionary context, we can see why this form of embodied cognition is an ideal schema for an information processing system to employ in order to maximize chances of survival in our highly interactive world.

In order to reduce the amount of sensory information that has to be processed at any given time, it is far more economical for the brain to only worry about the prediction error that flows upward through the neural system, rather than processing all incoming sensory data from scratch.  If the brain is employing a set of predictions that can "explain away" most of the incoming sensory data, then the downward flow of predictions can meet and effectively cancel out the upward flow of sensory information.  The only thing that remains to propagate upward through the system and do any "cognitive work" on the predictive models flowing downward (i.e. the only thing that needs to be processed) is the remaining prediction error, that is, the difference between the predicted sensory input and the actual sensory input.  This is similar to data compression strategies for video files, which only encode the information that changes over time (pixels that change brightness/color) and compress away the information that remains constant (pixels that do not change from frame to frame).
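As a rough sketch of that compression analogy (not an account of any real codec, nor of actual neural circuitry), the following Python code transmits only the per-frame "prediction error" of the simplest possible generative model, namely the prediction that each frame is identical to the last:

```python
import numpy as np

def predictive_encode(frames):
    """Encode frames as residuals against the 'same as last frame' prediction."""
    prev = np.zeros_like(frames[0])
    residuals = []
    for frame in frames:
        residuals.append(frame - prev)  # prediction error = actual - predicted
        prev = frame                    # update the running prediction
    return residuals

def predictive_decode(residuals):
    """Reconstruct frames by adding each residual to the running prediction."""
    prev = np.zeros_like(residuals[0])
    frames = []
    for error in residuals:
        prev = prev + error
        frames.append(prev)
    return frames

# A mostly static scene: after the first frame, the residuals are zero
# everywhere except where something actually changes.
rng = np.random.default_rng(0)
scene = rng.integers(0, 255, (8, 8)).astype(float)
frames = [scene.copy() for _ in range(10)]
frames[5][0, 0] += 50.0  # a single pixel brightens in frame 5
residuals = predictive_encode(frames)
assert np.allclose(predictive_decode(residuals), frames)
print(sum(int(np.count_nonzero(r)) for r in residuals[1:]))  # only 2 entries
```

Only the changes propagate; everything the "model" already predicts gets cancelled out, which is the same economy the brain is claimed to exploit.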

The ultimate goal for this strategy within an evolutionary context is to allow the organism to understand its environment in the most salient ways for the pragmatic purposes of accomplishing goals relating to survival.  But once humans began to develop culture and evolve culturally, the predictive strategy gained a new kind of evolutionary breathing space, being able to predict increasingly complex causal relations and developing technology along the way.  All of these inferred causal relations appear to me to be the very source of our ontology, as each hierarchically structured prediction and its ability to become associated with others provides an ideal platform for differentiating between any number of spatio-temporal conceptions and their categorical or logical organization.

An active Bayesian inference system is also ideal to explain our intellectual thirst, human curiosity, and interest in novel experiences (to some degree), because we learn more about the world (and ourselves) by interacting with it in new ways.  In doing so, we are provided with a constant means of fueling and altering our ontology.

Language & Ontology

Language is an important component here too, and it fits naturally within a PP framework, since it serves to further link perception and action together in a very important way, allowing us to make new kinds of predictions about the world that wouldn't have been possible without it.  A tool like language also makes a lot of sense from an evolutionary perspective, since better predictions about the world result in a higher chance of survival.

When we use language by speaking or writing it, we are performing an action which is instantiated by the desire to do so (see the previous post about "desire" within a PP framework).  When we interpret language by listening to it or by reading it, we are performing a perceptual task, which is again simply another set of predictions (in this case, pertaining to the specific causes leading to our sensory inputs).  If we were simply sending and receiving non-linguistic noise, then the same basic predictive principles underlying perception and action would still apply, but something new emerges when we send and receive actual language (which contains information).  With language, we begin to associate certain sounds and visual information with some kind of meaning or meaningful information.  Once we can do this, we can effectively share our thoughts with one another, or at least many aspects of them.  This provides an enormous evolutionary advantage, as now we can communicate almost anything we want to one another, store it in external forms of memory (books, computers, etc.), and further analyze or manipulate the information for various purposes (accounting, inventory, science, mathematics, etc.).

By being able to predict certain causal outcomes through the use of language, we are effectively using the lower level predictions associated with perceiving and emitting language to satisfy higher level predictions related to more complex goals, including those that extend far into the future.  Since the information that is sent and received amounts to testing or modifying our predictions of the world, we are effectively using language to share one brain's set of predictions with another brain, and to modulate them.  One important aspect of this process is that this information is inherently probabilistic, which is why language often trips people up with ambiguities, nuances, multiple meanings behind words, and other attributes that frequently lead to misunderstanding.  Wittgenstein is one of the more prominent philosophers who caught on to this property of language and its consequences for philosophical problems and for how we see the world as structured.  I think a lot of the problems Wittgenstein elaborated on with respect to language can be better accounted for by looking at language as dealing with probabilistic ontological/causal relations that serve some pragmatic purpose, with the meaning of any word or phrase best described by its use rather than by some clear-cut definition.

This probabilistic attribute of language, in terms of the meanings of words having fuzzy boundaries, also tracks very well with the ontology that a brain currently has access to.  Our ontology, or the set of things we think exist in the world, is often categorized in various ways, with some of the more concrete entities given names such as: "animals", "plants", "rocks", "cats", "cups", "cars", "cities", etc.  But if I morph a wooden chair (say, by chipping away at parts of it with a chisel), eventually it will no longer be recognizable as a chair, and it may begin to look more like a table than a chair, or like nothing other than an oddly shaped chunk of wood.  During this process, it may be difficult to point to the exact moment that it stopped being a chair and instead became a table or something else, and this would make sense if what we know to be a chair or table or what-have-you is nothing more than a probabilistic high-level prediction about certain causal relations.  If my brain perceives an object that produces too high of a prediction error based on the predictive model of what a "chair" is, then it will try another model (such as the predictive model pertaining to a "table"), potentially moving to models that are less and less specific until it is satisfied with recognizing the object as merely a "chunk of wood".
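Here's a toy Python sketch of that fallback process (the feature sets and tolerance are invented; real predictive models are hierarchical and continuous, not feature checklists).  The "brain" tries its most specific model first and retreats to less specific ones whenever the prediction error is too high:

```python
# Toy "predictive models" as expected feature sets, most specific first
# (relies on dict insertion order).
MODELS = {
    "chair":         {"flat surface", "legs", "backrest", "wooden"},
    "table":         {"flat surface", "legs", "wooden"},
    "chunk of wood": {"wooden"},
}

def prediction_error(expected, observed):
    """Fraction of the model's expected features missing from the observation."""
    return len(expected - observed) / len(expected)

def recognize(observed, tolerance=0.2):
    # Accept the first (most specific) model whose error is low enough.
    for name, expected in MODELS.items():
        if prediction_error(expected, observed) <= tolerance:
            return name
    return "unrecognized"

print(recognize({"flat surface", "legs", "backrest", "wooden"}))  # chair
print(recognize({"flat surface", "legs", "wooden"}))  # table (backrest chiseled off)
print(recognize({"wooden"}))                          # chunk of wood
```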

From a PP lens, we can consider lower level predictions pertaining to more basic causes of sensory input (bright/dark regions, lines, colors, curves, edges, etc.) to form some basic ontological building blocks, and when they are assembled into higher level predictions, the amount of integrated information increases.  This information integration process leads to condensed probabilities about increasingly complex causal relations, which ends up reducing the dimensionality of the cause-effect space of the predicted phenomenon (where a set of separate cause-effect repertoires is combined into a smaller number of them).

You can see the advantage here by considering what the brain might do if it's looking at a black cat sitting on a brown chair.  What if the brain were to treat this scene as merely a set of pixels on the retina that change over time, with no expectation that any subset of pixels will change in ways that differ from any other subset?  This wouldn't be very useful in predicting how the visual scene will change over time.  What if, instead, the brain differentiates one subset of pixels (corresponding to what we call a cat) from all the rest, and does this in part by predicting proximity relations between neighboring pixels in the subset (so if some black pixels move from the right to the left visual field, then some number of neighboring black pixels are predicted to move with them)?

This latter method treats the subset of black-colored pixels as a separate object (as opposed to treating the entire visual scene as a single object), and doing this kind of differentiation in more and more complex ways leads to a well-defined object or concept, or a large number of them.  Associating sounds like "meow" with this subset of black-colored pixels is just one example of yet another set of properties or predictions that further defines this perceived object as distinct from the rest of the perceptual scene.  Associating this object with a visual or auditory label such as "cat" finally links this ontological object with language.  As long as we agree on what is generally meant by the word "cat" (which we determine through its use), then we can share and modify the predictive models associated with such an object or concept, as we can with any other successful instance of linguistic communication.
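A crude Python illustration of that proximity prediction (invented for this post; real vision involves vastly more than this) groups together the dark pixels that move coherently from one frame to the next, differentiating the "cat" from the static background:

```python
import numpy as np

def coherent_group(frame_a, frame_b, shift=(0, -1)):
    """Return dark pixels in frame A that reappear shifted together in frame B."""
    dark_a = set(map(tuple, np.argwhere(frame_a < 0.5)))
    dark_b = set(map(tuple, np.argwhere(frame_b < 0.5)))
    dr, dc = shift
    # Prediction: if the object moves by (dr, dc), each dark pixel (r, c)
    # should reappear at (r + dr, c + dc), along with its neighbors.
    return {p for p in dark_a if (p[0] + dr, p[1] + dc) in dark_b}

a = np.ones((5, 5)); b = np.ones((5, 5))
a[2, 2:4] = 0.0   # a tiny 1x2 "cat" of dark pixels...
b[2, 1:3] = 0.0   # ...shifted one pixel to the left in the next frame
print(coherent_group(a, b))  # both "cat" pixels satisfy the motion prediction
```

Pixels that satisfy the shared motion prediction get treated as one object; everything else stays background.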

Language, Context, and Linguistic Relativism

However, it should be noted that as we get to more complex causal relations (more complex concepts/objects), we can no longer give these concepts a simple one-word label and expect to communicate information about them nearly as easily as we could for the concept of a "cat".  Think about concepts like "love" or "patriotism" or "transcendence" and realize how many different ways we use those terms, and how they can mean all sorts of different things; our meaning behind those words will therefore be heavily conveyed to others by the context they are used in.  Context, in a PP framework, could be described as simply the (expected) conjunction of multiple predictive models (multiple sets of causal relations).  For example, the conjunction of the predictive models pertaining to the concepts of food, pizza, and a desirable taste implies the particular use of the word "love" found in the phrase "I love pizza".  This use of the word "love" is different from one which involves the conjunction of predictive models pertaining to the concepts of intimacy, sex, infatuation, and care, as implied in a phrase like "I love my wife".  In any case, conjunctions of predictive models can get complicated, and this carries over to our stretching our language to its very limits.

Since we are immersed in language and since it is integral to our day-to-day lives, we also end up being conditioned to think linguistically in a number of ways.  For example, we often think with an interior monologue (e.g. "I am hungry and I want pizza for lunch") even when we don't plan on communicating this information to anyone else, so it's not as if we're simply rehearsing what we need to say before we say it.  I tend to think, however, that this linguistic thinking (thinking in our native language) is more or less a result of the fact that the causal relations we think about have become so strongly associated with certain linguistic labels and propositions that we automatically think of the causal relations alongside the labels we "hear" in our head.  This seems to be true even if the causal relations could, in principle at least, be thought of without any linguistic labels.  We've simply learned to associate them so strongly with one another that in most cases separating the two is just not possible.

On the flip side, this tendency of language to associate itself with our thoughts also puts certain barriers or restrictions on our thoughts.  If we are always preparing to share our thoughts through language, then we're going to become somewhat entrained to think in ways that can be most easily expressed in a linguistic form.  So although language may simply be along for the ride with many non-linguistic aspects of thought, our tendency to use it may also structure our thinking and reasoning in significant ways.  This would account for why people raised in different cultures with different languages see the world in different ways based on the structure of their language.  While linguistic determinism (the strong version of the Sapir-Whorf hypothesis) seems to have been ruled out, there is still strong evidence to support linguistic relativism (the weak version of the Sapir-Whorf hypothesis), whereby one's language affects one's ontology and view of how the world is structured.

If language is so heavily used day-to-day, then this phenomenon makes sense as viewed through a PP lens: we're going to end up putting a high weight on the predictions that link ontology with language, because these predictions have been demonstrated to us to be useful most of the time.  Minimal prediction error means that our Bayesian evidence is further supported, and the higher the weight carried by these predictions, the more they will restrict our overall thinking, including how our ontology is structured.

Moving on…

I think that these are but a few of the interesting relationships between language and ontology, and a PP framework helps to put them all together nicely; I just haven't seen this kind of explanatory power and parsimony in any other conceptual framework describing how the brain functions.  This bodes well for the framework, and it's becoming less and less surprising to see it further supported over time by studies in neuroscience, cognition, and psychology, as well as those pertaining to pathologies of the brain, perceptual illusions, etc.  In the next post in this series, I'm going to talk about knowledge and how it can be seen through the lens of PP.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)

I’ve been away from writing for a while because I’ve had some health problems relating to my neck.  A few weeks ago I had double-cervical-disc replacement surgery and so I’ve been unable to write and respond to comments and so forth for a little while.  I’m in the second week following my surgery now and have finally been able to get back to writing, which feels very good given that I’m unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I’ve written one post on this topic a little over a year ago (which can be found here) as I’ve become extremely interested in this topic for the last several years now.

The Predictive Processing (PP) theory of perception shows a lot of promise in terms of finding an overarching schema that can account for everything that the brain seems to do.  While its technical application is to account for the acts of perception and active inference in particular, I think it can be used more broadly to account for other descriptions of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships as viewed through a PP lens in more detail, because I think PP is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let's begin this post-series by first looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can best be described as the set of particular predictions that the brain employs, encompassing perception, desires, action, emotion, etc., which are ultimately mediated and updated in order to reduce prediction errors based on incoming sensory evidence (a process that approximates a Bayesian form of inference).  Perception, then, which is constituted by a subset of all our beliefs (with many of them being implicit or unconscious), is more or less a form of controlled hallucination, in the sense that what we consciously perceive is not the actual sensory evidence itself (not even after processing it), but rather our brain's "best guess" of the causes of that incoming sensory evidence.

Desires can best be described as another subset of one's beliefs, one with the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states, which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically just modeling some kind of causal relationship (or its negation) which is able to manifest itself in a number of highly-weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies but with a lower weight or precision on the predictions, directly corresponding to the degree of belief or disbelief (and so new sensory evidence will more easily sway such a belief).  And just like our perceptions, which are mediated by a number of low and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in either updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, secreting hormones in your body, etc.).

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, with these priors changing over time in response to prediction errors arising from new incoming sensory evidence that fails to be "explained away" by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly-weighted predictions) and therefore need much more contrary sensory evidence to be overcome or modified than weak beliefs, which have relatively low priors (low-weighted predictions).
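A minimal Python sketch of this updating process (the hypothesis and numbers are invented; this is just Bayes' rule applied repeatedly) shows why a strongly held belief takes much more contrary evidence to overturn than a weakly held one:

```python
def update(prior, p_obs_given_h, p_obs_given_not_h):
    """One step of Bayes' rule: P(H | obs) from P(H) and the likelihoods."""
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / evidence

def belief_after(prior, n_contrary):
    # Each observation is four times more likely if the belief is false.
    belief = prior
    for _ in range(n_contrary):
        belief = update(belief, p_obs_given_h=0.2, p_obs_given_not_h=0.8)
    return belief

for prior in (0.6, 0.99):      # a weak belief vs. a strong one
    for n in (1, 3, 6):
        print(f"prior={prior:.2f}, {n} contrary observations "
              f"-> belief={belief_after(prior, n):.3f}")
```

With a prior of 0.6, a single contrary observation drops the belief below 0.3; with a prior of 0.99, the first observation barely dents it, and it takes several more before the belief collapses.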

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of lower level, highly-weighted predictions such as the prediction that eating a piece of what we call an “apple” will most likely result in qualia that accompany the perception of a particular satisfying taste, the lower level prediction that doing so will also cause my perception of hunger to change, and the higher level prediction that it will “give me energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Another prediction or set of predictions is that these expectations will apply to not just one apple but a number of apples (different instances of one type of apple, or different types of apples altogether), and a host of other predictions.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple, which, under a PP lens, could be described as the highly weighted prediction of what it would feel like to find an apple, to reach for it, to grab it, to bite off a piece of it, to chew it, and to swallow it.  If I merely imagine doing such things, then the resulting predictions will necessarily carry such a small weight that they won't be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally, at least) to trick me into thinking that they are actual perceptions (by "actual", I simply mean perceptions that correspond to incoming sensory data).  This low weighting of imagined perceptual predictions thus provides a viable way for us to have an imagination, to distinguish it from perceptions corresponding to incoming sensory data, and to distinguish it from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine that is directly influencing and being influenced by the world that its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian evidence).  In this particular example, the highly-weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, which produces some degree of prediction error until the proper actions have taken place in order to reduce said error.  The only two ways of reducing this prediction error are to change the desire (or eliminate it) to one that no longer involves eating an apple, and/or to perform bodily actions that result in actually eating an apple.

Perhaps if I realize that I don't have any apples in my house, but I do have bananas, then my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically; thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, my predictions may motivate me to avoid eating the banana and instead predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  They may even motivate me to predict actions that lead me to earn the money needed to purchase the apple (if I don't already have it).  To do this, I would be employing a number of predictions having to do with performing actions that lead to obtaining money, using money to purchase goods, etc.
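Here's a toy Python sketch of that superseding process (the desires, weights, and actions are all invented for illustration; real active inference involves continuous, precision-weighted dynamics).  The agent simply selects whichever available action leaves the least weighted prediction error among its desired outcomes:

```python
DESIRES = {                  # predicted (desired) outcomes and their weights
    "hunger satisfied": 1.0,
    "eating an apple":  0.6,
}

ACTIONS = {                  # outcomes each action is predicted to produce
    "eat apple":  {"hunger satisfied", "eating an apple"},
    "eat banana": {"hunger satisfied"},
    "do nothing": set(),
}

def residual_error(action, available):
    """Total weight of desired outcomes the action leaves unfulfilled."""
    if action not in available:
        return float("inf")  # an unavailable action can't reduce error at all
    produced = ACTIONS[action]
    return sum(w for outcome, w in DESIRES.items() if outcome not in produced)

def act(available):
    return min(ACTIONS, key=lambda a: residual_error(a, available))

print(act({"eat apple", "eat banana", "do nothing"}))  # -> eat apple
print(act({"eat banana", "do nothing"}))  # -> eat banana (one desire supersedes)
```

If the weight on "eating an apple" were high enough, we could likewise add a costly "go to the store" action to the available set and watch it win out over the banana.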

This is but a taste of what PP has to offer, and of how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework) are a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and to language as well, and I'd like to explore this relationship in my next post.  That post can be found here.

Is Death Bad For You? A Response to Shelly Kagan

I’ve enjoyed reading and listening to the philosopher Shelly Kagan, both in debate, lectures, and various articles.  One topic he’s well known for is that of death, specifically the fear of death, and trying to understand the details behind, and justification for, the general attitude people have toward the concept of death.  I’ve thought about the fear of death on and off for a long time now, but coming across an article of Kagan’s reignited my interest in the topic.  He wrote an article a few years ago in The Chronicle, where he expounds on some of the ontological puzzles related to the concept of death.  I thought I’d briefly summarize the article’s main points and give a response to it here.

Can Death Be Bad For Us?

Kagan begins with the assumption that the death of a person's body results in the end of that person's existence.  This is certainly a reasonable assumption, as there's no evidence to the contrary, that is, no evidence that persons can exist without a living body.  Simple enough.  Then he asks the question: if death is the end of our existence, then how can being dead be bad for us?  Some would say that death is particularly bad for the survivors of the deceased, since they miss the person who's died and the relationship they once had with that person.  But it seems more complicated than that, because we could likewise have an experience where a cherished friend or family member leaves us and goes somewhere far away, such that we can be confident we'll never see that person again.

Both the death of that person and the alternative of their leaving forever result in the same loss of relationship.  Yet most people would say that if we knew they had died rather than simply left forever, there's more to be sad about, in terms of death being bad for them, not simply bad for us.  And this sadness results from more than simply knowing how they died (the process of death itself), which could have been unpleasant but could also have been entirely benign (such as dying peacefully in one's sleep).  Similarly, Kagan tells us, the prospect of dying can be unpleasant as well; but, he asserts, this only seems to make sense if death itself is bad for us.

Kagan suggests:

Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. That explanation of death’s badness is known as the deprivation account.

While the deprivation account seems plausible, Kagan thinks that accepting it results in a couple of potential problems.  He argues that if something is true, it seems as if there must be some time when it's true.  So when would it be true that death is bad for us?  Not now, he says, because we're not dead now.  Not after we're dead either, because then we no longer exist, and nothing can be bad for a being that no longer exists.  This seems to lead to the conclusion that either death isn't bad for anyone after all or, alternatively, that not all facts are datable.  He gives us another possible example of an undatable fact.  If Kagan shoots "John" today such that John slowly bleeds to death over two days, but Kagan dies tomorrow (before John dies), then after John dies, can we say that Kagan killed John?  If Kagan did kill John, when did he kill him?  Kagan no longer existed when John died, so how can we say that Kagan killed John?

I think we could agree with this and say that while it’s true that Kagan didn’t technically kill John, a trivial response to this supposed conundrum is to say that Kagan’s actions led to John’s death.  This seems to solve that conundrum by working within the constraints of language, while highlighting the fact that when we say someone killed X what we really mean is that someone’s actions led to the death of X, thus allowing us to be consistent with our conceptions of existence, causality, killing, blame, etc.

Existence Requirement, Non-Existential Asymmetry, & Its Implications

In any case, if all facts are datable (or at least facts like these), then we should be able to say when exactly death is bad for us.  Can things only be bad for us when we exist?  If so, this is what Kagan refers to as the existence requirement.  If we don't accept such a requirement — that one must exist in order for things to be bad for us — other problems arise, like being able to say, for example, that non-existence could be bad for someone who has never existed but who could possibly have existed.  This seems to be a pretty strange claim to hold.  So if we refuse to accept that it's a tragedy for possibly-existent people to never come into existence, then we have to accept the existence requirement, which I would contend is the more plausible assumption.  But if we do so, then it seems that we have to accept that death isn't in fact bad for us.

Kagan suggests that we may be able to reinterpret the existence requirement, and he does this by distinguishing between two versions: a modest version, which asserts that something can be bad for you only if you exist at some time or another, and a bold version, which asserts that something can be bad for you only if you exist at the same time as that thing.  Accepting the modest version seems to allow us a way out of the problems posed here, but it too has some counter-intuitive implications.

He illustrates this with another example:

Suppose that somebody’s got a nice long life. He lives 90 years. Now, imagine that, instead, he lives only 50 years. That’s clearly worse for him. And if we accept the modest existence requirement, we can indeed say that, because, after all, whether you live 50 years or 90 years, you did exist at some time or another. So the fact that you lost the 40 years you otherwise would have had is bad for you. But now imagine that instead of living 50 years, the person lives only 10 years. That’s worse still. Imagine he dies after one year. That’s worse still. An hour? Worse still. Finally, imagine I bring it about that he never exists at all. Oh, that’s fine.

He thinks this must be accepted if we accept the modest version of the existence requirement, but how can this be?  If one's life is shortened relative to what they otherwise would have had, this is bad, and it gets progressively worse as the life is hypothetically shortened, until a life span of zero is reached, at which point the person no longer meets the modest existence requirement and thus can't have anything be bad for them.  So it's as if things get infinitely worse as the potential life span approaches the limit of zero, and then, when zero is reached, suddenly become benign and are no longer an issue.

I think a reasonable response to this scenario is to reject the claim that hypothetically shrinking the life span to zero is suddenly no longer an issue.  What seems to be glossed over in this example is the fact that it involves a set of comparisons of one hypothetical life to another hypothetical life (two lives with different non-zero life spans), ending in a final comparison between one hypothetical life and no life at all (a life span of zero).  The example concerns whether something is better or worse in comparison, not whether something is good or bad intrinsically speaking.  The fact that somebody lived for as long as 90 years or for only 10 years isn't necessarily good or bad in itself, but only better or worse in comparison to somebody who's lived for a different length of time.

The Intrinsic Good of Existence & Intuitions On Death

However, I would go further and say that there is an intrinsic good to existing or being alive, and that most people would agree with such a claim (the strong will to live that most of us possess is evidence of our acknowledging such a good).  That's not to say that never having lived is bad, but only that living is good.  If not living is neither good nor bad, but rather a neutral or inconsequential state, then we can hold the position that living is better than not living, even if not living isn't bad at all (after all, it's neutral, neither good nor bad).  Thus we can still maintain our modest existence requirement while consistently holding these views.  We can say that not living is neither good nor bad, that living 10 years is good (and better than not living), that living 50 years is even better, and that living 90 years is better yet (assuming, for the sake of argument, that the quality of life is equivalently good in every year of one's life).  What's important to note here is that not having lived in the first place doesn't involve the loss of a good, because there was never any good to begin with.  On the other hand, extending the life span involves increasing the quantity of the good, by increasing its duration.

Kagan seems to agree overall with the deprivation account of why we believe death is bad for us, but that some puzzles like those he presented still remain.  I think one of the important things to take away from this article is the illustration that we have obvious limitations in the language that we use to describe our ontological conceptions.  These scenarios and our intuitions about them also seem to show that we all generally accept that living or existence is intrinsically good.  It may also highlight the fact that many people intuit that some part of us (such as a soul) continues to exist after death such that death can be bad for us after all (since our post-death “self” would still exist).  While the belief in souls is irrational, it may help to explain some common intuitions about death.

Dying vs. Death, & The Loss of An Intrinsic Value

Remember that Kagan began his article by distinguishing between how one dies, the prospect of dying, and death itself.  He asked us: how can the prospect of dying be bad if death itself (which only obtains when we no longer exist) isn't bad for us?  Well, perhaps we should consider that when people say that death is bad for us, they tend to mean that dying itself is bad for us.  That is to say, the prospect of dying isn't unpleasant because death is bad for us, but rather because dying itself is bad for us.  If dying occurs while we're still alive, resulting in one's eventual loss of life, then dying can be bad for us even if we accept the bold existence requirement — that something can only be bad for us if we exist at the same time as that thing.  So if the "thing" we're referring to is our dying rather than our death, this would be consistent with the deprivation account of death, would allow us to put a date (or time interval) on such an event, and would seem to resolve the aforementioned problems.

As for Kagan’s opening question, when is death bad for us?  If we accept my previous response that dying is what’s bad for us, rather than death, then it would stand to reason that death itself isn’t ever bad for us (or doesn’t have to be), but rather what is bad for us is the loss of life that occurs as we die.  If I had to identify exactly when the “badness” that we’re actually referring to occurs, I suppose I would choose an increment of time before one’s death occurs (with an exclusive upper bound set to the time of death).  If time is quantized, as per quantum mechanics, then that means that the smallest interval of time is one Planck second.  So I would argue that at the very least, the last Planck second of our life (if not a longer interval), marks the event or time interval of our dying.

It is this last interval of time ticking away that is bad for us because it leads to our loss of life, which is a loss of an intrinsic good.  So while I would argue that never having received an intrinsic good in the first place isn’t bad (such as never having lived), the loss of (or the process of losing) an intrinsic good is bad.  So I agree with Kagan that the deprivation account is on the right track, but I also think the problems he’s posed are resolvable by thinking more carefully about the terminology we use when describing these concepts.

Substance Dualism, Interactionism, & Occam's Razor

Recently I got into a discussion with Gordon Hawkes on A Philosopher's Take about the arguments and objections for substance dualism, that is, the position that there are actually two ontological substances that exist in the world: physical substances and mental (or non-physical) substances.  Here's the link to part 1, and part 2, of that author's post series (I recommend taking a look at this group blog and seeing what he and others have written over there on various interesting topics).  Many dualists would identify this supposed second substance as a soul or the like, but I'm not going to delve into that particular detail within this post; I'd prefer to focus on the basics of the arguments presented rather than on those kinds of details.  Before I dive into the topic, I want to mention that Hawkes was by no means making an argument for substance dualism; rather, he was merely pointing out some flaws in common arguments against substance dualism.  With that said, I'm going to get to the heart of the topic, but I will also be providing evidence and arguments against substance dualism.  The primary point raised in the first part of that series was that just because neuroscience is continuously finding correlations between particular physical brain states and particular mental states, these empirical findings don't show that dualism is necessarily false, since some forms of dualism seem to be entirely compatible with them (e.g. interactionist dualism).  So the question ultimately boils down to whether or not mental processes are identical with, or metaphysically supervenient upon, the physical processes of the brain (if so, then substance dualism is necessarily false).

Hawkes talks about how the argument from neuroscience (as it is sometimes called) is fallacious because it is based on the mistaken belief that correlation (between mental states and brain states) is equivalent to identity or supervenience of mental and physical states.  Since this isn't the case, one can't rationally use the neurological correlation to disprove (all forms of) substance dualism.  While I agree with this, that is, that the argument from neuroscience can't be used to disprove (all forms of) substance dualism, the correlation is nevertheless strong evidence that a physical foundation exists for the mind, and it provides evidence against all forms of substance dualism that posit that the mind can exist independently of the physical brain.  At the very least, it shows that the prior probability of minds existing without brains is very low.  This would seem to suggest that any supposed mental substance is necessarily dependent on a physical substance (so disembodied minds would be out of the question for the minimal substance dualist position).  Even more damning for substance dualists, though, is the fact that since the argument from neuroscience suggests that minds can't exist without physical brains, this would mean that prior to brains evolving in any living organisms within our universe, at some point in the past there weren't any minds at all.  This in turn would suggest that the second substance posited by dualists isn't conserved the way the physical substances we know about are conserved (as per the law of conservation of mass and energy).  Rather, this second substance would presumably have had to come into existence ex nihilo once some subset of the universe's physical substances took on a particular configuration (i.e. living organisms that eventually evolved brains complex enough to afford mental experiences/consciousness/properties).  And once all the brains in the universe disappear in the future (a fate guaranteed by the heat death of the universe), this second substance will once again disappear from our universe.

The only way around this (as far as I can tell) is to posit that the supposed mental substance has always existed and/or will always continue to exist, but in an entirely undetectable way, somehow detached from any physical substance (a position that seems hardly defensible given the correlation argument from neuroscience).  Since our prior probabilities for any hypothesis are based on all of our background knowledge, and since the only substance we can be sure exists (a physical substance) has been shown to consistently abide by conservation laws (within the constraints of general relativity and quantum mechanics), it is more plausible that any other ontological substance would likewise be conserved rather than not conserved.  If we had evidence to the contrary, that would change the overall consequent probability; but without such evidence, we only have data points from one ontological substance, and it appears to follow conservation laws.  For this reason alone, it is less likely that a second substance exists at all if it isn't conserved as the physical is.

Beyond that, the argument from neuroscience also provides at least some evidence against interactionism (the idea that the mind and brain can causally interact with each other in both causal directions), and interactionism is something that substance dualists would likely need in order to mount any reasonable defense of their position at all.  To see why this is true, one need only recognize that the correlates of consciousness found within neuroscience consist of instances of physical brain activity that are observed prior to the person's conscious awareness of any experience, intentions, or willed actions produced by said brain activity.  For example, studies have shown that when a person makes a conscious decision to do something (say, to press one of two possible buttons placed in front of them), there are neurological patterns that can be detected prior to their awareness of having made a decision, and these patterns can be used to correctly predict which choice the person will make even before they're aware of having made it.  I would say that this is definitely evidence against interactionism, because we have yet to find any cases of mental experiences occurring prior to the brain activity correlated with them; we've only found evidence of brain activity preceding mental experiences, never the other way around.  If the mind were made of a different substance, existing independently of the physical brain (even if correlated with it) and able to causally interact with it, then it seems reasonable to expect that we should be able to detect and confirm instances of mental processes/experiences occurring prior to correlated changes in physical brain states.  Since this hasn't been found in the plethora of brain studies performed thus far, the prior probability of interactionism being true is exceedingly low.  Additionally, the conservation of mass and energy that we observe in our universe (as well as the laws of physics in general) challenges the possibility of any means of causal energy transfer between a mind and a brain or vice versa, for the only means of causal interaction we've observed thus far is by way of energy/momentum transfer from one physical particle/system to another.  If a mind is non-physical, then by what means can it interact with a brain at all?

The second part of the post series from Hawkes talked about Occam's razor and how it's been applied in arguments against dualism.  Hawkes argues that even though one ontological substance is less complex than, and otherwise preferred over, two substances (when all else is equal), Occam's razor apparently isn't applicable in this case because physicalism has been unable to adequately account for what we call a mind, mental properties, etc.  My rebuttal to this point is that dualism doesn't adequately account for what we call a mind, mental properties, etc., either.  In fact, it offers no more explanatory power than physicalism does, because nobody has proposed how it could do so.  That is, nobody has yet demonstrated (as far as I know) what any possible mechanisms would be for this new substance to instantiate a mind, how this non-physical substance could possibly interact with the physical brain, etc.  Rather, it seems to have been posited out of an argument from ignorance and incredulity, which is a logical fallacy.  Since physicalism hasn't yet provided a satisfactory explanation for what some call the mind and mental properties, dualists suppose that a second ontological substance must exist that does explain or account for them adequately.

Unfortunately, because of the lack of any proposed mechanism by which the second substance could adequately account for the phenomena, one could simply replace the term "second substance" with "magic" and be on the same epistemic footing.  It is therefore an argument from ignorance to presume that a second substance exists for the sole reason that nobody has yet demonstrated how the first substance can fully explain what we call mind and mental phenomena.  Throughout history we've seen unknown phenomena attributed to magic or the supernatural and later accounted for by a physical explanation, which means the prior probability that this will also be the case for the phenomena of the mind and mental properties is extraordinarily high.  As a result, I think that Occam's razor is applicable in this case, because I believe it is always applicable to an argument from ignorance that's compared to an (even incomplete) argument from evidence.  Since physicalism accounts for many aspects of mental phenomena (such as the neuroscience correlations, etc.), dualism needs to be supported by at least some proposed mechanism (one that is falsifiable) in order to nullify the application of Occam's razor.

Those are my thoughts on the topic for now.  I did think that Hawkes made some valid points in his post series, such as the fact that correlation doesn't equal identity or supervenience, and the fact that Occam's razor is only applicable under particular circumstances (such as when both hypotheses explain the phenomena equally well or equally poorly, with one of them positing a more superfluous ontology).  However, I think that overall the arguments and evidence against substance dualism are strong enough to eliminate any reasonable justification for supposing that dualism is true (not that Hawkes was defending dualism, as he made clear at the beginning of his posts), and I also believe that physicalism and dualism explain the phenomena equally well, such that Occam's razor is applicable (since dualism doesn't seem to add any explanatory power to that already provided by physicalism).  So even though correlation doesn't equal identity or supervenience, the arguments and evidence from neuroscience and physics challenge the possibility of any interactionism between the physical and the supposed non-physical substance, and they challenge the existence of the second substance in general (due to its apparent lack of conservation over time, among other reasons).