Irrational Man: An Analysis (Part 3, Chapter 10: Sartre)

In the previous post from this series on William Barrett’s Irrational Man, I delved into some of Martin Heidegger’s work, particularly his philosophy as represented in his most seminal work, Being and Time.  Jean-Paul Sartre was heavily influenced by Heidegger and so it’s fortunate for us that Barrett explores him in the very next chapter.  As such, Sartre will be the person of interest in this post.

To get a better sense of Sartre’s philosophy, we should start by considering what life was like in the French Resistance from 1940-45.  Sartre’s idea of human freedom from an existentialist perspective is made clear in his The Republic of Silence where he describes the mode of human existence for the French during the Nazi German occupation of France:

“…And because of all this we were free.  Because the Nazi venom seeped into our thoughts, every accurate thought was a conquest.  Because an all-powerful police tried to force us to hold our tongues, every word took on the value of a declaration of principles.  Because we were hunted down, every one of our gestures had the weight of a solemn commitment…”

I think we can interpret this as describing the brute authenticity that all the most salient features of our existence can take on in the face of severe oppression.  By living knee-deep in an oppressive atmosphere with little if any libertarian freedom, one may inevitably reevaluate what one considers to be most important; one may exercise a great deal more care in deciding what to think about, what to say, or what to do at any given time; and all of this may serve to make that which is thought about, said, or done far more meaningful and free of the noise and superficiality that usually accompany our day-to-day goings-on.  Putting it this way is not to diminish the heinousness of any kind of violent oppression, but merely to point out that oppression can make one consider every part of one’s existence with a lot more care and concern, taking absolutely nothing for granted while living in such a precarious position.

And when standing in the face of death day after day, not only is the radical contingency of human existence made more clear than ever, but so is that which is intrinsically most valuable to us:

“Exile, captivity, and especially death (which we usually shrink from facing at all in happier days) became for us the habitual objects of our concern…And the choice that each of us made of his life was an authentic choice because it was made face to face with death, because it could always have been expressed in these terms: “Rather death than…”

This is certainly a powerful way of defining authenticity, or at least of setting its upper bound; to say that you’d rather die than abstain from some particular thought, speech, or action is to make a declaration of what’s truly most meaningful to an individual given their current knowledge and the conditions of their own existence; and this is true even if their thoughts or actions are immoral or uninformed, for nothing matters other than the fact that one has done so with an unparalleled amount of care and personal significance.

While all of this was going on, the French government and the bourgeoisie were seemingly powerless, undetermined, and incapable of any significant recourse:

“…’Les salauds’ became a potent term for Sartre in those days-the salauds, the stinkers, the stuffy and self-righteous people congealed in the insincerity of their virtues and vices.  This atmosphere of decay breathes through Sartre’s first novel, Nausea, and it is no accident that the quotation on the flyleaf is from Celine, the poet of the abyss, of the nihilism and disgust of that period.  The nausea in Sartre’s book is the nausea of existence itself; and to those who are ready to use this as an excuse for tossing out the whole of Sartrian philosophy, we may point out that it is better to encounter one’s existence in disgust than never to encounter it at all-as the salaud in his academic or bourgeois or party-leader strait jacket never does.”

The sheer lack of personal commitment and passion plaguing so many would certainly help to reinforce a feeling of disgust or even propagate feelings of nihilism and helplessness.  And the kind of superficiality that Sartre perceived around him was (and of course, still is) in so many ways the norm; it’s the general way that people conduct themselves, and not only socially but even personally in the sense that one’s view of themselves and who they ought to be is heavily and externally imposed from collective norms and expectations.

“The essential freedom, the ultimate and final freedom that cannot be taken from a man, is to say No.  This is the basic premise in Sartre’s view of human freedom: freedom is in its very essence negative, though this negativity is also creative.”

This is an interesting conception, thinking of the essence of freedom as having a negative character, although perhaps we can compare this to or analogize this with the general concept of negative liberty (freedom from interference from others) as opposed to positive liberty (having the means and freedom to achieve one’s goals).  Since there are a number of contingencies that can prevent positive liberty from being realized, negative liberty (which could include the freedom to say ‘No’) seems to be the more fundamental or basic of the two.  It makes sense to think of this as the final freedom too, as Sartre does, since physical force and contingency can take away anyone’s ability to do all else, except to say ‘No’ or at the very least to think ‘No’ (in the event that one is coerced to say ‘Yes’).  In other words, one can have a number of freedoms taken away, but nobody can take away your freedom to desire one outcome over another, even if that desire can never be satisfied.

At this point it’s also worth mentioning that Sartre’s writings seem to reflect conflicting views on what constitutes human freedom.  His views on freedom may have changed over the course of his written works or he may just have had two different conceptions in mind.  On the one hand, in his earlier works, he seemed to refer to freedom in an ontological sense, as a property of human consciousness.  In this conception, he seemed to view freedom as the ability of a conscious agent to choose the attitude they hold and how they react toward the circumstances they find themselves in.  This would mean that everybody, even a prisoner held captive, is free insofar as they have the freedom to choose if they’ll accept or resist their situation.  Later on, Sartre began to focus on material freedom, that is, freedom from coercion.  In any case, both of these conceptions of freedom still seem to resonate with the concept of negative liberty mentioned earlier.  The only distinction would be that ontological freedom is the one form of negative liberty that is truly inalienable and final:

“Where all the avenues of action are blocked for a man, this freedom may seem a tiny and unimportant thing; but it is in fact total and absolute, and Sartre is right to insist upon it as such, for it affords man his final dignity, that of being man.”

Along with this equation of human freedom with consciousness (or at least with conscious will) comes the consideration that consciousness is the only epistemological certainty we have:

“He (Descartes) proposes to reject all beliefs so long as they can in any way be doubted, to resist all temptations to say Yes until his understanding is convinced according to its own light; so he rejects belief in the existence of an external world, of minds other than his own, of his own body, of his memories and sensations.  What he cannot doubt is his own consciousness, for to doubt is to be conscious, and therefore by doubting its existence he would affirm it.”

Sartre takes this Cartesian position to the nth degree, by constraining his interpretation of human beings with his interpretation of Cartesian skepticism:

“But before this certitude shone for him (Descartes), (and even after it, before he passed on to other truths), he was a nothingness, a negativity, existing outside of nature and history, for he had temporarily abolished all belief in a world of bodies and memories.  Thus man cannot be interpreted, Sartre says, as a solid substantial thing existing amid the plenitude of things that make up a world; he is beyond nature because in his negative capability he transcends it.  Man’s freedom is to say No, and this means that he is the being by whom nothingness comes into being.”

To conclude that one is in fact a nothingness, even within the context of Cartesian doubt is, I think, going a bit too far.  While I can agree that consciousness in many ways transcends physical objects and the kind of essence we ascribe to them, it doesn’t transcend existence itself; and what is a nothingness really other than a lack of existence?

And one can think of consciousness as having some attribute of negativity, in the sense that we can think of things and even think of ourselves as being not this or not that; we can think of our desires in terms of what we don’t want; but I think this quality of negation can only serve to differentiate our conscious will (or perhaps ego) from the rest of our experience.  Since our consciousness can’t negate itself, I don’t think that this quality can negate us such that we become an actual nothingness.

“For Sartre there is no unalterable structure of essences or values given prior to man’s own existence.  That existence has meaning, finally, only as the liberty to say No, and by saying No to create a world.  If we remove God from the picture, the liberty which reveals itself in the Cartesian doubt is total and absolute; but thereby also the more anguished, and this anguish is the irreducible destiny and dignity of man.”

And here we see that common element within existentialism: the idea that existence precedes essence, where human beings are thought to have no inherent essence but only an essence that is derived from one’s own existence and one’s own consciousness.  Going back to Sartre’s conception of human freedom, perhaps the idea of saying No to create a world is grounded on the principle of our setting boundaries or limits to the world we make for ourselves; an idea that I can certainly get behind.

However, even if we remove God from the equation, which is certainly justified from the perspective of Cartesian skepticism (to say nothing of the lack of empirical evidence to support such a belief), we still can’t get around our having some essential human character or qualities (even if no God existed to conceive of such an essence).  I agree that, of the possible lives that you or I can have, it shouldn’t be up to others to choose which life that should be, nor what its ultimate meaning or purpose is.  But if we agree that there are a finite number of possible lives to choose from (individually and collectively as a species), given our constraints as finite beings (a thoroughly existential claim at that), then I think we can say that we do have an inherent essence that is defined by these biological and physical limitations.

So I do share Sartre’s view that our lives shouldn’t be defined by others, by social norms, nor defined by qualities found in only some of the possible lives one has access to; but it needs to be said that there are still only a finite number of possible life trajectories which will necessarily exclude some of the possibilities given to other types of organisms, and which will exclude any options that are not physically or logically possible.  In other words, I think human beings do have an inherent essence that is defined to some degree by the possibility space we as a species exist within; we just tend to think of essence as something that has to be well-defined, like some short list of properties or functions (which I think is too limited a view).

“Thus Sartre ends by allotting to man the kind of freedom that Descartes has ascribed only to God.  It is, he says, the freedom Descartes secretly would have given to man had he not been limited by the theological convictions of his time and place.”

And this is certainly a valid point.  What’s interesting to me with respect to the whole Cartesian experiment is that Descartes began by doubting the external world, and only by introducing an ad hoc conception of a deity (a benevolent one at that) could he trust that his senses weren’t deceiving him.  This is despite the fact that this belief in God would be just as likely to be a deception as any other belief (a problem that Descartes attempted to solve through invalid circular reasoning).

More to the point however is the fact that we wouldn’t be able to tell the difference between a “real” external world and one that is imagined or illusory (unbeknownst to us) anyway, and so we might as well simply stick with the fewest assumptions that yield the best predictions of future experiences.  And this includes the assumption of our having some degree of human freedom that directs our conscious will and desires; a view of our having some control over what trajectory we take in life:

“He is not subordinate to a realm of essences: rather, He creates essences and causes them to be what they are.  Hence such a God transcends the laws of logic and mathematics.  As His (God’s) existence precedes all essences, so man’s existence precedes his essence: he exists, and out of the free project which his existence can be he makes himself what he is.”

It’s worth considering the psychological motivations for positing and defining a God that has qualities reminiscent of human creativity and teleological intentionality; that is to say, we often project our human capacity for creation and goal-directedness onto some idealized conception of a being that accounts for our existence in the same way that we see ourselves as accounting for the existence of any and all human constructs, be they watches, paintings, or entire cities.  We see ourselves giving purposes and functional descriptions to the things we make, and then we can’t help but ask what our own purpose may be; and in asking, we feel inclined to assume that it must be someone other than ourselves who gives us this purpose, finally arriving at a God-concept of some kind.  And lo and behold, throughout history the gods we see in all the religions have been remarkably human, with their anger, jealousy, pettiness, impulsiveness, and retributive sense of justice, illustrating that gods are made in man’s image rather than the contrary.  But this kind of thinking only diminishes our own degree of personal freedom, bestowing it on a concept instead (whether on “society” or on a “deity”), and thus abstracts away our own individuality and responsibility.

Despite the God concept permeating much of human history, we eventually came to challenge it, most forcefully in the wake of the Enlightenment, and many of us thereby discovered the freedom we had all along:

“When God dies, man takes the place of God.  Such had been the prophecy of Dostoevski and Nietzsche, and Sartre on this point is their heir.  The difference, however, is that Dostoevski and Nietzsche were frenzied prophets, whereas Sartre advances his view with all the lucidity of Cartesian reason and advances it, moreover, as a basis for humanitarian and democratic social action.  To put man in the place of God may seem, to traditionalists, an unspeakable piece of diabolism; but in Sartre’s case it is done by a thinker who, to judge from his writings, is a man of overwhelming good will and generosity.”

And this idea of human beings taking the place of God is really nothing more than the realization that we had projected qualities that were ours from the very beginning.  This may make some people uncomfortable because they think that by eliminating the God concept we are somehow seeing ourselves as gods; but we still have the same limitations that human beings have always had, with our limited knowledge and our existing in a world with many forces out of our control.  But now we have an added sense of freedom that further empowers us as individuals to do the most we can with our lives and to give our lives meaning on our own terms, rather than having it externally imposed on us (see the previous post on Heidegger, and the concept of Das Man or the “They” self).  Sartre’s appreciation and promotion of this idea is admirable to say the least, as it seeks to preferentially value human beings over ideas, to value consciousness over abstractions, and to maximize each person’s having a voice that they can call their own.

1.  Being-for-itself and Being-in-itself

“Being-for-itself (coextensive with the realm of consciousness) is perpetually beyond itself.  Our thought goes beyond itself, toward tomorrow or yesterday, and toward the outer edges of the world.  Human existence is thus a perpetual self-transcendence: in existing we are always beyond ourselves.  Consequently we never possess our being as we possess a thing.”

This definitely resonates with Heidegger’s conception of temporality, where our understanding of our world is constrained by our past experiences and is always projecting into the future in terms of our always wanting to become a certain kind of person (even if this vision of our future self changes over time).  And here, Sartre’s conception of Being-for-itself is meant to be contrasted with his concept of Being-in-itself; where the former represents consciousness and the latter non-conscious or unconscious material objects.

As mentioned in my last post, I agree with there being a temporally transcendent quality of consciousness, and I think it can be best explained by the fact that our perception of the world is operating under a schema of prediction: our brains are always trying to predict what will happen next in terms of our sensations and these predictions are what we actually experience (as opposed to sensory information itself), where we experience a kind of controlled hallucination that evolves its modeling in response to any prediction error encountered.  We try to make these perceptual predictions come true and reduce the prediction error by either changing our mental models of the world’s causal structure to better account for the incoming sensory information or through a series of actions involving our manipulating the world in some way or another.

“This notion of the For-itself may seem obscure, but we encounter it on the most ordinary occasions.  I have been to a party; I come away, and with a momentary pang of sadness I say, “I am not myself.”  It is necessary to take this proposition quite literally as something that only man can say of himself, because only man can say it to himself.  I have the feeling of coming to myself after having lost or mislaid my being momentarily in a social encounter that estranged me from myself.  This is the first and immediate level on which the term yields its meaning.”

And here’s where the For-itself concept shows the distinction between our individual self (what Heidegger would likely call our authentic self) and the externalized self that is more or less a product of social norms and expectations stemming from the human collective.

“But the next and deeper level of meaning occurs when the feeling of sadness leads me to think in a spirit of self-reproach that I am not myself in a still more fundamental sense: I have not realized so many of the plans or projects that make up my being; I am not myself because I do not measure up to myself. “

So aside from what society expects of us, we also have our own expectations and goals, many of which never come to fruition.  Sartre sees this as a kind of comparison where we’re evaluating our current view of ourselves against the more idealized view of ourselves that we’re constantly striving for.  And we can see this anytime we think or say something like “I shouldn’t have done that because I’m better than that…”  We’re obviously not better than ourselves, but it’s because we’re always comparing our perspective to the ideal model we hold of ourselves that we in some sense transcend any present sense of self, and this view allows us to better understand Sartre’s idea that we’re always existing beyond ourselves.

Barrett describes Sartre’s view in more detail:

“Beneath this level too there is still another and deeper meaning, rooted in the very nature of my being: I am not myself, and I can never be myself, because my being stretching out beyond itself at any given moment exceeds itself.  I am always simultaneously more and less than I am.  Herein lies the fundamental uneasiness, or anxiety, of the human condition, for Sartre.  Because we are perpetually flitting beyond ourselves, or falling behind our possibilities, we seek to ground our existence, to make it more secure.”

We might even say that we conjure up a kind of essence of who we are because we’re trying to solidify our existence such that we’re more or less like the objects we interact with in our world.  And we do this through our imagination by projecting our identity into the future, effectively defining ourselves in terms of the person we want to become rather than the person we are now.  However, our imagination is both a blessing and a curse, for it is through imagination that we express our creativity, simulating new worlds for ourselves that we can choose from; but, our imagination is also a means of comparing our current state of affairs to some ideal model, in many cases showing us what we don’t have (yet want) and showing us a number of lost opportunities which can further constrain our future space of possibilities.  Whether we love it or hate it, our capacity for imagination is both the cause and the cure for our suffering; and it is a capacity that is fundamental to our existence as human beings.

“The For-itself struggles to become the In-itself, to attain the rocklike and unshakable solidity of a thing.  But this it can never do so long as it is conscious and alive.  Man is doomed to the radical insecurity and contingency of his being; for without it he would not be man but merely a thing and would not have the human capacity for transcendence of his given situation.”

It seems then that our projection into the future, equating our self-hood with some preconceived future self, not only serves as an attempt to objectify our subjectivity, but it may also be psychologically motivated by the uncertainty that we recognize in both our short-term and long-term lives.  We feel uncomfortable and anxious about the contingency of our lives and the sheer number of possible future life trajectories, and so we conceptualize one path as if it were a fixed object and simply refer to it as “me”.

“With enormous ingenuity and virtuosity Sartre interweaves these two notions-Being-in-itself and Being-for-itself-to elucidate the complexities of human psychology.”

Sartre’s conceptions of Being-in-itself and Being-for-itself are largely grounded in the distinction between consciousness and that which is not conscious.  On the surface, this looks like merely another way of expressing the distinction between mind and matter, but the distinction is far more nuanced and complex.  Being-for-itself is defined by its relation to Being-in-itself, where, for Sartre, Being-for-itself is, among other things, in a constant state of wanting synthesis with Being-in-itself, despite the fact that Being-for-itself involves a kind of knowledge that it is not in-itself.  Being-for-itself is in a constant state of incompleteness, where a lack of some kind permeates its mode of being; it is constantly aware of what it is not, and whatever it may be, lacking any essential structure to build from, is created out of nothingness.  In other words, Being-for-itself creates its own being on a blank canvas.

As I mentioned above, I don’t particularly agree with Sartre’s conception here, that we as conscious human beings create our being or our essence with a blank canvas, as there are (to stick with the artistic analogy) a limited number of ways that the canvas can be “painted”, there are a limited number of existential “colors” to work with, and while the future may be undetermined or unpredictable to some degree, this doesn’t mean that any possible future that comes to mind can come into being.  As human beings that have evolved from other forms of life on this planet, we have a number of innate or biologically-constrained predispositions including: our instincts and basic behavioral drives, the kinds of conditioning that actually work to modify our behavior, and a psychology that can only thrive within a finite range of behavioral and environmental conditions.

I see this “blank canvas” analogy as just another version of the blank slate or tabula rasa model of human knowledge within the social sciences.  But this model has been falsified with findings in behavioral genetics, psychology, and neurobiology, where various forms of evidence have shown that human beings are pre-wired for social interaction, and have genetically dependent capacities such as intelligence, memory, and reason.  And even taking into account the complicated relation between genes and environment, where the expression of the former is dependent on the latter, we still have environmental conditions that are finite, contingent, externally imposed, and which drastically shape our behavior whether we realize it or not.

On top of this, as I’ve argued elsewhere, we don’t have a libertarian form of free will either, which means that if we could go back in time to some set of initial conditions (where we had made a conscious choice of some kind, and perhaps a significant life altering choice at that), we wouldn’t be able to choose an alternative course of action, barring any quantum randomness.  This doesn’t mean that we can’t still make use of some compatibilist conceptions of free will, nor does it mean that we shouldn’t be held accountable for our actions, as these are all important for maintaining effective behavioral conditioning; but it does mean that we don’t have the kind of blank canvas that Sartre had envisioned.

The role that nothingness plays in Sartre’s philosophy goes beyond the incompleteness resulting from a self that is in some sense defined by our projection into the future.

“The Self, indeed, is in Sartre’s treatment, as in Buddhism, a bubble, and a bubble has nothing at its center. (but not nihilism)…For Sartre, on the other hand, the nothingness of the Self is the basis for the will to action: the bubble is empty and will collapse, and so what is left us but the energy and passion to spin that bubble out?”

I can certainly see some parallels between Buddhism and Sartre’s conception of the Self, where, for example, the fact that we lack a persistent identity, instead having a mode of being that’s dynamic and based on a projection into an uncertain future, is analogous to or at least consistent with the Buddhist conception of the Self being a kind of illusion.  I also tend to agree with the basic idea that we don’t have any ultimate control or ownership over our thoughts and feelings even if we feel like we do from time to time, and our experiences seem to be just one infinitesimal, fleeting moment after another.  Our attention and behavior are absorbed and shaped by the world around us, and it’s only when we fixate or identify ourselves as a concept, a thought, or a feeling, or a certain combination of these elements, that we conjure up a seemingly objective and essential sense of Self.  While Sartre’s philosophy appears to grant us a greater level of free will and human freedom than that of Buddhist thought, it’s this essential self, this eternal self or identity, interrelated with the concept of a permanent soul that both Sartre and Buddhism reject.

“Man’s existence is absurd in the midst of a cosmos that knows him not; the only meaning he can give himself is through the free project that he launches out of his own nothingness.”

This is one of the core ideas in existentialism, and also in Sartre’s own work, that I most appreciate: the idea that we (have to) give our own lives their meaning and purpose, as opposed to their being externally imposed on us from a religion, a deity (whether real or imagined), a culture, or any other source aside from ourselves, if we’re to avoid submitting the core of our being to some kind of authoritarian tyranny.

Human beings were not designed for some particular purpose, aside from natural selection having “designed” us for survival, and perhaps a thermodynamic argument could be made that all life serves the “purpose” of generating more entropy than if our planet had no life at all (which means that life actually speeds up the inevitable heat death of the universe).  But, because our intelligence and the kind of consciousness we’ve gained through evolution grants us a kind of open-ended creativity and imagination, with technology and cultural or memetic evolution effectively superseding our “selfish genes”, we no longer have to live with survival as our primary directive.  We now have the power to realize that our existence transcends our evolutionary past, and we have the ability to define ourselves based on our primary goals in life, whatever they may be, given the constraints of the world we find ourselves born into or embedded within at the present moment.  And it’s this transcendent quality that we find in a lot of Sartre’s work.

“He (Sartre) misses the very root of all of Heidegger’s thinking, which is Being itself.  There is, in Sartre, Being-for-itself and Being-in-itself but there is no Being…Sartre has advanced as the fundamental thesis of his Existentialism the proposition that existence precedes essence.  This thesis is true for Heidegger as well, in the historical, social, and biographical sense that man comes into existence and makes himself to be what he is.  But for Heidegger another proposition is even more basic than this: namely, Being precedes existence.”

In contrast with Sartre then, with Heidegger we see an ontological requirement for Being itself, prior to even having the possibility for Sartre’s distinction between Being-for-itself and Being-in-itself.  That is to say, these kinds of distinctions would, for Heidegger at least, require a more basic field of Being within which one may be able to posit distinctive types of Being.  It would seem that some ultimate region of Being would be necessary in order for the transcendent quality of consciousness to manifest in any way whatsoever:

“To be sure, Sartre has gone a considerable step beyond Descartes by making the essence of human consciousness to be transcendence: that is, to be conscious is, immediately and as such, to point beyond that isolated act of consciousness and therefore to be beyond or above it…But this step forward by Sartre is not so considerable if the transcending subject has nowhere to transcend himself: if there is not an open field or region of Being in which the fateful dualism of subject and object ceases to be.”

One might also wonder how this all fits in with the question of how a conscious subject can ever truly know an object in its entirety.  For example, Kant believed that any object in itself, which he referred to more generally as the noumenon (as opposed to the phenomenon, the object as we perceive it), was inherently unknowable.  Nietzsche thought that it didn’t matter whether or not we could know the object in itself, if such an inaccessible realm of knowledge pertaining to any object even existed in the first place.  Rather, he thought the only thing that mattered was our ability to master the object, to manipulate it and use it according to our own will (the will to power).  For Sartre, the only thing that really mattered was the will to action, which was primarily a will to some form of revolutionary action, so he doesn’t really address the problem of knowledge with respect to the subject truly knowing some object or other (nor does he specify whether or not there even is any problem, at least within his seminal work Being and Nothingness).

Heidegger was less concerned with this issue of knowledge and was more concerned with grounding the possibility of there being a subject or object in the first place:

“For what Heidegger proposes is a more basic question than that of Descartes and Kant: namely, how is it possible for the subject to be? and for the object to be?  And his answer is: Because both stand out in the truth, or un-hiddenness, of Being.  This notion of truth of Being is absent from the philosophy of Sartre; indeed, nowhere in his vast Being and Nothingness does he deal with the problem of truth in a radical and existential way: so far as he understands truth at all, he takes it in the ordinary intellectualistic sense that has been traditional with non-existential philosophers.  In the end (as well as at his very beginning) Sartre turns out thus to be a Cartesian rationalist-one, to be sure, whose material is impassioned and existential, but for all that not any less a Cartesian in his ultimate dualism between the For-itself and the In-itself.”

I don’t think it’s fair to label Sartre a Cartesian rationalist as Barrett does here, since Sartre seems to actually depart from Descartes’ form of rationalism when it comes to evaluating the overall structure of consciousness.  And there doesn’t seem to be any ultimate dualism between Being-for-itself and Being-in-itself either, if the former is instantiated or experienced as a kind of negation of the latter.  Perhaps if Sartre had used the terms Being-in-itself and not-Being-in-itself, this would be clearer.  In any case, it’s not as if these two instantiations of being need be separate substances, especially given what Sartre says in Being and Nothingness:

“The ontological error of Cartesian rationalism is not to have seen that if the absolute is defined by the primacy of existence over essence, it cannot be conceived as a substance.”

So at the very least, it seems mistaken to say that Sartre is a substance dualist given his position on existence preceding essence, even if he may be adequately described as some kind of property dualist.

“But, again like every humanism, it leaves unasked the question: What is the root of man?  In this search for roots for man-a search that has, as we have seen, absorbed thinkers and caused the malaise of poets for the last hundred and fifty years-Sartre does not participate.  He leaves man rootless.”

Sartre may not have explicitly described any kind of truth for man that wasn’t a truth of the intellect, unlike Heidegger with his conception of the truth of Being, but Sartre did describe a pre-reflective aspect of consciousness that doesn’t seem to be identified as either the subject or anything about the subject.  There’s a mysterious aspect of consciousness that’s simply hard to pin down, and it may be the kind of ethereal concept one could equate with some pre-propositional realm of Being.  This may not be enough to say that Sartre, in the end, leaves us rooted (so to speak), but it seems to be a concept that implicitly pulls his philosophy in that direction.

2. Literature as a Mode of Action

Sartre places a lot of importance on literature as a means of expressing the author’s freedom in a way that depends on an interaction with the freedom of the reader.

“…the perfect example of what he believes a writer should do and what he himself tries to do in his own later fiction: that is, grapple with the problems of man in his time and milieu.”

As mentioned earlier in this post, Sartre’s personal experience during the time of the French Resistance colored his written works with an imperative to assert humanity’s radical freedom, and to do so in relation to the historical developments that affected the masses, the working class, and the oppressed.

“It is always to the idea, and particularly the idea as it leads to social action, that Sartre responds.  Hence he cannot do justice, either in his critical theory or in his actual practice of literary criticism, to poetry, which is precisely that form of human expression in which the poet-and the reader who would enter the poet’s world-must let Being be, to use Heidegger’s phrase, and not attempt to coerce it by the will to action or the will to intellectualization.  The absence of the poet in Sartre, as a literary man, is thus another evidence of what, on the philosophical level, leads to a deficiency in his theory of Being.”

And though Sartre may not be any kind of poet, that doesn’t mean his work lacks some of the subjective, emotionally charged character one finds in poetry.  But Barrett’s point is well taken: Sartre’s works don’t have the aesthetic quality or irrationality of most popular poets, nor do they make as much use of metaphor.

“In discharging his freedom man also wills to accept the responsibility of it, thus becoming heavy with his own guilt.  Conscience, Heidegger has said, is the will to be guilty-that is, to accept the guilt that we know will be ours whatever course of action we take.”

Freedom and responsibility clearly go hand in hand for Sartre (and for Heidegger as well), and this very basic idea makes sense from the perspective that if one believes themselves to be free, then they must necessarily accept that they own their actions insofar as they were done freely.  As for conscience, rather than Heidegger’s conception of the will to be guilty, I’d prefer to describe it as simply a manifestation of our perception of both responsibility and duty, although one can also aptly describe conscience as a product of evolution that tends to produce in us a more pro-social set of behaviors.

The tendency of many to try to absolve themselves of their freedom by submitting to the conception of self that society or others draw up for them is addressed in Sartre’s 1944 play No Exit.  In this play, the three main characters find themselves in Hell, punished in a manner analogous to Dante’s Inferno, each taunted by a symbol representing the inauthentic or superficial way they lived their lives:

“Having practiced “bad faith” in life-which, in Sartre’s terms, is the surrendering of one’s human liberty in order to possess, or try to possess, one’s being as a thing-the three characters now have what they had sought to surrender themselves to.  Having died, they cannot change anything in their past lives, which are exactly what they are, no more and no less, just like the static being of things…But this is exactly what they long for in life-to lose their own subjective being by identifying themselves with what they were in the eyes of other people.”

By defining ourselves in any essential or fixed way, for example by identifying with the unchangeable past (all of whose features are indeed fixed), we lose the freedom we’d have if our identity were directed toward the uncertain future and the possible goals and projects contained therein.  In No Exit, the characters defined themselves based on the desires and expectations of society, or at least of those members of society whom these patrons of Hell deemed high in status or social clout.

And going back to the relation between freedom and responsibility, we can see how one can try to rid themselves of responsibility by holding the attitude that they have no freedom regarding who they are or what they do, that they must do what they do (perhaps because society or others “say so,” though there may be other reasons).  This can be a slippery thing to contemplate, however, when we consider different conceptions of free will.

While it’s technically true that we don’t have any kind of causa sui free will (since this is logically impossible in a deterministic or random/indeterministic world), it’s still useful to conceive of a kind of freedom of the will that distinguishes among animals or people with varying degrees of autonomy.  This is similar to the conception of free will that a court of law uses to distinguish cases of coercion or unsound mind from conscious intentions formed with a reasonable capacity for evaluating their legality and consequences.

It’s also important to account for the fact that we can’t predict all of our own actions (nor the actions of others), and so there’s also a folk psychological concept of free will implied by this uncertainty.  Most of these conceptions are psychologically useful: they play a positive role in maintaining the efficacy of our punishment and reward systems, and overall they resonate with how we live our lives as human beings.

In general then, I think that Sartre’s tying of freedom and responsibility together jibes with our folk psychological conceptions of free will, and it helps promote a functional and motivating way to live one’s life.

“As a writer Sartre is always the impassioned rhetorician of the idea; and the rhetorician, no matter how great and how eloquent his rhetoric, never has the full being of the artist.  If Sartre were really a poet and an artist, we would have from him a different philosophy…”

Although Barrett seems to be pigeonholing artists to some degree here (which isn’t entirely fair to Sartre), it’s true that Sartre’s written works aren’t artistic in any traditional or classical sense.  Even so, I think it’s fair to say that postmodern art has benefited from and been influenced by the works of many great thinkers, including Sartre.  And the task of the artist is often to inform us of our own human condition in its particular time and place, including many aspects of that condition that aren’t typically talked about or explicitly present in our stream of consciousness.  Sartre’s works, both written and performed, did just that; they informed us of our modern condition and of a number of interesting perspectives on our own human psychology, and they did so in a thoroughly existential way.  Sartre was an artist then, in my mind at least.

3. Existential Psychology

“Behind all Sartre’s intellectual dialectic we perceive that the In-itself is for him the archetype of nature: excessive, fruitful, blooming nature-the woman, the female.  The For-itself, by contrast, is for Sartre the masculine aspects of human psychology: it is that in virtue of which man chooses himself in his radical liberty, makes projects, and thereby gives his life what strictly human meaning it has.”

And here some of the sexist nuances, both explicit and implicit in the minds of Sartre and his contemporaries, start to become apparent.  The idea that nature has more of a female character is the more justifiable of the two, since females are the only sex giving birth to new life and continuing the cycle; but the idea that male traits subsume the human qualities of choice, freedom, and worthwhile projects is one resulting from historical dominance hierarchies, which were traditionally patriarchal or androcentric.  Modern human life has shown this to be no longer applicable to humans generally.  In any case, we can still use traditional archetypes that rely on this kind of sexual categorization, as long as we’re careful not to take them so seriously that they feed an essentialist view of the human sexes and of what our roles in society ought to be.

“The essence of man…lies not in the Oedipus complex (as Freud held) nor in the inferiority complex (as Adler maintained); it lies rather in the radical liberty of man’s existence by which he chooses himself and so makes himself what he is.”

And this of course is just a reiteration of the Sartrean adage that existence precedes essence, but it’s interesting to see that Sartre seems to dismiss much of modern psychology, evolutionary psychology, and any role for nature or innate predispositions in human beings.  One may also wonder how, in Sartre’s mind, humans can be free of traditional essentialism while he himself assumes roles or archetypes that are essentially male and female.  I don’t think we can avoid essentialism in its entirety, even if the crux of Sartre’s existential adage is valid.  And I don’t think Sartre avoids it either, even if he tries to.

“Man is not to be seen as the passive plaything of unconscious forces, which determine what he is to be.  In fact, Sartre denies the existence of an unconscious mind altogether; wherever the mind manifests itself, he holds, it is conscious.”

I’m not sure if Sartre’s denial of an unconscious mind is just a play on semantics (i.e. he’s defining “mind” exclusively as the neurological activity that directly leads to consciousness) or if he truly rejected the idea that there were any forces motivating our behavior that weren’t explicit in our consciousness.  If the latter is true, then how does Sartre account for marketing strategies, cognitive biases, denial, moral intuition, and a host of other things that rely on or operate under psychological predispositions that are largely unknown to the person who possesses them?  The totality of our behavior simply can’t be accounted for without some form of neurological processing happening under the radar (so to speak), which serves some psychological purpose.

“A human personality or human life is not to be understood in terms of some hypothetical unconscious at work behind the scenes and pulling all the wires that manipulate the puppet of consciousness.  A man is his life, says Sartre; which means that he is nothing more nor less than the totality of acts that make up that life.”

This is interesting when viewed through the lens of the future, our intended projects, and the kind of person we may strive to be.  It seems that, for the excerpt above to be correct according to Sartre’s (and perhaps Heidegger’s) own reasoning, we’d have to include our intentions and goals as well; we can’t simply look at the facticity of our past, but must also look at how the self transcends into the future.  And although Sartre or others may not want to identify themselves with some of the forces operating in their unconscious, the fact of the matter is that if we want to predict our own behavior and that of others as effectively as possible, then we have no choice but to make use of some concept of an unconscious mind, unconscious psychology, etc.  It may not be the primary way we ought to think of ourselves, but it’s an important part of any viable model of human behavior.

Analogously, one may say that they don’t want to be identified with their brain (“I am not my brain!”), and yet if you asked them if they’d rather have a brain transplant or a body transplant (say, everything below the neck), they would undoubtedly choose the latter.  Why might this be so?  An obvious answer is because their entire personality, all that they value, believe, and know, is all made manifest in the structure of their brain.  Now we may see a brain on a table and have difficulty equating it with a person, and yet we can have an entire human body, only lacking a brain, and we have no person at all.  The only additional thing needed for personhood is that brain lying on the table, assuming it is a brain inherently capable of instantiating a personality, consciousness, etc.

Another aspect of Sartrean psychology is the conception of how the Other (some other conscious self) views us given the constraints of consciousness and our ability to see another person from that person’s own subjective point of view:

“This relation to the Other is one of the most sensational and best-known aspects of Sartre’s psychology.  To the other person, who look at me from the outside, I seem an object, a thing; my subjectivity with its inner freedom escapes his gaze.  Hence his tendency is always to convert me into the object he sees.  The gaze of the Other penetrates to the depths of my existence, freezes and congeals it.  It is this, according to Sartre, that turns love and particularly sexual love into a perpetual tension and indeed warfare.  The lover wishes to possess the beloved, but the freedom of the beloved (which is his or her human essence) cannot be possessed; hence, the lover tends to reduce the beloved to an object for the sake of possessing it.”

I think this view has merit and has a lot to do with how our brains make simplistic or heuristic models of the entities and causal structure we perceive around us.  Although we can recognize that some of those other entities are subjective beings such as ourselves, living with the same kinds of intrinsic conscious experiences that we do, it’s often easier and more automatic to treat them as if they were an object.  We can’t directly experience another person’s subjectivity, which is what makes it so easy to escape our attention and consideration.  More to the point though, Sartre was claiming how this tendency to objectify another penetrates many facets of our lives, introducing a source of conflict with our relations to one another.  Ironically, this could also be described as something we do unconsciously, where we don’t explicitly say to ourselves “this person is an object,” but rather we may explicitly see them as a person and yet treat them as an object in order to serve some implicit psychological goal (e.g. to feel that you possess your significant other).

“He is right to make the liberty of choice, which is the liberty of a conscious action, total and absolute, no matter how small the area of our power: in choosing, I have to say No somewhere, and this No, which is total and totally exclusive of other alternatives, is dreadful; but only by shutting myself up in it is any resoluteness of action possible.”

Referring back to the beginning of this post, regarding the liberty of choice (in particular a kind of negative liberty) as the ultimate form of human freedom, we can also describe the choices we make in terms of the alternative choices we had to say no to first.  We can see our ability to say no as a means of creating a world for ourselves; a world with boundaries delineating what is and isn’t desired.  And the making of these choices, the effect that each choice has, and the world we create out of them are things that we and we alone own.  Understandably then, once this freedom and responsibility are recognized, they lead to our experiencing a kind of dread, anxiety, and perhaps a feeling of being overwhelmed by the sheer space of possibilities and thus overwhelmed by the number of possibilities that we have to willfully reject:

“A friend of mine, a very intelligent and sensitive man, was over a long period in the grip of a neurosis that took the form of indecision in the face of almost every occasion of life; sitting in a restaurant, he could not look at the printed menu to choose his lunch without seeing the abyss of the negative open before his eyes, on the page, and so falling into a sweat…(this story) confirms Sartre’s analysis of freedom, for only because freedom is what he says it is could this man have been frightened by it and have retreated into the anxiety of indecision.”

This story also reminded me of the different perspectives that can result depending on whether one is a maximizer or a satisficer: the maximizer tries to exhaustively analyze every decision, hoping to maximize the benefits brought about by careful decision-making; the satisficer is much more easily pleased, willing to decide fairly quickly, and worries not about whether a decision was the best possible but simply about whether it was good enough.  On average, maximizers are also far less happy despite the care they take in making decisions, because they fret over whether the decision they landed on was really best after all; and they tend to enjoy life less than satisficers because their constant worrying causes them to miss many experiences and the natural flow of those experiences.  The maximizer concept fits well with Sartre’s conception of the anxiety of choice brought about by the void and negation implicit in decision-making.
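For readers who think algorithmically, the two strategies above differ in a simple, concrete way: the maximizer scans every option before committing, while the satisficer stops at the first option that clears a threshold.  Here’s a minimal toy sketch of that contrast (the function names, menu, and appeal scores are all made up for illustration; this is not anything from Barrett or Sartre):

```python
# A toy contrast between two decision strategies (illustrative only):
# the maximizer evaluates every option and picks the best;
# the satisficer takes the first option that is "good enough".

def maximize(options, score):
    """Scan all options and return the highest-scoring one."""
    return max(options, key=score)

def satisfice(options, score, threshold):
    """Return the first option whose score meets the threshold."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing was good enough

menu = ["soup", "salad", "steak", "pasta"]
appeal = {"soup": 3, "salad": 5, "steak": 9, "pasta": 7}.get

print(maximize(menu, appeal))      # considers all four dishes -> steak
print(satisfice(menu, appeal, 5))  # stops at the first acceptable dish -> salad
```

Note that the satisficer never even looks at the steak: once the salad clears the bar, deliberation ends, which is exactly what spares it the menu-as-abyss anxiety described in Barrett’s anecdote.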

I for one try to be a satisficer, even if I occasionally find myself trying to maximize my decision making, and I do this for the same reason that I try to be an optimist rather than a pessimist: people live happier and more satisfying lives if they maintain a positive attitude and aren’t overly concerned with every choice they make.  We don’t want to make poor decisions in haste or without any care, but we also don’t want to invest too many of our psychological resources in making those decisions; we may end up at the end of our lives realizing that we’ve missed out on a good life.

“…the example points up also where Sartre’s theory is decidedly lacking: it does not show us the kind of objects in relation to which our human subjectivity can define itself in a free choice that is meaningful and not neurotic.  This is so because Sartre’s doctrine of liberty was developed out of the experience of extreme situations: the victim says to his totalitarian oppressor, No, even if you kill me; and he shuts himself up in this No and will not be shaken from it.”

I agree with Barrett entirely here.  Much of Sartre’s philosophy was developed out of the experience of oppression under the threat of violence, and this quality makes it harder to apply to an everyday life that isn’t embroiled in tyranny or fascism.

“…But he who shuts himself up in the No can be demoniacal, as Kierkegaard pointed out; he can say No against himself, against his own nature.  Sartre’s doctrine of freedom does not really comprehend the concrete man who is an undivided totality of body and mind, at once, and without division, both In-itself and For-itself; but rather an isolated aspect of this total condition, the aspect of man always at the margin of his existence.”

I think the biggest element missing from Sartre’s conception of freedom or from his philosophy generally is the evolutionary and biologically grounded aspects of our psychology, the facts pertaining to our innate impulses and predispositions, our cognitive biases and other unconscious processes (which he outright denied the existence of).  This makes his specific formula of existence preceding essence far less tenable and realistic.  Barrett agrees with some of this reasoning as well when he says:

“Because Sartre’s psychology recognizes only the conscious, it cannot comprehend a form of freedom that operates in that zone of the human personality where conscious and unconscious flow into each other.  Being limited to the conscious, it inevitably becomes an ego psychology; hence freedom is understood only as the resolute project of the conscious ego.”

Indeed, the human being as a whole includes both the conscious and unconscious aspects of ourselves.  The subjective experience that defines us is a product of both sides of our psyche, not to mention the interplay of the psyche as a whole with the world around us that we’re intimately connected to (think of Heidegger’s field of Being).

The final point I’d like to bring up regarding Sartre concerns his atheism:

“It has been remarked that Kierkegaard’s statement of the religious position is so severe that it has turned many people who thought themselves religious to atheism.  Analogously, Sartre’s view of atheism is so stark and bleak that it seems to turn many people toward religion.  This is exactly as it should be.  The choice must be hard either way; for man, a problematic being to his depths, cannot lay hold of his ultimate commitments with a smug and easy security.”

It’s understandable that most people gravitate toward religion, given the anxiety and fear that can accompany a worldview in which one has to take on the burden of making meaning for one’s own life rather than having someone else or some concept “decide” this for them.  Add to this the fear of being alone, the fear of death, and the fear of the generally precarious existence we lead in an inhospitable universe, where life appears to occupy a region of space comparable to the size of a proton within a large room.

All of these reasons lend themselves to reinforcing a desire for a deity and for a narrative that defines who we ought to be, so we don’t have to decide this for ourselves.  Personally, I don’t think one can live authentically with these kinds of world views, nor is it morally responsible to live life with belief systems that inhibit the ability to distinguish between fact and fiction.  But I still understand that it’s far more difficult to reject the comforts of religion and face the radical contingencies of life and the burden of choice.  I think people can live the most fulfilling lives in the most secure way only when they have a reliable epistemology in place and only when they take the limits of human psychology seriously.  We need to understand that some ways of living, some attitudes, some ways of thinking, etc., are better than others for living sustainable, fulfilling lives.

Sartre’s atheistic philosophy is certainly bleak, but it doesn’t represent the philosophy of atheists generally.  My atheistic philosophy of life, for example, takes into account the usefulness of multiple descriptions (including holistic descriptions) of our existence; the recognition of our desire to connect to the transcendent; the need for morality, meaning, and purpose; and the need for emotional channels of expression.  And regardless of its bleakness, Sartre’s philosophy has some useful ideas, and any philosophy that’s likely to be successful will gather what works from a number of different schools of thought anyway.

Well, this concludes the post on Jean-Paul Sartre (Ch. 10), and ends part 3 of this series on William Barrett’s Irrational Man.  The last and final post in this series is on part 4, Integral vs. Rational Man, which consists of chapter 11, the final chapter in Barrett’s book, The Place of the Furies.  I will post the link here when that post is complete.


Irrational Man: An Analysis (Part 3, Chapter 8: Nietzsche)

In the last post in this series on William Barrett’s Irrational Man, we began exploring Part 3: The Existentialists, beginning with chapter 7 on Kierkegaard.  In this post, we’ll be taking a look at Nietzsche’s philosophy in more detail.

Ch. 8 – Nietzsche

Nietzsche shared many of the same concerns as Kierkegaard, given the new challenges of modernity brought about by the Enlightenment; but each of these great thinkers approached this new chapter of our history from perspectives that were in many ways diametrically opposed.  Kierkegaard had a grand goal of sharing with others what he thought it meant to be a Christian, and he tried to revive Christianity within a society he found increasingly secularized.  He also saw that what Christianity did exist was largely organized and depersonalized, and he felt the need to stress a much more personal form of the religion.  Nietzsche, on the other hand, an atheist, rejected Christianity and stressed the importance of re-establishing our values given that modernity now inhabited a godless world: a culturally Christianized world, but one without a Christ.

Despite their differences, both philosophers felt that discovering the true meaning of our lives was paramount to living an authentic life, and this new quest for meaning was largely related to the fact that our view of ourselves had changed significantly:

“By the middle of the nineteenth century, as we have seen, the problem of man had begun to dawn on certain minds in a new and more radical form: Man, it was seen, is a stranger to himself and must discover, or rediscover, who he is and what his meaning is.”

Nietzsche also understood and vividly illustrated the crucial differences between mankind and the rest of nature, most especially our self-awareness:

“It was Nietzsche who showed in its fullest sense how thoroughly problematical is the nature of man: he can never be understood as an animal species within the zoological order of nature, because he has broken free of nature and has thereby posed the question of his own meaning-and with it the meaning of nature as well-as his destiny.”

And for Nietzsche, the meaning one was to find for one’s life included first abandoning what he saw as an oppressive and disempowering Christian morality, which he believed severely inhibited our potential as human beings.  He was much more sympathetic to Greek culture and morality; and even though (perhaps ironically) Christianity itself derived from a syncretism of Judaism and Greek Hellenism, its moral and psychological differences were too substantial to avoid Nietzsche’s criticism.  He thought that a return to a more archaic form of Greek culture and mentality was the solution we needed to restore, or at least re-validate, our instincts:

“Dionysus reborn, Nietzsche thought, might become a savior-god for the whole race, which seemed everywhere to show symptoms of fatigue and decline.”

Nietzsche learned about the god Dionysus while he was studying Greek tragedy, and he gravitated toward the Dionysian symbolism, for it contained a kind of reconciliation between high culture and our primal instincts.  Dionysus was after all the patron god of the Greek tragic festivals and so was associated with beautiful works of art; but he was also the god of wine and ritual madness, bringing people together through the grape harvest and the joy of intoxication.  As Barrett describes it:

“This god thus united miraculously in himself the height of culture with the depth of instinct, bringing together the warring opposites that divided Nietzsche himself.”

It was this window into the hidden portions of our psyche that Nietzsche tried to look through, and it became an integral basis for much of his philosophy.  But the chaotic nature of our species, combined with our immense power to manipulate the environment, also concerned him greatly; he thought that modernity’s repression of many of our instincts needed to be circumvented, lest we continue to build up this tension in our unconscious only to have it released in an explosive eruption of violence:

“It is no mere matter of psychological curiosity but a question of life and death for man in our time to place himself again in contact with the archaic life of his unconscious.  Without such contact he may become the Titan who slays himself.  Man, this most dangerous of the animals, as Nietzsche called him, now holds in his hands the dangerous power of blowing himself and his planet to bits; and it is not yet even clear that this problematic and complex being is really sane.”

Indeed our species has been removed from its natural evolutionary habitat, and we’ve built our modern lives around socio-cultural and technological constructs that have, in many ways at least, inhibited the open channel between our unconscious and conscious mind.  One could even say that the prefrontal cortex, important as it is for conscious planning and decision making, has been effectively hijacked by human culture (memetic selection) and is now constantly running on overtime, over-suppressing the emotional centers of the brain (such as the limbic system).

While we’ve come to realize that emotional suppression is sometimes necessary in order to function well as a cooperative social species that depends on complex long-term social relationships, we also need to maintain a healthy dose of emotional expression as well.  If we can’t release this more primal form of “psychological energy” (for lack of a better term), then it shouldn’t be surprising to see it eventually manifest into some kind of destructive behavior.

We ought not forget that we’re just like other animals in terms of our brain being best adapted to a specific balance of behaviors and cognitive functions; and if this balance is disrupted through hyperactive or hypoactive use of one region or another, doesn’t it stand to reason that we may wind up with either an impairment in brain function or less healthy behavior?  If we’re not making sufficient use of our emotional capacities, shouldn’t we expect them to diminish or atrophy in some way or other?  And if some neurological version of the competitive exclusion principle exists, then this would further reinforce the idea that our suppression of emotion by other capacities may cause those other capacities to become permanently dominant.  We can only imagine the kinds of consequences that ensue when this psychological impairment falls upon a species with our level of power.

1.  Ecce Homo

“‘In the end one experiences only oneself,’ Nietzsche observes in his Zarathustra, and elsewhere he remarks, in the same vein, that all the systems of the philosophers are just so many forms of personal confession, if we but had eyes to see it.”

Here Nietzsche is expressing one of his central ideas: we can’t separate ourselves from our thoughts, and therefore the ideas put forward by any philosopher betray some set of values they hold, whether unconsciously or explicitly, as well as their individual perspective; an unavoidable perspective stemming from subjective experience, which colors their interpretation of those experiences.  Perspectivism, the view that reality and our ideas about what’s true and what we value can be mapped onto many different possible conceptual schemes, was a big part of Nietzsche’s overall philosophy, and it was interwoven into his ideas on meaning, epistemology, and ontology.

As enlightening as Nietzsche was in giving us a much needed glimpse into some of the important psychological forces that give our lives the shape they have (for better or worse), his written works show little if any psychoanalytical insight applied to himself, in terms of the darker and less pleasant side of his own psyche.

“Nietzsche’s systematic shielding of himself from the other side is relevant to his explanation of the death of God: Man killed God, he says, because he could not bear to have anyone looking at his ugliest side.”

But even if he didn’t turn his psychoanalytical eye inward toward his own unconscious, he was still very effective in shining a light on the collective unconscious of humanity as a whole.  It’s not enough that we marvel over the better angels of our nature, even though it’s important to acknowledge and understand our capacities and our potential for bettering ourselves and the world we live in; we also have to acknowledge our flaws and shortcomings.  And while we may try to overcome these weaknesses in one way or another, some of them are simply a part of our nature, and we should accept them as such even if we may not like them.

One of the prevailing theories stemming from Western Rationalism (mentioned earlier in Barrett’s book) was that human beings were inherently rational and perfectible, and that it was within the realm of possibility to realize that potential; but this kind of wishful thinking relied on a fundamental denial of an essential part of the kind of animal we really are.  Nietzsche did a good job of bringing this fact into the limelight, and of explaining how we couldn’t truly realize our potential until we came to terms with this fairly ugly side of ourselves.

Barrett distinguishes between the attitudes stemming from various forms of atheism, including Nietzsche’s:

“The urbane atheism of Bertrand Russell, for example, presupposes the existence of believers against whom he can score points in an argument and get off some of his best quips.  The atheism of Sartre is a more somber affair, and indeed borrows some of its color from Nietzsche: Sartre relentlessly works out the atheistic conclusion that in a universe without God man is absurd, unjustified, and without reason, as Being itself is.”

In fact Nietzsche was more concerned with the atheists who hadn’t yet accepted the full ramifications of a Godless world than with believers themselves.  He felt that those who believed in God still had a more coherent belief system, since many of their moral values and the meaning they attached to their lives were largely derived from their religious beliefs (even if those beliefs were unwarranted or harmful).  Many atheists, on the other hand, hadn’t yet taken the time to build a new foundation for their previously held values and the meaning in their lives; and if they couldn’t rebuild this foundation, then they’d have to change those values and what they find meaningful to reflect that shift.

Overall, Nietzsche even changed the game a little in terms of how the conception of God was to be understood in philosophy:

“If God is taken as a metaphysical object whose existence has to be proved, then the position held by scientifically minded philosophers like Russell must inevitably be valid: the existence of such an object can never be empirically proved.  Therefore, God must be a superstition held by primitive and childish minds.  But both these alternative views are abstract, whereas the reality of God is concrete, a thoroughly autonomous presence that takes hold of men but of which, of course, some men are more conscious than others.  Nietzsche’s atheism reveals the true meaning of God-and does so, we might add, more effectively than a good many official forms of theism.”

Even though God may not actually exist as a conscious being, the idea of God and the feelings associated with what one labels as an experience of God are as real as any other idea or experience, and so they should still be taken seriously even in a world where no gods exist.  As an atheist myself (formerly a born-again Protestant Christian), I’ve come to appreciate the real breadth of possible meaning attached to the term “God”.  Even though the God or gods described in various forms of theism do not track onto objective reality, as was pointed out by Russell, this doesn’t negate the subjective reality underlying theism.  People ascribe the label “God” to a set of certain feelings and forces that they feel are controlling their lives, their values, and their overall vision of the world.  Nietzsche had his own vision of the world as well, and so one could label this as his “god”, despite the confusion that this may cause when discussing God in metaphysical terms.

2. What Happens In “Zarathustra”; Nietzsche as Moralist

One of the greatest benefits of producing art is our ability to tap into multiple perspectives that we might otherwise never explore.  And by doing this, we stand a better chance of creating a window into our own unconscious:

“[Thus Spoke Zarathustra] was Nietzsche’s poetic work and because of this he could allow the unconscious to take over in it, to break through the restraints imposed elsewhere by the philosophic intellect. [in poetry] One loses all perception of what is imagery and simile-that is to say, the symbol itself supersedes thought, because it is richer in meaning.”

By avoiding the common philosophical trap of submitting oneself to a preconceived structure and template for processing information, poetry and other works of art can inspire new ideas and novel perspectives through the use of metaphor and allegory, emotion, and the seemingly infinite powers of our own imagination.  Not only is this true for the artist herself, but also for the spectator, who stands in a unique position to interpret what they see and to make sense of it the best they can.  And in the rarest of cases, one may experience something in a work of art that fundamentally changes them in the process.  Either way, by presenting us with an unusual and ambiguous stimulus that we are left to interpret, art can give us at least some access to our unconscious and inspire an expression of our creativity.

In his Thus Spoke Zarathustra, Nietzsche was focusing on how human beings could ever hope to regain a feeling of completeness; for modernity had fractured the individual and placed too much emphasis on collective specialization:

“For man, says Schiller, the problem is one of forming individuals.  Modern life has departmentalized, specialized, and thereby fragmented the being of man.  We now face the problem of putting the fragments together into a whole.  In the course of his exposition, Schiller even referred back, as did Nietzsche, to the example of the Greeks, who produced real individuals and not mere learned abstract men like those of the modern age.”

A part of this reassembly process would necessarily involve reincorporating what Nietzsche thought of as the various attributes of our human nature, which many of us are in denial of (to go back to the point raised earlier in this post).  For Nietzsche, this meant taking on some characteristics that may seem inherently immoral:

“Man must incorporate his devil or, as he put it, man must become better and more evil; the tree that would grow taller must send its roots down deeper.”

Nietzsche seems to be saying that if there are aspects of our psychology that are normal albeit seemingly dark or evil, then we ought to structure our moral values along the same lines rather than in opposition to these more primal parts of our psyche:

“The whole of traditional morality, [Nietzsche] believed, had no grasp of psychological reality and was therefore dangerously one-sided and false.”

And there is some truth to this, though I wouldn’t go as far as Nietzsche does, as I believe that there are many basic and cross-cultural moral prescriptions that do in fact take many of our psychological needs into account.  Most cultures have some place for virtues such as compassion, honesty, generosity, forgiveness, commitment, and integrity, among many others; and these virtues have been shown within the fields of moral psychology and sociology to help us flourish and lead psychologically fulfilling lives.  We evolved as a social species that operates within some degree of a dominance hierarchy, and sure enough our psychological traits are what we’d expect given our evolutionary history.

However, it’s also true that we ought to avoid slipping into some form of a naturalistic fallacy; for we don’t want to simply assume that all the behaviors that were more common prior to modernity, let alone prior to civilization, were beneficial to us and our psychology.  For example, raping and stealing were far more common per capita prior to humans establishing some kind of civil state or society.  And a number of other behaviors that we’ve discovered to be good or bad for our physical or psychological health have been the result of a long process of trial and error, cultural evolution, and the cultural transmission of newfound ideas from one generation to the next.  In any case, we don’t want to assume that just because something has been a tradition, or even a common instinctual behavior, for many thousands or even hundreds of thousands of years, it is therefore a moral behavior.  Quite the contrary; instead we need to look closely at which behaviors lead to more fulfilling lives and overall maximal satisfaction, and then aim to maximize those behaviors over those that detract from this goal.

Nietzsche did raise a good point however, and one that’s supported by modern moral psychology; namely, the fact that what we think we want or need isn’t always in agreement with our actual wants and needs, once we dive below the surface.  Psychology has been an indispensable tool in discovering many of our unconscious desires and the best moral theories are going to be those that take both our conscious and unconscious motivations and traits into account.  Only then can we have a moral theory that is truly going to be sufficiently motivating to follow in the long term as well as most beneficial to us psychologically.  If one is basing their moral theory on wishful thinking rather than facts about their psychology, then they aren’t likely to develop a viable moral theory.  Nietzsche realized this and promoted it in his writings, and this was an important contribution as it bridged human psychology with a number of important topics in philosophy:

“On this point Nietzsche has a perfectly sober and straightforward case against all those idealists, from Plato onward, who have set universal ideas over and above the individual’s psychological needs.  Morality itself is blind to the tangle of its own psychological motives, as Nietzsche showed in one of his most powerful books, The Genealogy of Morals, which traces the source of morality back to the drives of power and resentment.”

I’m sure that drives of power and resentment have played some role in certain moral prescriptions and moral systems, but I doubt that this is true generally speaking; and even in cases where a moral prescription or system arose out of one of these unconscious drives, such as resentment, this doesn’t negate its validity nor demonstrate whether or not it’s warranted or psychologically beneficial.  The source of morality isn’t as important as whether or not the moral system itself works, though the sources of morality can be both psychologically relevant and revealing.

Our ego certainly has a lot to do with our moral behavior and whether or not we choose to face our inner demons:

“Precisely what is hardest for us to take is the devil as the personification of the pettiest, paltriest, meanest part of our personality…the one that most cruelly deflates one’s egotism.”

If one is able to accept both the brighter and darker sides of themselves, they’ll also be more likely to willingly accept Nietzsche’s idea of the Eternal Return:

“The idea of the Eternal Return thus expresses, as Unamuno has pointed out, Nietzsche’s own aspirations toward eternal and immortal life…For Nietzsche the idea of the Eternal Return becomes the supreme test of courage: If Nietzsche the man must return to life again and again, with the same burden of ill health and suffering, would it not require the greatest affirmation and love of life to say Yes to this absolutely hopeless prospect?”

Although I wouldn’t expect most people to embrace this idea, because it goes against our teleological intuitions of time and causation leading to some final state or goal, it does provide a valuable means of establishing whether or not someone is really able to accept this life for what it is, with all of its absurdities, pains, and pleasures.  If someone willingly accepts the idea, then by extension they likely accept themselves, their lives, and the rest of human history (and even the history of life on this planet, no less).  This doesn’t mean that by accepting the idea of the Eternal Return people have to put every action, event, or behavior on equal footing, for example by condoning slavery or the death of millions as a result of countless wars, just because these events are to be repeated for eternity.  But one is still accepting that all of these events were necessary in some sense; that they couldn’t have been any other way; and this acceptance should translate into an attitude toward life marked by some kind of overarching contentment.

I think that the more common reaction to this kind of idea is slipping into some kind of nihilism, where a person no longer cares about anything, and no longer finds any meaning or purpose in their lives.  And this reaction is perfectly understandable given our teleological intuitions; most people don’t or wouldn’t want to invest time and effort into some goal if they knew that the goal would never be accomplished or knew that it would eventually be undone.  But, oddly enough, even if the Eternal Return isn’t an actual fact of our universe, we still run into the same basic situation within our own lifetimes whereby we know that there are a number of goals that will never be achieved because of our own inevitable death.

There’s also the fact that even if we did achieve a number of goals, once we die, we can no longer reap any benefits from them.  There may be other people who can benefit from our past accomplishments, but eventually they will die as well.  Eventually all life in the universe will expire, and this is bound to happen long before the inevitable heat death of the universe transpires; and once it happens, there won’t be anybody experiencing anything at all, let alone reaping the benefits of a goal from someone’s distant past.  Once this is taken into account, Nietzsche’s idea doesn’t sound all that bad, does it?  If we knew that there would always be time and some amount of conscious experience for eternity, then there would always be some form of immortality for consciousness; and if it’s eternally recurring, then we end up becoming immortal in some sense.

Perhaps the idea of the Eternal Return (or Recurrence), even though it began many hundreds if not a few thousand years before Nietzsche, was motivated by Nietzsche’s own fear of death.  Even though he thought of the idea as horrifying and paralyzing (not least because of all the undesirable parts of our own personal existence), and since he didn’t believe in a traditional afterlife, maybe the fear of death motivated him to adopt such an idea, even though he also saw it as a physically plausible consequence of probability and the laws of physics.  Either way, beyond being far more plausible than the idea of an eternal paradise after death, it’s also a very different idea.  For one thing, one’s memory or identity isn’t preserved, so one doesn’t actually experience immortality when the “temporal loop” defining one’s lifetime starts all over again (i.e. the “movie” of one’s life is just played over and over, unbeknownst to the person in the movie).  For another, it involves accepting the world the way it is for all eternity, rather than revolving around the way one wishes it were.  The Eternal Return fully engages with the existentialist reality of human life, with all its absurdities, contingencies, and the rest.

3. Power and Nihilism

Nietzsche, like Kierkegaard before him, was considered by many to be an unsystematic thinker, though Barrett points out that this view is mistaken in Nietzsche’s case; it is a common view resulting from the largely aphoristic form his writings took.  Nietzsche himself even said that he was viewing science and philosophy through the eyes of art, but nevertheless his work eventually revealed a systematic structure:

“As thinking gradually took over the whole person, and everything else in his life being starved out, it was inevitable that this thought should tend to close itself off in a system.”

This systematization of his philosophy was revealed in the notes he was making for The Will to Power, a work he never finished but one that was more or less assembled and published posthumously by his sister Elisabeth.  The systematization was centered on the idea that a will to power was the underlying force that ultimately guided our behavior, striving to achieve the highest possible position in life as opposed to being fundamentally driven by some biological imperative to survive.  But it’s worth pointing out here that Nietzsche didn’t seem to be merely talking about a driving force behind human behavior, but rather he seemed to be referencing an even more fundamental driving force that was in some sense driving all of reality; and this more inclusive view would be analogous to Schopenhauer’s will to live.

It’s interesting to consider this perspective through the lens of biology: if we try to reconcile it with a Darwinian account of evolution in which survival is key, we could surmise that a will to power is but a mechanism for survival.  If we grant Nietzsche’s position, and the position of the various anti-Darwinian thinkers that influenced him, that a will to survive, or survival more generally, is somehow secondary to a will to power, then we run into a problem: if survival is a secondary drive or goal, then we’d expect survival to be less important than power; but without survival you can’t have power, and thus it makes far more evolutionary sense that a will to survive is more basic than a will to power.

However, I am willing to grant that as soon as organisms began evolving brains along with the rest of their nervous system, the will to power (or something analogous to it) became increasingly important; and by the time humans came on the scene with their highly complex brains, the will to power may have become manifest in our neurological drive to better predict our environment over time.  I wrote about this idea in a previous post where I had explored Nietzsche’s Beyond Good and Evil, where I explained what I saw as a correlation between Nietzsche’s will to power and the idea of the brain as a predictive processor:

“…I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, but with the added caveat that this autonomy serves to fulfill a desire for maximal causal power by harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try and change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try and change ourselves and our view of the world (up to and including changing our desires if we find them to be misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and then we could conceivably regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that I think that our view of the world as well as our actions and desires can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could…equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires). Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if they subjugate the other’s will to power by their own.  And one can also increase the success of their predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing our desires and actions as well.”
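The correlation I’m drawing above between a will to power and predictive success can be illustrated with a toy simulation (my own illustrative sketch, not anything from Barrett’s text or the Predictive Processing literature): an agent holds a simple internal model of a hidden quantity in the world and repeatedly updates that model to shrink its prediction error, a crude stand-in for “maximal predictive success.”

```python
# Toy sketch of prediction-error minimization (illustrative only):
# the agent's "model" is a single estimate of a hidden value in the world,
# updated by a fraction of each prediction error (a simple delta rule).

def run_agent(world_value, steps=50, learning_rate=0.2):
    estimate = 0.0  # the agent's internal model of the world
    errors = []
    for _ in range(steps):
        observation = world_value          # the world as sensed (noise-free here)
        error = observation - estimate     # prediction error
        estimate += learning_rate * error  # update the model to reduce error
        errors.append(abs(error))
    return estimate, errors

estimate, errors = run_agent(world_value=10.0)
# prediction error shrinks as the model comes to match the world
assert errors[0] > errors[-1]
```

On this picture, “changing the world” and “changing ourselves” are just two routes to the same end: either move `world_value` toward the model, or move the model (and even its goals) toward the world, so that prediction error falls either way.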

On the other hand, if we treat the brain’s predictive activity as a kind of mechanistic description of how the brain works rather than a type of drive per se, and if we treat our underlying drives as something that merely falls under this umbrella of description, then we might want to consider whether or not any of the drives that are typically posited in psychological theory (such as power, sex, or a will to live) are actually more basic than any other or typically dominant over the others.  Barrett suggests an alternative, saying that we ought to look at these elements more holistically:

“What if the human psyche cannot be carved up into compartments and one compartment wedged in under another as being more basic?  What if such dichotomizing really overlooks the organic unity of the human psyche, which is such that a single impulse can be just as much an impulse toward love on the one hand as it is toward power on the other?”

I agree with Barrett at least in the sense that these drives seem to be operating together much of the time, and when they aren’t in unison, they often seem to be operating on the same level at least.  Another way to put this could be to say that as a social species, we’ve evolved a number of different modular behavioral strategies that are typically best suited for particular social circumstances, though some circumstances may be such that multiple strategies will work and thus multiple strategies may be used simultaneously without working against each other.

But I also think that our conscious attention plays a role as well where the primary drive(s) used to produce the subsequent behavior may be affected by thinking about certain things or in a certain way, such as how much you love someone, what you think you can gain from them, etc.  And this would mean that the various underlying drives for our behavior are individually selected or prioritized based on one’s current experiences and where their attention is directed at any particular moment.  It may still be the case that some drives are typically dominant over others, such as a will to power, even if certain circumstances can lead to another drive temporarily taking over, however short-lived that may be.

The will to power, even if it’s not primary, would still help to explain some of the enormous advancements we’ve made in our scientific and technological progress, while also explaining (at least to some degree) the apparent disparity between our standard of living and overall physio-psychological health on the one hand, and our immense power to manipulate the environment on the other:

“Technology in the twentieth century has taken such enormous strides beyond that of the nineteenth that it now bulks larger as an instrument of naked power than as an instrument for human well-being.”

We could fully grasp Barrett’s point here by thinking about the opening scene in Stanley Kubrick’s 2001: A Space Odyssey, where “The Dawn of Man” was essentially made manifest with the discovery of tools, specifically weapons.  Once this seemingly insignificant discovery was made, it completely changed the game for our species.  While it gave us a new means of defending ourselves and an ability to hunt other animals, thus providing us with the requisite surplus of protein and calories needed for brain development and enhanced intelligence, it also catalyzed an arms race.  And with the advent of human civilization, our historical trajectory around the globe became largely dominated by our differential means of destruction; whoever had the biggest stick, the largest army, the best projectiles, and ultimately the most concentrated form of energy, would likely win the battle, forever changing the fate of the parties involved.

Indeed, war has been such a core part of human history that it’s no wonder Nietzsche stumbled upon such an idea; and even if he hadn’t, someone else likely would have, even if not with the same level of creativity and intellectual rigor.  If our history has been so colored with war, which is a physical instantiation of a will to power in order to dominate another group, then we should expect many to see it as a fundamental part of who we are.  Personally, I don’t think we’re fundamentally war-driven creatures; rather, war results from our ability to engage in it combined with a perceived lack of, and desire for, resources, freedom, and stability in our lives.  But many of these goods, freedom in particular, imply a particular level of power over oneself and one’s environment, so even a fight for freedom is in one way or another a kind of will to power as well.

And what are we to make of our quest for power in terms of the big picture?  Theoretically, there is an upper limit to the amount of power any individual or group can possibly attain, and if one gets to a point where they are seeking power for power’s sake, and using their newly acquired power to harness even more power, then what will happen when we eventually hit that ceiling?  If our values become centered around power, and we lose any anchor we once had to provide meaning for our lives, then it seems we would be on a direct path toward nihilism:

“For Nietzsche, the problem of nihilism arose out of the discovery that “God is dead.” “God” here means the historical God of the Christian faith.  But in a wider philosophical sense it means also the whole realm of supersensible reality-Platonic Ideas, the Absolute, or what not-that philosophy has traditionally posited beyond the sensible realm, and in which it has located man’s highest values.  Now that this other, higher, eternal realm is gone, Nietzsche declared, man’s highest values lose their value…The only value Nietzsche can set up to take the place of these highest values that have lost their value for contemporary man is: Power.”

As Barrett explains, Nietzsche seemed to think that power was the only thing that could replace the eternal realm that we valued so much; but of course this does nothing to alleviate the problem of nihilism in the long term since the infinite void still awaits those at the end of the finite road to maximal power.

“If this moment in Western history is but the fateful outcome of the fundamental ways of thought that lie at the very basis of our civilization-and particularly of that way of thought that sunders man from nature, sees nature as a realm of objects to be mastered and conquered, and can therefore end only with the exaltation of the will to power-then we have to find out how this one-sided and ultimately nihilistic emphasis upon the power over things may be corrected.”

I think the answer to this problem lies in a combination of strategies, but with the overarching goal of maximizing life fulfillment.  We need to reprogram our general attitude toward power such that it is truly instrumental rather than perceived as intrinsically valuable; and this means that our basic goal becomes accumulating power such that we can maximize our personal satisfaction and life fulfillment, and nothing more.  Among other things, the power we acquire should be centered around a power over ourselves and our own psychology; finding ways of living and thinking which are likely to include the fostering of a more respectful relationship with the nature that created us in the first place.

And now that we’re knee deep in the Information Age, we will be able to harness the potential of genetic engineering, artificial intelligence, and artificial reality; within these avenues of research we’ll be able to change our own biology so that we can quickly adapt our psychology to the ever-changing cultural environment of the modern world, and we’ll be able to free up our time to create any world we want.  We just need to keep the ultimate goal in mind, fulfillment and contentment, and direct our increasing power toward this goal in particular, instead of merely chipping away at it in the background as some secondary priority.  If we don’t prioritize it, then we may simply perpetuate and amplify the meaningless sources of distraction in our lives, eventually getting lost in the chaos and forgetting about our ultimate potential and the imperatives needed to realize it.

I’ll be posting a link here to part 9 of this post-series, exploring Heidegger’s philosophy, once it has been completed.

Irrational Man: An Analysis (Part 3, Chapter 7: Kierkegaard)

Part III – The Existentialists

In the last post in this series on William Barrett’s Irrational Man, we examined how existentialism was influenced by a number of poets and novelists including several of the Romantics and the two most famous Russian authors, namely Dostoyevsky and Tolstoy.  In this post, we’ll be entering part 3 of Barrett’s book, and taking a look at Kierkegaard specifically.

Chapter 7 – Kierkegaard

Søren Kierkegaard is often considered to be the first existentialist philosopher and so naturally Barrett begins his exploration of individual existentialists here.  Kierkegaard was a brilliant man with a broad range of interests; he was a poet as well as a philosopher, exploring theology and religion (Christianity in particular), morality and ethics, and various aspects of human psychology.  The fact that he was also a devout Christian can be seen throughout his writings, where he attempted to delineate his own philosophy of religion with a focus on the concepts of faith and doubt, as well as his disdain for any organized form of religion.  Perhaps the most influential core of his work is the focus on individuality, subjectivity, and the lifelong search to truly know oneself.  In many ways, Kierkegaard paved the way for modern existentialism, and he did so with a kind of poetic brilliance that made clever use of both irony and metaphor.

Barrett describes Kierkegaard’s arrival in human history as the onset of an ironic form of intelligence intent on undermining itself:

“Kierkegaard does not disparage intelligence; quite the contrary, he speaks of it with respect and even reverence.  But nonetheless, at a certain moment in history this intelligence had to be opposed, and opposed with all the resources and powers of a man of brilliant intelligence.”

Kierkegaard did in fact value science as a methodology and as an enterprise, and he also saw the importance of objective knowledge; but he strongly believed that the most important kind of knowledge or truth was that which was derived from subjectivity; from the individual and their own concrete existence, through their feelings, their freedom of choice, and their understanding of who they are and who they want to become as an individual.  Since he was also a man of faith, this meant that he had to work harder than most to manage his own intellect in order to prevent it from enveloping the religious sphere of his life.  He felt that he had to suppress the intellect at least enough to maintain his own faith, for he couldn’t imagine a life without it:

“His intellectual power, he knew, was also his cross.  Without faith, which the intelligence can never supply, he would have died inside his mind, a sickly and paralyzed Hamlet.”

Kierkegaard saw faith as of the utmost importance to his particular period in history as well, since Western civilization had effectively become disconnected from Christianity, unbeknownst to the majority of those living in Kierkegaard’s time:

“The central fact for the nineteenth century, as Kierkegaard (and after him Nietzsche, from a diametrically opposite point of view) saw it, was that this civilization that had once been Christian was so no longer.  It had been a civilization that revolved around the figure of Christ, and was now, in Nietzsche’s image, like a planet detaching itself from its sun; and of this the civilization was not yet aware.”

Indeed, in Nietzsche’s The Gay Science, we hear of a madman roaming around one morning who makes such a pronouncement to a group of non-believers in a marketplace:

” ‘Where is God gone?’ he called out.  ‘I mean to tell you!  We have killed him, – you and I!  We are all his murderers…What did we do when we loosened this earth from its sun?…God is dead!  God remains dead!  And we have killed him!  How shall we console ourselves, the most murderous of all murderers?…’  Here the madman was silent and looked again at his hearers; they also were silent and looked at him in surprise.  At last he threw his lantern on the ground, so that it broke in pieces and was extinguished.  ‘I come too early,’ he then said ‘I am not yet at the right time.  This prodigious event is still on its way, and is traveling, – it has not yet reached men’s ears.’ “

Although Nietzsche will be looked at in more detail in part 8 of this post-series, it’s worth briefly mentioning what he was pointing out with this parable.  Nietzsche was highlighting the fact that at this point in our history, after the Enlightenment, we had made a number of scientific discoveries about our world and our place in it, and this had made the concept of God somewhat superfluous.  As a result of the drastic rise in secularization, the proportion of people who didn’t believe in God rose substantially; and because Christianity no longer had the theistic foundation it relied upon, all of the moral systems, traditions, and sources of ultimate meaning in one’s life derived from Christianity had to be re-established if not abandoned altogether.

But many people, including a number of atheists, didn’t fully appreciate this fact and simply took for granted many of the cultural constructs that arose from Christianity.  Nietzsche had a feeling that most people weren’t intellectually fit for the task of re-grounding their values and finding meaning in a Godless world, and so he feared that nihilism would begin to dominate Western civilization.  Kierkegaard had similar fears, but as a man who refused to shake his own belief in God and in Christianity, he felt it was his imperative to try to revive the Christian faith that he saw in severe decline.

1.  The Man Himself

“The ultimate source of Kierkegaard’s power over us today lies neither in his own intelligence nor in his battle against the imperialism of intelligence-to use the formula with which we began-but in the religious and human passion of the man himself, from which the intelligence takes fire and acquires all its meaning.”

Aside from the fact that Kierkegaard’s own intelligence was primarily shaped and directed by his passion for love and for God, I tend to believe that intelligence is in some sense always subservient to the aims of one’s desires.  And if desires themselves are derivative of, or at least intimately connected to, feeling and passion, then a person’s intelligence is going to be heavily guided by subjectivity; not only in terms of the basic drives that attempt to lead us to a feeling of homeostasis but also in terms of the psychological contentment resulting from that which gives our lives meaning and purpose.

For Kierkegaard, the only way to truly discover the meaning of one’s own life, or to truly know oneself, is to endure the painful burden of choice, eventually leading to the elimination of a number of possibilities and to the creation of a number of actualities.  One must make certain significant choices in one’s life such that, once those choices are made, they cannot be unmade; and every time this is done, one is increasingly committing oneself to being (or to becoming) a very specific self.  And Kierkegaard thought that renewing one’s choices daily, in the sense of freely maintaining a commitment to them for the rest of one’s life (rather than simply forgetting that those choices were made and moving on to the next one), was the only way to give those choices any meaning, let alone any prolonged meaning.  Barrett mentions the relation of choice, possibility, and reality as it was experienced by Kierkegaard:

“The man who has chosen irrevocably, whose choice has once and for all sundered him from a certain possibility for himself and his life, is thereby thrown back on the reality of that self in all its mortality and finitude.  He is no longer a spectator of himself as a mere possibility; he is that self in its reality.”

As can be seen with Kierkegaard’s own life, some of these choices (such as breaking off his engagement with Regine Olsen) can be hard to live with, but the pain and suffering experienced in our lives is still ours; it is still a part of who we are as an individual and further affirms the reality of our choices, often adding an inner depth to our lives that we may otherwise never attain.

“The cosmic rationalism of Hegel would have told him his loss was not a real loss but only the appearance of loss, but this would have been an abominable insult to his suffering.”

I have to agree with Kierkegaard here that the felt experience one has is as real as anything ever could be.  To say otherwise, that is, to negate the reality of this or that felt experience is to deny the reality of any felt experience whatsoever; for they all precipitate from the same subjective currency of our own individual consciousness.  We can certainly distinguish between subjective reality and objective reality, for example, by evaluating which aspects of our experience can and cannot be verified through a third party or through some kind of external instrumentation and measurement.  But this distinction only helps us in terms of fine-tuning our ability to make successful predictions about our world by better understanding its causal structure; it does not help us determine what is real and what is not real in the broadest sense of the term.  Reality is quite simply what we experience; nothing more and nothing less.

2. Socrates and Hegel; Existence and Reason

Barrett gives us an apt comparison between Kierkegaard and Socrates:

“As the ancient Socrates played the gadfly for his fellow Athenians stinging them into awareness of their own ignorance, so Kierkegaard would find his task, he told himself, in raising difficulties for the easy conscience of an age that was smug in the conviction of its own material progress and intellectual enlightenment.”

And both philosophers certainly made a lasting impact with their use of the Socratic method; Socrates having first promoted this method of teaching and argumentation, and Kierkegaard making heavy use of it in his first published work Either/Or, and in other works.

“He could teach only by example, and what Kierkegaard learned from the example of Socrates became fundamental for his own thinking: namely, that existence and a theory about existence are not one and the same, any more than a printed menu is as effective a form of nourishment as an actual meal.  More than that: the possession of a theory about existence may intoxicate the possessor to such a degree that he forgets the need of existence altogether.”

And this problem, of not being able to see the forest for the trees, plagues many people who become distracted by the details they uncover while simply trying to better understand the world.  But living a life of contentment and deeply understanding life in general are two very different things, and you can’t invest more in one without retracting time and effort from the other.  Having said that, there’s still some degree of overlap between these two goals; contentment isn’t likely to be maximally realized without some degree of in-depth understanding of the life you’re living and the experiences you’ve had.  As Socrates famously put it: “the unexamined life is not worth living.”  You just don’t want to sacrifice too many of the experiences just to formulate a theory about them.

How do reason, rationality, and thought relate to any theories that address what is actually real?  Well, the belief in a rational cosmos, such as that held by Hegel, can severely restrict one’s ontology when it comes to the concept of existence itself:

“When Hegel says, “The Real is rational, and the rational is real,” we might at first think that only a German idealist with his head in the clouds, forgetful of our earthly existence, could so far forget all the discords, gaps, and imperfections in our ordinary experience.  But the belief in a completely rational cosmos lies behind the Western philosophic tradition; at the very dawn of this tradition Parmenides stated it in his famous verse, “It is the same thing that can be thought and that can be.”  What cannot be thought, Parmenides held, cannot be real.  If existence cannot be thought, but only lived, then reason has no other recourse than to leave existence out of its picture of reality.”

I think what Parmenides said would have been more accurate (if not correct) had he rephrased it just a little differently, replacing “thought” with “consciously experienced”: “It is the same thing that can be consciously experienced and that can be.”  Rephrasing it as such allows for a much more inclusive conception of what is considered real, since anything within the possibility of conscious experience is given an equal claim to being real in some way or another; which means that all the irrational thoughts or feelings, and the gaps and lack of coherency in some of our experiences, are all taken into account as a part of reality.  What else could we mean by saying that existence must be lived, other than that existence must be consciously experienced?  If we mean to include the actions we take in the world, and thus the bodily behavior we enact, this is still included in our conscious experience and in the experiences of other conscious agents.  So once the entire experience of every conscious agent is taken into account, what else could there possibly be aside from this that we can truly say is necessary in order for existence to be lived?

It may be that existence and conscious experience are not identical, even if they’re always coincident with one another; but this is in part contingent on whether all matter and energy in the universe is conscious.  If consciousness is actually universal, then perhaps existence and some kind of experientiality or other (whether primitive or highly complex) are in fact identical with one another.  And if not, then it is still the existence of those entities which are conscious that should dominate our consideration, since that is the only kind of existence that can actually be lived at all.

If we reduce our window of consideration from all conscious experience down to merely the subset of reason or rationality within that experience, then of course there’s going to be a problem with structuring theories about the world within such a limitation; for anything that doesn’t fit within its purview simply disappears:

“As the French scientist and philosopher Emile Meyerson says, reason has only one means of accounting for what does not come from itself, and that is to reduce it to nothingness…The process is still going on today, in somewhat more subtle fashion, under the names of science and Positivism, and without invoking the blessing of Hegel at all.”

Although, in order to be charitable to science, we ought to consider the upsurge of interest and development in the scientific fields of psychology and cognitive science in the decades following the writing of Barrett’s book; for consciousness and the many aspects of our psyche and our experience have taken on a much more important role in terms of what we value in science and the kinds of phenomena we’re trying so hard to better understand.  If psychology and the cognitive and neurosciences include the entire goings-on of our brain and overall subjective experience, and if they are trying to account for all the information that is processed therein, then science should no longer be considered cut off from, or exclusive of, that which lies outside of reason, rationality, or logic.

We mustn’t confuse or conflate reason with science even if science includes the use of reason, for science can investigate the unreasonable; it can provide us with ways of making more and more successful predictions about the causal structure of our experience even as they relate to emotions, intuition, and altered or transcendent states of consciousness like those stemming from meditation, religious experiences or psycho-pharmacological substances.  And scientific fields like moral and positive psychology are also better informing us of what kinds of lifestyles, behaviors, character traits and virtues lead to maximal flourishing and to the most fulfilling lives.  So one could even say that science has been serving as a kind of bridge between reason and existence; between rationality and the other aspects of our psyche that make life worth living.

Going back to the relation between reason and existence, Barrett mentions the conceptual closure of the former to the latter as argued by Kant:

“Kant declared, in effect, that existence can never be conceived by reason-though the conclusions he drew from this fact were very different from Kierkegaard’s.  “Being”, says Kant, “is evidently not a real predicate, or concept of something that can be added to the concept of a thing.”  That is, if I think of a thing, and then think of that thing as existing, my second concept does not add any determinate characteristic to the first…So far as thinking is concerned, there is no definite note or characteristic by which, in a concept, I can represent existence as such.”

And yet, we somehow manage to distinguish between the world of imaginary objects and that of non-imaginary objects (at least most of the time); and we do this using the same physical means of perception in our brain.  I think it’s true to say that both imaginary and non-imaginary objects exist in some sense; for both exist physically as representations in our brains such that we can know them at all, even if they differ in terms of whether the representations are likely to be shared by others (i.e. how they map onto what we might call our external reality).

If I conceive of an apple sitting on a table in front of me, and then I conceive of an apple sitting on a table in front of me that you would also agree is in fact sitting on the table in front of me, then I’ve distinguished conceptually between an imaginary apple and one that exists in our external reality.  And since I can’t ever be certain whether or not I’m hallucinating that there’s an actual apple on the table in front of me (or any other aspect of my experienced existence), I must accept that the common thread of existence, in terms of what it really means for something to exist or not, is entirely grounded on its relation to (my own) conscious experience.  It is entirely grounded on our individual perception; on the way our own brains make predictions about the causes of our sensory input and so forth.

“If existence cannot be represented in a concept, he says (Kierkegaard), it is not because it is too general, remote, and tenuous a thing to be conceived of but rather because it is too dense, concrete, and rich.  I am; and this fact that I exist is so compelling and enveloping a reality that it cannot be reproduced thinly in any of my mental concepts, though it is clearly the life-and-death fact without which all my concepts would be void.”

I actually think Kierkegaard was closer to the mark than Kant was, for he claimed that it was not so much that reason reduces existence to nothingness, but rather that existence is so tangible, rich, and complex that reason can’t fully encompass it.  This makes sense insofar as reason operates through reduction and abstraction, dissecting a holistic experience into parts that relate to one another in a certain way in order to make sense of that experience.  If the holistic experience is needed to fully appreciate existence, then reason alone isn’t going to be up to the task.  But reason also seems to unify our experiences, and if this unification presupposes existence in order to make sense of that experience, then we can’t fully appreciate existence without reason either.

3. Aesthetic, Ethical, Religious

Kierkegaard lays out three primary stages of living in his philosophy: namely, the aesthetic, the ethical, and the religious.  While there are different ways to interpret this “stage theory”, the most common interpretation treats these stages like a set of concentric circles or spheres where the aesthetic is in the very center, the ethical contains the aesthetic, and the religious subsumes both the ethical and the aesthetic.  Thus, for Kierkegaard, the religious is the most important stage that, in effect, supersedes the others; even though the religious doesn’t eliminate the other spheres, since a religious person is still capable of being ethical or having aesthetic enjoyment, and since an ethical person is still capable of aesthetic enjoyment, etc.

In a previous post where I analyzed Kierkegaard’s Fear and Trembling, I summarized these three stages as follows:

“…The aesthetic life is that of sensuous or felt experience, infinite potentiality through imagination, hiddenness or privacy, and an overarching egotism focused on the individual.  The ethical life supersedes or transcends this aesthetic way of life by relating one to “the universal”, that is, to the common good of all people, to social contracts, and to the betterment of others over oneself.  The ethical life, according to Kierkegaard, also consists of public disclosure or transparency.  Finally, the religious life supersedes the ethical (and thus also supersedes the aesthetic) but shares some characteristics of both the aesthetic and the ethical.

The religious, like the aesthetic, operates on the level of the individual, but with the added component of the individual having a direct relation to God.  And just like the ethical, the religious appeals to a conception of good and evil behavior, but God is the arbiter in this way of life rather than human beings or their nature.  Thus the sphere of ethics that Abraham might normally commit himself to in other cases is thought to be superseded by the religious sphere, the sphere of faith.  Within this sphere of faith, Abraham assumes that anything that God commands is Abraham’s absolute duty to uphold, and he also has faith that this will lead to the best ends…”

While the aesthete generally tends to live life in search of pleasure, always fleeing boredom in an ever-increasing fit of desperation to keep finding moments of pleasure despite the futility of such an unsustainable goal, Kierkegaard also counts among the aesthetes the intellectual who tries to stand outside of life: detached from it, viewing it only as a spectator rather than a participant, and categorizing each experience as either interesting or boring, and nothing more.  And it is this speculative detachment from life, which was an underpinning of Western thought, that Kierkegaard objected to; an objection that would be maintained within the rest of the existential philosophy soon to come.

If the aesthetic is unsustainable, or if someone (such as Kierkegaard) has given up such a life of pleasure, then only the other spheres of life remain.  For Kierkegaard, the ethical life, at least on its own, seemed insufficient for making up what was lost in the aesthetic:

“For a really passionate temperament that has renounced the life of pleasure, the consolations of the ethical are a warmed-over substitute at best.  Why burden ourselves with conscience and responsibility when we are going to die, and that will be the end of it?  Kierkegaard would have approved of the feeling behind Nietzsche’s saying, ‘God is dead, everything is permitted.’ ” 

One conclusion we might arrive at, after considering Kierkegaard’s attitude toward the ethical life, is that he may have made a mistake when he renounced the life of pleasure entirely.  Another conclusion we might draw is that Kierkegaard’s conception of the ethical is incomplete or misguided.  If he honestly asks, in the hypothetical absence of God or any option for a religious life, why we ought to burden ourselves with conscience and responsibility, this seems to betray a fundamental flaw in his moral and ethical reasoning.  Likewise for Nietzsche, and Dostoyevsky for that matter, both of whom echoed similar sentiments: “If God does not exist, then everything is permissible.”

As I’ve argued elsewhere (here, and here), the best form any moral theory can take (such that it’s also sufficiently motivating to follow) is going to be centered around the individual, and it will be grounded on the hypothetical imperative that maximizes their personal satisfaction and life fulfillment.  If some behaviors serve toward best achieving this goal and other behaviors detract from it, as a result of our human psychology, sociology, and thus as a result of the finite range of conditions that we thrive within as human beings, then regardless of whether a God exists or not, everything is most certainly not permitted.  Nietzsche was right, however, when he claimed that the “death of God” (so to speak) would require a means of re-establishing a ground for our morals and values; but this doesn’t mean that all possible grounds for doing so have an equal claim to being true, nor that they will all be equally efficacious in achieving one’s primary moral objective.

Part of the problem with Kierkegaard’s moral theorizing is his adoption of Kant’s universal maxim for ethical duty: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”  The problem is that this maxim or “categorical imperative” (the way it’s generally interpreted at least) doesn’t take individual psychological and circumstantial idiosyncrasies into account.  One can certainly insert these idiosyncrasies into Kant’s formulation, but that’s not how Kant intended it to be used nor how Kierkegaard interpreted it.  And yet, Kierkegaard seems to smuggle in such exceptions anyway, by having incorporated it into his conception of the religious way of life:

“An ethical rule, he says, expresses itself as a universal: all men under such-and-such circumstances ought to do such and such.  But the religious personality may be called upon to do something that goes against the universal norm.”

And if something goes against a universal norm, and one feels that they ought to do it anyway (above all else), then they are implicitly denying that a complete theory of ethics involves exclusive universality (a categorical imperative); rather, it must require taking some kind of individual exceptions into account.  Kierkegaard seems to be prioritizing the individual in all of his philosophy, and yet he didn’t think that a theory of ethics could plausibly account for such prioritization.

“The validity of this break with the ethical is guaranteed, if it ever is, by only one principle, which is central to Kierkegaard’s existential philosophy as well as to his Christian faith-the principle, namely, that the individual is higher than the universal. (This means also that the individual is always of higher value than the collective).”

I completely agree with Kierkegaard that the individual is always of higher value than the collective; and this can be shown by considering any utilitarian moral theory that fails to take into account the psychological states of the individual actors and moral agents carrying out some plan to maximize happiness or utility for the greatest number of people.  If the imperative comes down to my having to do something such that I can no longer live with myself afterward, then the moral theory has failed miserably.  Instead, I should feel that I did the right thing in any moral dilemma (when thinking clearly and while maximally informed of the facts), even when every available option is less than optimal.  We have to be able to live with our own actions, and ultimately with the kind of person those actions have made us become.  The concrete self that we alone have conscious access to takes priority over any abstract conception of other selves, or any kind of universality that references only a part of our being.

“Where then as an abstract rule it commands something that goes against my deepest self (but it has to be my deepest self, and herein the fear and trembling of the choice reside), then I feel compelled out of conscience-a religious conscience superior to the ethical-to transcend that rule.  I am compelled to make an exception because I myself am an exception; that is, a concrete being whose existence can never be completely subsumed under any universal or even system of universals.”

Although I agree with Kierkegaard here for the most part, in terms of giving the utmost importance to the individual self, I don’t think that a religious conscience is something one can distinguish from one’s conscience generally; rather, it would just be one’s conscience, albeit altered by a set of religious beliefs, which may change how that conscience operates but doesn’t change the fact that it is still one’s conscience nevertheless.

For example, if two people differ with respect to their belief in souls, where one has a religious belief that a fertilized egg has a soul and the other believes that only people with a capacity for consciousness or a personality have souls (or that there are no souls at all), then that difference in belief may affect their consciences differently if both parties were to, for example, donate a fertilized egg to a group of stem-cell researchers.  The former may feel guilty afterward (knowing that the egg they believe to be inhabited by a soul may be destroyed), whereas the latter may feel really good about themselves for aiding important life-saving medical research.  This is why one can only make a proper moral assessment when one takes seriously the distinction between what is truly factual about the world (what is supported by evidence) and what is merely a cherished religious or otherwise supernatural belief.  If one’s conscience is primarily operating on true, evidenced facts about the world (along with one’s intuition), then one’s conscience and what some may call one’s religious conscience should refer to the exact same thing.

Despite the fact that viable moral theories should be centered around the individual rather than the collective, this doesn’t mean that we shouldn’t try to come up with sets of universal moral rules that ought to be followed most of the time.  For example, a rule like “don’t steal from others” is a good rule to follow most of the time, and if everyone in a society strives to obey such a rule, that society will be better off overall as a result; but if my child is starving and nobody is willing to help in any way, then my only option may be to steal a loaf of bread in order to prevent the greater moral crime of letting a child starve.

This example should highlight a fundamental oversight regarding Kant’s categorical imperative as well: you can build in any number of exceptions within a universal law to make it account for personal circumstances and even psychological idiosyncrasies, thus eliminating the kind of universality that Kant sought to maintain.  For example, if I willed it to be a universal law that “you shouldn’t take your own life,” I could make this better by adding an exception to it so that the law becomes “you shouldn’t take your own life unless it is to save the life of another,” or even more individually tailored to “you shouldn’t take your own life unless not doing so will cause you to suffer immensely or inhibit your overall life satisfaction and life fulfillment.”  If someone has a unique life history or psychological predisposition whereby taking their own life is the only option available to them lest they suffer needlessly, then they ought to take their own life regardless of whatever categorical imperative they are striving to uphold.

There is, however, still an important reason for adopting Kant’s universal maxim (at least generally speaking): universal laws like the Golden Rule provide people with an easy heuristic to quickly ascertain the likely moral status of any particular action or behavior.  If we try to tailor in all sorts of idiosyncratic exceptions (with respect to yourself as well as others), the rules become much more complicated and harder to remember; instead, one should use the universal rules most of the time, and only when there is a legitimate reason to question a rule should one consider whether an exception ought to be made.

Another important point regarding ethical behavior or moral dilemmas is the factor of uncertainty in our knowledge of a situation:

“But even the most ordinary people are required from time to time to make decisions crucial for their own lives, and in such crises they know something of the “suspension of the ethical” of which Kierkegaard writes.  For the choice in such human situations is almost never between a good and an evil, where both are plainly as such and the choice therefore made in all the certitude of reason; rather it is between rival goods, where one is bound to do some evil either way, and where the ultimate outcome and even-most of all-our own motives are unclear to us.  The terror of confronting oneself in such a situation is so great that most people panic and try to take cover under any universal rule that will apply, if only it will save them from the task of choosing themselves.”

And rather than making these tough decisions themselves, a lot of people would prefer that others tell them what’s right and wrong: a religion, one’s parents, a community, etc.  But this negates the intimate consideration of the individual, where each of these difficult choices leads them down a particular path in life and shapes who they become as a person.  The same fear drives a lot of people away from critical thinking: many would prefer to be told what’s true and false rather than think these things through for themselves, and so they gravitate toward institutions that claim to “have all the answers” (even if many who fear critical thinking wouldn’t explicitly say so, since it is primarily a manifestation of the unconscious).  Kierkegaard highly valued these difficult moments of choice and thought they were fundamental to being a true self living an authentic life.

But despite the fact that universal ethical rules are convenient and fairly effective in most cases, they are still far from perfect, and so one will inevitably find oneself in a situation where one simply doesn’t know which rule to use or what to do, and will just have to make the decision one thinks will be easiest to live with:

“Life seems to have intended it this way, for no moral blueprint has ever been drawn up that covers all the situations for us beforehand so that we can be absolutely certain under which rule the situation comes.  Such is the concreteness of existence that a situation may come under several rules at once, forcing us to choose outside any rule, and from inside ourselves…Most people, of course, do not want to recognize that in certain crises they are being brought face to face with the religious center of their existence.”

Now I wouldn’t call this the religious center of one’s existence but rather the moral center of one’s existence; it is simply the fact that we’re trying to distinguish between universal moral prescriptions (which Kierkegaard labels as “the ethical”) and those that are non-universal or dynamic (which Kierkegaard labels as “the religious”).  In any case, one can call this whatever they wish, as long as they understand the distinction that’s being made here which is still an important one worth making.  And along with acknowledging this distinction between universal and individual moral consideration, it’s also important that one engages with the world in a way where our individual emotional “palette” is faced head on rather than denied or suppressed by society or its universal conventions.

Barrett mentions how the denial of our true emotions (brought about by modernity) has inhibited our connection to the transcendent, or at least, inhibited our appreciation or respect for the causal forces that we find to be greater than ourselves:

“Modern man is farther from the truth of his own emotions than the primitive.  When we banish the shudder of fear, the rising of the hair of the flesh in dread, or the shiver of awe, we shall have lost the emotion of the holy altogether.”

But beyond acknowledging our emotions, Kierkegaard has something to say about how we choose to cope with them, most especially that of anxiety and despair:

“We are all in despair, consciously or unconsciously, according to Kierkegaard, and every means we have of coping with this despair, short of religion, is either unsuccessful or demoniacal.”

And here is another place where I have to part ways with Kierkegaard despite his brilliance in examining the human condition and the many complicated aspects of our psychology; for relying on religion (or more specifically, relying on religious belief) to cope with despair is but another distraction from the truth of our own existence.  It is an inauthentic way of living life since one is avoiding the way the world really is.  I think it’s far more effective and authentic for people to work on changing their attitude toward life and the circumstances they find themselves in without sacrificing a reliable epistemology in the process.  We need to provide ourselves with avenues for emotional expression, work to increase our mindfulness and positivity, and constantly strive to become better versions of ourselves by finding things we’re passionate about and by living a philosophically examined life.  This doesn’t mean that we have to rid ourselves of the rituals, fellowship, and meditative benefits that religion offers; but rather that we should merely dispense with the supernatural component and the dogma and irrationality that’s typically attached to and promoted by religion.

4. Subjective and Objective Truth

Kierkegaard ties his conception of the religious life to the meaning of truth itself, and he distinguishes this mode of living from the concept of religious belief.  While one can assimilate a number of religious beliefs just as one can do with non-religious beliefs, for Kierkegaard, religion itself is something else entirely:

“If the religious level of existence is understood as a stage upon life’s way, then quite clearly the truth that religion is concerned with is not at all the same as the objective truth of a creed or belief.  Religion is not a system of intellectual propositions to which the believer assents because he knows it to be true, as a system of geometry is true; existentially, for the individual himself, religion means in the end simply to be religious.  In order to make clear what it means to be religious, Kierkegaard has to reopen the whole question of the meaning of truth.”

And his distinction between objective and subjective truth is paramount to understanding this difference.  One could say perhaps that by subjective truth he is referring to a truth that must be embodied and have an intimate relation to the individual:

“But the truth of religion is not at all like (objective truth): it is a truth that must penetrate my own personal existence, or it is nothing; and I must struggle to renew it in my life every day…Strictly speaking, subjective truth is not a truth that I have, but a truth that I am.”

The struggle for renewal goes back to Kierkegaard’s conception of how the meaning a person finds in their life is in part dependent on some kind of personal commitment; it relies on making certain choices that one remakes day after day, keeping these personal choices in focus so as to not lose sight of the path we’re carving out for ourselves.  And so it seems that he views subjective truth as intimately connected to the meaning we give our lives, and to the kind of person that we are now.

Perhaps another way we can look at this conception, especially as it differs from objective truth, is to examine the relation between language, logic, and conscious reasoning on the one hand, and intuition, emotion, and the unconscious mind on the other.  Objective truth is generally communicable, it makes explicit predictions about the causal structure of reality, and it involves a way of unifying our experiences into some coherent ensemble; but subjective truth involves felt experience, emotional attachment, and a more automated sense of familiarity and relation between ourselves and the world.  And both of these facets are important for our ability to navigate the world effectively while also feeling that we’re psychologically whole or complete.

5. The Attack Upon Christendom

Kierkegaard points out an important shift in modern society that Barrett mentioned early on in this book; the move toward mass society, which has effectively eaten away at our individuality:

“The chief movement of modernity, Kierkegaard holds, is a drift toward mass society, which means the death of the individual as life becomes ever more collectivized and externalized.  The social thinking of the present age is determined, he says, by what might be called the Law of Large Numbers: it does not matter what quality each individual has, so long as we have enough individuals to add up to a large number, that is, to a crowd or mass.”

And false metrics of success like economic growth and population growth have definitely detracted from the quality of life each of us is capable of achieving.  Because of our inclinations as a social species, we are (perhaps unconsciously) drawn toward the potential survival benefits of joining progressively larger and larger groups.  In terms of industrialization, we’ve used technology primarily to support more people on the globe and to increase the output of each individual worker (to benefit the wealthiest), rather than to substantially reduce the number of hours worked per week or to eliminate poverty outright.  This has got to be the biggest failure of the industrial revolution and of capitalism (when not regulated properly), and one that’s so often taken for granted.

Because of the greed that’s consumed the moral compass of those at the top of our sociopolitical hierarchy, our lives have been funneled into a military-industrial complex that will only release us from servitude when the rich eventually stop asking for more and more of the fruits of our labor.  And through the push of marketing and social pressure, we’re tricked into wanting to maintain society the way it is; to continue to buy more consumable garbage that we don’t really need and to be complacent with such a lifestyle.  The massive externalization of our psyche has led to a kind of, as Barrett put it earlier, spiritual poverty.  And Kierkegaard was well aware of this psychological degradation brought on by modernity’s unchecked collectivization.

Both Kierkegaard and Nietzsche also saw a problem with how modernity had effectively killed God; though Kierkegaard and Nietzsche differed in their attitudes toward organized religion, Christianity in particular:

“The Grand Inquisitor, the Pope of Popes, relieves men of the burden of being Christian, but at the same time leaves them the peace of believing they are Christians…Nietzsche, the passionate and religious atheist, insisted on the necessity of a religious institution, the Church, to keep the sheep in peace, thus putting himself at the opposite extreme from Kierkegaard; Dostoevski in his story of the Grand Inquisitor may be said to embrace dialectically the two extremes of Kierkegaard and Nietzsche.  The truth lies in the eternal tension between Christ and the Grand Inquisitor.  Without Christ the institution of religion is empty and evil, but without the institution as a means of mitigating it the agony in the desert of selfhood is not viable for most men.”

Modernity had helped produce organized religion, thus diminishing the personal, individualistic dimension of spirituality which Kierkegaard saw as indispensable; but modernity also facilitated the “death of God” making even organized religion increasingly difficult to adhere to since the underlying theistic foundation was destroyed for many.  Nietzsche realized the benefits of organized religion since so many people are unable to think critically for themselves, are unable to find an effective moral framework that isn’t grounded on religion, and are unable to find meaning or stability in their lives that isn’t grounded on belief in God or in some religion or other.  In short, most people aren’t able to deal with the burdens realized within existentialism.

Due to the fact that much of European culture and so many of its institutions had been built around Christianity, this made the religion much more collectivized and less personal, but it also made Christianity more vulnerable to being uprooted by new ideas that were given power from the very same collective.  Thus, it was the mass externalization of religion that made religion that much more accessible to the externalization of reason, as exemplified by scientific progress and technology.  Reason and religion could no longer co-exist in the same way they once had because the externalization drastically reduced our ability to compartmentalize the two.  And this made it that much harder to try and reestablish a more personal form of Christianity, as Kierkegaard had been longing for.

Reason also led us to the realization of our own finitude, and once this view was taken more seriously, it created yet another hurdle for the masses to maintain their religiosity; for once death is seen as inevitable, and immortality accepted as an impossibility, one of the most important uses for God becomes null and void:

“The question of death is thus central to the whole of religious thought, is that to which everything else in the religious striving is an accessory: ‘If there is no immortality, what use is God?’ “

The fear of death is a powerful motivator for adopting any number of beliefs that might help manage the resulting cognitive dissonance, and the desire for eternal happiness makes death that much more frightening.  But once death is truly accepted, many of the other beliefs that were meant to provide comfort or consolation are no longer necessary or meaningful.  Kierkegaard didn’t think we could cope with the despair of an inevitable death without a religion that promised some way to overcome it and to transcend our life and existence in this world, and he thought Christianity was the best religion to accomplish this goal.

It is on this point in particular, how he fails to properly deal with death, that I find Kierkegaard to lack an important strength as an existentialist; as it seems that in order to live an authentic life, a person must accept death, that is, they must accept the finitude of our individual human existence.  As Heidegger would later go on to say, buried deep within the idea of death as the possibility of impossibility is a basis for affirming one’s own life, through the acceptance of our mortality and the uncertainty that surrounds it.

Click here for the next post in this series, part 8 on Nietzsche’s philosophy.

Irrational Man: An Analysis (Part 1, Chapter 3: “The Testimony of Modern Art”)

In the previous post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 2: The Encounter with Nothingness, where Barrett gives an overview of some of the historical contingencies that have catalyzed the advent of existentialism: namely, the decline of religion, the rational ordering of society through capitalism and industrialization, and the finitude found within science and mathematics.  In this post, I want to explore Part I, Chapter 3: The Testimony of Modern Art.  Let’s begin…

Ch. 3 – The Testimony of Modern Art

In this chapter, Barrett examines how existentialism, with its drives and effects, has shaped the content of modern art.  As he sees it, existentialist anxiety, discontent, and the confrontation with certain truths arising from our modern understanding of the world have heavily influenced, if not dominated, modern art.  Many find modern art to be, as he puts it:

“…too bare and bleak, too negative or nihilistic, too shocking or scandalous; it dishes out unpalatable truths.”

Perhaps it should come as no surprise that these kinds of qualities in much of modern art are but a product of existentialist angst, feelings of solitude, and an outright clash between traditional norms and narratives about human life and the views of those who have accepted much of what modernity has brought to light, however difficult and uncomfortable that acceptance is.

We might also be tempted to ask ourselves if modern art represents something more generally about our present state.  Barrett sheds some light on this question when he says:

“…Modern art thus begins, and sometimes ends, as a confession of spiritual poverty.  That is its greatness and its triumph, but also the needle it jabs into the Philistine’s sore spot, for the last thing he wants to be reminded of is his spiritual poverty.  In fact, his greatest poverty is not to know how impoverished he is, and so long as he mouths the empty ideals or religious phrases of the past he is but as tinkling brass.”

I can certainly see a lot of modern art as being an expression or manifestation of the spiritual poverty of our modern age.  It’s true that religion no longer serves the same stabilizing role for our society as it once did, nor can we deny that the knowledge we’ve gained since the Enlightenment has caused a compartmentalizing effect on our psyche with respect to reason and religious belief (with the latter being eliminated for many if the compartmentalization is insufficient to overcome any existing cognitive dissonance).  We can also honestly say that many in the modern world have lost a sense of meaning and purpose in their lives, and feel a loss of connection to their community or to the rest of humanity in general, largely as a result of the way society (and in turn, how each life within that society) has become structured.

But, as Barrett says, the fact that many people don’t realize just how impoverished they are, is the greatest form of poverty realized by many living in modernity.  And we could perhaps summarize this spiritual poverty as simply the lack of having a well-rounded expression of one’s entire psyche.  It seems to me that this qualitative state is tied to another aspect of the overall process: in particular, our degree of critical self-reflection which affects our vision of our own personal growth, our ethical development, and ultimately our ability to define meaning for our lives on our own individual terms.

One could describe a kind of trade-off that has occurred during humanity’s transition to modernity: we once had a more common religious structure that pervaded one’s entire life and which was shared by most everyone else living in pre-modern society, and this was replaced by a secular society that encouraged new forms of conformity aside from religion; and we once had a religious structure that allowed one to connect to some of the deeper layers of their inner self, and this was replaced with more of an industrialized, consumerist structure involving psychological externalization which lent itself to the powers of conformity already present in the collective social sphere of our lives.

Since artistic expression serves as a kind of window into the predominating psychology of the people and artists living at any particular time, Barrett makes a very good point when he says:

“Even if existential philosophy had not been formulated, we would know from modern art that a new and radical conception of man was at work in this period.”

And within the modern art movement, we can see a kind of compensatory effect occurring where the externalization in modern society is countered with a vast supply of subjectivity including the creation of very unique and highly imaginative abstractions.  But, underneath or within many of these abstractions lies a fundamental perspective of modern humans living as a kind of stranger to the world, surrounded by an alien environment, with a yearning to feel a sense of belonging and familiarity.

We’ve seen similar changes in artistic expression within literature as well.  Whereas literature had historically been created under the assumption of a linear temporality operating within the bounds of a well-defined beginning, middle, and end, it began to show more chaotic or unpredictable qualities in its temporal structure, less intuitive plot progressions, and in many cases left the reader with what appeared to be an open or unresolved ending, and even a feeling of discontent or shock.  This is what we’d expect once we recognize the Greek roots of Western civilization: a culture that believed the universe to have a logical structure, with a teleological, anthropomorphic, and anthropocentric order of events that cohered into an intelligible whole.  Once this view of the universe changed to one that saw the world as less predictable and indifferent to human wants and needs, the resultant psychological changes coincided with a change in literary style and expression.

In all these cases, we can see that modern art has no clear-cut image of what it means to be human or what exactly a human being is, for the simple reason that it sees human beings as lacking any fixed essence or nature; it sees humans as transcending any pre-defined identity or mold.  Lacking any fixed essence, I think that modern conceptions of humanity entail a radical form of freedom to define ourselves if we choose to do so, even though this worthwhile goal is often difficult, uncomfortable, and a project that never really ends until we die.  Actually striving to make use of this freedom is needed now more than ever, given the level of conformity and the increasingly abstract ways of living that modern society foists upon us.

Another interesting quote of Barrett’s regards the relationship between modern art and conceptions of the meaningless:

“Modern art has discarded the traditional assumptions of rational form.  The modern artist sees man not as the rational animal, in the sense handed down to the West by the Greeks, but as something else.  Reality, too, reveals itself to the artist not as the Great Chain of Being, which the tradition of Western rationalism had declared intelligible down to its smallest link and in its totality, but as much more refractory: as opaque, dense, concrete, and in the end inexplicable.  At the limits of reason one comes face to face with the meaningless; and the artist today shows us the absurd, the inexplicable, the meaningless in our daily life.”

This is interesting, especially given Barrett’s previous claim (in chapter 1) about existentialism’s opposition to the positivist position that “…the whole surrounding area in which ordinary men live from day to day and have their dealings with other men is consigned to the outer darkness of the meaningless.”  Barrett’s more recent claim above, while not necessarily in contradiction with the previous claim, suggests (at the very least) an interesting nuance within existentialist thought.  It suggests that positivism wants to keep silent about the meaningless, whereas existentialism does not; but it also suggests that there’s some agreement between positivism’s claim of what is meaningless and that of existentialism.  Both supposedly contrary schools of thought make claims to what is meaningless either implicitly or explicitly, and both have some agreement as to what falls under the umbrella of the meaningless; it’s just that existentialism accepts and promulgates this meaninglessness as a fundamental part of our human existence whereas positivism more or less rejects this as not even worth talking about, let alone worth using to help construct one’s world view.

Barrett finishes this chapter with a brief reminder of the immense technological progress we’ve made in modern times and the massive externalization of our lives that accompanied this change.  But there is a growing disparity between this external power and our inner poverty; an irony that modern art wants to expose.  Tying this all together, he says:

“The bomb reveals the dreadful and total contingency of human existence.  Existentialism is the philosophy of the atomic age.”

And that pretty much says it all.  Originally, life on this planet (eventually including our own species) was born from the sun, in terms of its elements and its ultimate source of energy.  Now we live in an age where we’ve harnessed the power that drives the sun itself (nuclear fusion); the very power that may one day lead to the end of our own existence.  I find this situation to be far more ironic than the disparity between our inner and outer lives as Barrett points out, as we are on the brink of wiping ourselves out by the very mechanism that allowed us to exist in the first place.  Nothing could be a more poetic example of the contingency of our own existence.

In the next post in this series, I’ll explore Irrational Man, Part 2: The Sources of Existentialism in the Western Tradition, Chapter 4: Hebraism and Hellenism.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part V)

In the previous post, part 4 in this series on Predictive Processing (PP), I explored some aspects of reasoning and how different forms of reasoning can be built from a foundational bedrock of Bayesian inference (click here for parts 1, 2, or 3).  This has a lot to do with language, but I also claimed that it depends on how the brain generates new models, which I think likely involves some kind of natural selection operating on neural networks.  The hierarchical structure of the generative models for these predictions, as described within a PP framework, also seems to fit well with the hierarchical structure that we find in the brain’s neural networks.  In this post, I’m going to talk about the relation between memory, imagination, and unconscious and conscious forms of reasoning.

Memory, Imagination, and Reasoning

Memory is of course crucial to the PP framework whether for constructing real-time predictions of incoming sensory information (for perception) or for long-term predictions involving high-level, increasingly abstract generative models that allow us to accomplish complex future goals (like planning to go grocery shopping, or planning for retirement).  Either case requires the brain to have stored some kind of information pertaining to predicted causal relations.  Rather than memories being some kind of exact copy of past experiences (where they’d be stored like data on a computer), research has shown that memory functions more like a reconstruction of those past experiences which are modified by current knowledge and context, and produced by some of the same faculties used in imagination.

This accounts for any false or erroneous aspects of our memories, where the recalled memory can differ substantially from how the original event was experienced.  It also accounts for why our memories become increasingly altered as more time passes.  Over time, we learn new things, continuing to change many of our predictive models about the world, and thus have a more involved reconstructive process the older the memories are.  And the context we find ourselves in when trying to recall certain memories further affects this reconstruction process, adapting our memories in some sense to better match what we find most salient and relevant in the present moment.

Conscious vs. Unconscious Processing & Intuitive Reasoning (Intuition)

Another attribute of memory is that it is primarily unconscious, where we seem to have this pool of information that is kept out of consciousness until parts of it are needed (during memory recall or other conscious thought processes).  In fact, within the PP framework we can think of most of our generative models (predictions), especially those operating in the lower levels of the hierarchy, as being out of our conscious awareness as well.  However, since our memories are composed of (or reconstructed with) many higher-level predictions, and since only a limited number of them can enter our conscious awareness at any moment, this implies that most of the higher-level predictions are also being maintained or processed unconsciously.

It’s worth noting however that when we were first forming these memories, a lot of the information was in our consciousness (the higher-level, more abstract predictions in particular).  Within PP, consciousness plays a special role since our attention modifies what is called the precision weight (or synaptic gain) on any prediction error that flows upward through the predictive hierarchy.  This means that the prediction errors produced from the incoming sensory information or at even higher levels of processing are able to have a greater impact on modifying and updating the predictive models.  This makes sense from an evolutionary perspective, where we can ration our cognitive resources in a more adaptable way, by allowing things that catch our attention (which may be more important to our survival prospects) to have the greatest effect on how we understand the world around us and how we need to act at any given moment.
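To make the role of precision weighting a little more concrete, here is a minimal sketch (my own illustration with hypothetical names, not a model taken from the PP literature) of a single Gaussian belief update, in which attention is read as raising the precision assigned to the sensory prediction error and thereby increasing the effective learning rate:

```python
def update_prediction(prior_mean, prior_precision, observation, sensory_precision):
    """One precision-weighted Bayesian update of a Gaussian belief.

    The learning rate (the gain on the prediction error) is the sensory
    precision relative to total precision, so raising sensory_precision
    (attention) lets the same prediction error drive a larger update.
    """
    prediction_error = observation - prior_mean
    gain = sensory_precision / (sensory_precision + prior_precision)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_precision = prior_precision + sensory_precision
    return posterior_mean, posterior_precision

# Same prediction error, different precision weights: the attended
# (high-precision) input updates the belief far more than the
# unattended (low-precision) one.
attended, _ = update_prediction(0.0, 1.0, 10.0, 4.0)
unattended, _ = update_prediction(0.0, 1.0, 10.0, 0.25)
```

The same gain term also illustrates why deeply entrenched priors resist revision: as prior_precision grows large relative to sensory_precision, the gain shrinks toward zero and the belief becomes nearly immune to new evidence.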

After repeatedly encountering certain predicted causal relations in a conscious fashion, those predictions become more likely to be automated or unconsciously processed.  And if this has happened with certain rules of inference that govern how we manipulate and process many of our predictive models, it seems reasonable to suspect that this would contribute to what we call our intuitive reasoning (or intuition).  After all, intuition seems to give people the sense of knowing something without knowing how it was acquired and without any present conscious process of reasoning.

This is similar to muscle memory or procedural memory (like learning how to ride a bike) which is consciously processed at first (thus involving many parts of the cerebral cortex), but after enough repetition it becomes a faster and more automated process that is accomplished more economically and efficiently by the basal ganglia and cerebellum, parts of the brain that are believed to handle a great deal of unconscious processing like that needed for procedural memory.  This would mean that the predictions associated with these kinds of causal relations begin to function out of our consciousness, even if the same predictive strategy is still in place.

As mentioned above, one difference between this unconscious intuition and other forms of reasoning that operate within the purview of consciousness is that our intuitions are less likely to be updated or changed based on new experiential evidence since our conscious attention isn’t involved in the updating process. This means that the precision weight of upward flowing prediction errors that encounter downward flowing predictions that are operating unconsciously will have little impact in updating those predictions.  Furthermore, the fact that the most automated predictions are often those that we’ve been using for most of our lives, means that they are also likely to have extremely high Bayesian priors, further isolating them from modification.

Some of these priors may become what are called hyperpriors or priors over priors (many of these believed to be established early in life) where there may be nothing that can overcome them, because they describe an extremely abstract feature of the world.  An example of a possible hyperprior could be one that demands that the brain settle on one generative model even when it’s comparable to several others under consideration.  One could call this a “tie breaker” hyperprior, where if the brain didn’t have this kind of predictive mechanism in place, it may never be able to settle on a model, causing it to see the world (or some aspect of it) as a superposition of equiprobable states rather than simply one determinate state.  We can see the potential problem for an organism’s survival prospects if it didn’t have this kind of hyperprior in place.  Whether a hyperprior like this is a form of innate specificity or is acquired in early learning is debatable.
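As a toy illustration of such a “tie breaker” (the names and numbers here are purely hypothetical, just to make the idea concrete): given several candidate models with nearly equal posterior probability, the mechanism commits to one determinate interpretation rather than representing a superposition of the rivals:

```python
def select_model(posteriors, tie_tolerance=0.05):
    """Commit to a single generative model even among near-equiprobable rivals.

    posteriors: dict mapping model name -> posterior probability.
    If several models fall within tie_tolerance of the best, pick the
    first of them deterministically instead of leaving the perceived
    world state ambiguous between them.
    """
    best = max(posteriors.values())
    for model, probability in posteriors.items():
        if best - probability <= tie_tolerance:
            return model

# Two nearly equiprobable interpretations (think of a Necker cube):
choice = select_model({"cube_seen_from_above": 0.49, "cube_seen_from_below": 0.51})
```

This is roughly what bistable percepts like the Necker cube suggest: the brain settles on one interpretation at a time (alternating between them over time) rather than perceiving a blend of both.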

An obvious trade-off with intuition (or any kind of innate biases) is that it provides us with fast, automated predictions that are robust and likely to be reliable much of the time, but at the expense of not being able to adequately handle more novel or complex situations, thereby leading to fallacious inferences.  Our cognitive biases are also likely related to this kind of unconscious reasoning whereby evolution has naturally selected cognitive strategies that work well for the kind of environment we evolved in (African savanna, jungle, etc.) even at the expense of our not being able to adapt as well culturally or in very artificial situations.

Imagination vs. Perception

One large benefit of storing so much perceptual information in our memories (predictive models with different spatio-temporal scales) is our ability to re-create it offline (so to speak).  This is where imagination comes in, where we are able to effectively simulate perceptions without requiring a stream of incoming sensory data that matches it.  Notice however that this is still a form of perception, because we can still see, hear, feel, taste and smell predicted causal relations that have been inferred from past sensory experiences.

The crucial difference, within a PP framework, is the role of precision weighting on the prediction error, just as we saw above in terms of trying to update intuitions.  If precision weighting is set or adjusted to be relatively low with respect to a particular set of predictive models, then prediction error will have little if any impact on the model.  During imagination, we effectively decouple the bottom-up prediction error from the top-down predictions associated with our sensory cortex (by reducing the precision weighting of the prediction error), thus allowing us to intentionally perceive things that aren’t actually in the external world.  We need not decouple the error from the predictions entirely, as we may want our imagination to somehow correlate with what we’re actually perceiving in the external world.  For example, maybe I want to watch a car driving down the street and simply imagine that it is a different color, while still seeing the rest of the scene as I normally would.  In general though, it is this decoupling “knob” that we can turn (precision weighting) that underlies our ability to produce and discriminate between normal perception and our imagination.
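As a rough illustration (and not any actual neural implementation), we can treat precision weighting as a single gain parameter on the error term.  The numbers below are arbitrary; the point is only that turning the gain down decouples the internal estimate from the incoming signal.

```python
def update_prediction(prediction, sensory_input, precision):
    """One step of precision-weighted error correction: the prediction
    moves toward the sensory input in proportion to the precision
    assigned to the prediction error."""
    error = sensory_input - prediction
    return prediction + precision * error

# Normal perception: high precision on the error, so the model is
# pulled toward the incoming signal.
p = 0.0
for _ in range(10):
    p = update_prediction(p, sensory_input=1.0, precision=0.8)

# Imagination: precision turned way down, so the internally generated
# "percept" stays decoupled from what the senses report.
q = 0.0
for _ in range(10):
    q = update_prediction(q, sensory_input=1.0, precision=0.01)

print(round(p, 3), round(q, 3))  # p tracks the input; q barely moves
```

Setting the precision somewhere in between would correspond to the partial decoupling described above, where an imagined element (the car’s color) is overlaid on an otherwise normally perceived scene.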

So what happens when we lose the ability to control our perception in a normal way (whether consciously or not)?  Well, this usually results in our having some kind of hallucination.  Since perception is often referred to as a form of controlled hallucination (within PP), we could better describe a pathological hallucination (such as one arising from certain psychedelic drugs or a condition like schizophrenia) as a form of uncontrolled hallucination.  In some cases, even with a perfectly normal, healthy brain, when the prediction error simply can’t be minimized enough, or when the brain continuously switches between models based on what we’re looking at, we experience perceptual illusions.

Whether it’s illusions, hallucinations, or any other kind of perceptual pathology (like not being able to recognize faces), PP offers a good explanation for why these kinds of experiences can happen to us.  Either the models themselves are poor (in their causal structure or priors), or something isn’t being controlled properly, like the delicate balance between precision weighting and prediction error; either of these could result from an imbalance in neurotransmitters or some kind of brain damage.

Imagination & Conscious Reasoning

While most people would tend to define imagination as that which pertains to visual imagery, I prefer to classify as imagination all conscious experiences that do not directly result from online perception.  In other words, any part of our conscious experience that isn’t stemming from an immediate inference of incoming sensory information is what I consider to be imagination.  This is because any kind of conscious thinking is going to involve an experience that could in theory be re-created by an artificial stream of incoming sensory information (along with the top-down generative models that put that information into a particular context of understanding).  As long as the incoming sensory information were some particular way (any way that we can imagine!), even if it could never be that way in the actual external world we live in, it seems to me that it should be able to reproduce any conscious process given the right top-down predictive model.  Another way of saying this is that imagination is simply another word for any kind of offline conscious mental simulation.

This also means that I’d classify any and all kinds of conscious reasoning processes as yet another form of imagination.  Just as with more standard conceptions of imagination (within PP at least), we are taking particular predictive models and manipulating them in certain ways in order to simulate some result, with this process decoupled (at least in part) from actual incoming sensory information.  We may, for example, apply a rule of inference that we’ve picked up on and manipulate several predictive models of causal relations using that rule.  As mentioned in the previous post and in the post from part 2 of this series, language is also likely to play a special role here, helping to guide this conceptual manipulation process by organizing and further representing the causal relations in a linguistic form, and then determining the resulting inference (which will more than likely be in a linguistic form as well).  In doing so, we are able to take highly abstract properties of causal relations and apply rules to them to extract new information.

If I imagine a purple elephant trumpeting and flying in the air over my house, even though I’ve never experienced such a thing, it seems clear that I’m manipulating several different types of predicted causal relations at varying levels of abstraction and experiencing the result of that manipulation.  This involves inferred causal relations like those pertaining to visual aspects of elephants, the color purple, flying objects, motion in general, houses, the air, and inferred causal relations pertaining to auditory aspects like trumpeting sounds and so forth.

Specific instances of these kinds of experienced causal relations have led to my inferring them as an abstract probabilistically-defined property (e.g. elephantness, purpleness, flyingness, etc.) that can be reused and modified to some degree to produce an infinite number of possible recreated perceptual scenes.  These may not be physically possible perceptual scenes (since elephants don’t have wings to fly, for example) but regardless I’m able to add or subtract, mix and match, and ultimately manipulate properties in countless ways, only limited really by what is logically possible (so I can’t possibly imagine what a square circle would look like).

What if I’m performing a mathematical calculation, like “adding 9 + 9”, or some other similar problem?  This appears (at first glance at least) to be qualitatively very different from simply imagining things that we tend to perceive in the world like elephants, books, music, and other things, even if they are imagined in some phantasmagorical way.  As crazy as those imagined things may be, they still contain things like shapes, colors, sounds, etc., and a mathematical calculation seems to lack all of this.  I think the key thing to realize here is that the fundamental process of imagination is the ability to add, subtract, and manipulate abstract properties in any way that is logically possible (given our current set of predictive models).  This means that we can imagine properties or abstractions that lack all the richness of a typical visual/auditory perceptual scene.

In the case of a mathematical calculation, I would be manipulating previously acquired predicted causal relations that pertain to quantity and changes in quantity.  Once I was old enough to infer that separate objects existed in the world, then I could infer an abstraction of how many objects there were in some space at some particular time.  Eventually, I could abstract the property of how many objects without applying it to any particular object at all.  Using language to associate a linguistic symbol for each and every specific quantity would lay the groundwork for a system of “numbers” (where numbers are just quantities pertaining to no particular object at all).  Once this was done, then my brain could use the abstraction of quantity and manipulate it by following certain inferred rules of how quantities can change by adding to or subtracting from them.  After some practice and experience I would now be in a reasonable position to consciously think about “adding 9 + 9”, and either do it by following a manual iterative rule of addition that I’ve learned to do with real or imagined visual objects (like adding up some number of apples or dots/points in a row or grid), or I can simply use a memorized addition table and search/recall the sum I’m interested in (9 + 9 = 18).
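The two strategies just mentioned, the manual iterative rule and the memorized table lookup, can be sketched in a few lines of Python.  This is a toy illustration of the two procedures, of course, not a model of the underlying neural process:

```python
def add_by_counting(a, b):
    """The manual iterative rule: start at a and count up one unit at a
    time, b times, like adding up apples or dots in a row."""
    total = a
    for _ in range(b):
        total += 1
    return total

# The memorized addition table: direct recall rather than iteration.
addition_table = {(i, j): i + j for i in range(10) for j in range(10)}

print(add_by_counting(9, 9))   # → 18
print(addition_table[(9, 9)])  # → 18
```

Both routes arrive at the same answer, but the second trades the slow rule-following procedure for a fast lookup, much like the practiced recall described above.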

Whether I’m imagining a purple elephant, mentally adding up numbers, thinking about what I’m going to say to my wife when I see her next, or trying to explicitly apply logical rules to some set of concepts, all of these forms of conscious thought or reasoning are simply different sets of predictive models that I’m manipulating in mental simulations until I arrive at a perception that’s understood in the desired context and that has minimal prediction error.

Putting it all together

In summary, I think we can gain a lot of insight by looking at all the different aspects of brain function through a PP framework.  Imagination, perception, memory, intuition, and conscious reasoning fit together very well when viewed as different aspects of hierarchical predictive models that are manipulated and altered in ways that give us a much more firm grip on the world we live in and its inferred causal structure.  Not only that, but this kind of cognitive architecture also provides us with an enormous potential for creativity and intelligence.  In the next post in this series, I’m going to talk about consciousness, specifically theories of consciousness and how they may be viewed through a PP framework.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)

I’ve been away from writing for a while because I’ve had some health problems relating to my neck.  A few weeks ago I had double-cervical-disc replacement surgery, so I’ve been unable to write and respond to comments and so forth for a little while.  I’m now in the second week following my surgery and have finally been able to get back to writing, which feels very good given that I’m unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I wrote one post on this topic a little over a year ago (which can be found here), as I’ve been extremely interested in this topic for the last several years now.

The Predictive Processing (PP) theory of perception shows a lot of promise as an overarching schema that can account for everything that the brain seems to do.  While its technical application is to account for the acts of perception and active inference in particular, I think it can be used more broadly to account for other descriptions of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships as viewed through a PP lens because I think it is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let’s begin this post-series by first looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can be best described as simply the set of particular predictions that the brain employs which encompass perception, desires, action, emotion, etc., and which are ultimately mediated and updated in order to reduce prediction errors based on incoming sensory evidence (and which approximates a Bayesian form of inference).  Perception then, which is constituted by a subset of all our beliefs (with many of them being implicit or unconscious beliefs), is more or less a form of controlled hallucination in the sense that what we consciously perceive is not the actual sensory evidence itself (not even after processing it), but rather our brain’s “best guess” of what the causes for the incoming sensory evidence are.

Desires can be best described as another subset of one’s beliefs, a set which has the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states, predictions which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically just modeling some kind of causal relationship (or its negation) which is able to manifest itself in a number of highly-weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies but with a lower weight or precision on the predictions that directly corresponds to the degree of belief or disbelief (and so new sensory evidence will more easily sway such a belief).  And just like our perceptions, which are mediated by a number of low- and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, secreting hormones in your body, etc.).

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, with these priors changing over time in response to prediction errors arising from new incoming sensory evidence that fails to be “explained away” by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly-weighted predictions) and therefore need much more countervailing sensory evidence to be overcome or modified than weak beliefs, which have relatively low priors (low-weighted predictions).
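This asymmetry between strong and weak beliefs falls straight out of Bayes’ theorem, and we can see it with a toy update in Python.  The prior and likelihood values are arbitrary, chosen only to show that the same counter-evidence erodes a weak prior much faster than a strong one.

```python
def bayes_update(prior, like_true, like_false):
    """Posterior probability that a hypothesis is true after one piece
    of evidence, via Bayes' theorem."""
    numerator = like_true * prior
    return numerator / (numerator + like_false * (1 - prior))

# The same counter-evidence (4x more likely if the belief is false)
# applied three times to a strong prior and to a weak prior.
strong, weak = 0.99, 0.60
for _ in range(3):
    strong = bayes_update(strong, like_true=0.2, like_false=0.8)
    weak = bayes_update(weak, like_true=0.2, like_false=0.8)

print(round(strong, 3), round(weak, 3))  # strong survives; weak collapses
```

After three identical updates the strong belief is dented but still favored, while the weak belief has been all but extinguished, which is just the quantitative version of the claim above.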

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of lower level, highly-weighted predictions such as the prediction that eating a piece of what we call an “apple” will most likely result in qualia that accompany the perception of a particular satisfying taste, the lower level prediction that doing so will also cause my perception of hunger to change, and the higher level prediction that it will “give me energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Another prediction or set of predictions is that these expectations will apply to not just one apple but a number of apples (different instances of one type of apple, or different types of apples altogether), and a host of other predictions.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple which, under a PP lens could be described as the highly weighted prediction of what it would feel like to find an apple, to reach for an apple, to grab it, to bite off a piece of it, to chew it, and to swallow it.  If I merely imagine doing such things, then the resulting predictions will necessarily carry such a small weight that they won’t be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally at least) to trick me into thinking that they are actual perceptions (by “actual”, I simply mean perceptions that correspond to incoming sensory data).  This low-weighting attribute of imagined perceptual predictions thus provides a viable way for us to have an imagination and to distinguish it from perceptions corresponding to incoming sensory data, and to distinguish it from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine that is directly influencing and being influenced by the world that its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian evidence).  In this particular example, the highly-weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, which produces some degree of prediction error until the proper actions have taken place in order to reduce said error.  The only two ways of reducing this prediction error are to change the desire (or eliminate it) to one that no longer involves eating an apple, and/or to perform bodily actions that result in actually eating an apple.

Perhaps if I realize that I don’t have any apples in my house, but I realize that I do have bananas, then my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically, thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, it may mean that my predictions will motivate me enough to avoid eating the banana, and instead to predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  Furthermore, it may motivate me to predict actions that lead me to earn the money such that I can purchase the apple (if I don’t already have the money to do so).  To do this, I would be employing a number of predictions having to do with performing actions that lead to me obtaining money, using money to purchase goods, etc.

This is but a taste of what PP has to offer, and how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework) are a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and to language as well, and I’d like to explore this relationship in my next post, which can be found here.

The Brain as a Prediction Machine

Over the last year, I’ve been reading a lot about Karl Friston and Andy Clark’s work on the concept of perception and action being mediated by a neurological schema centered on “predictive coding”, what Friston calls “active inference”, the “free energy principle”, and Bayesian inference in general as it applies to neuroscientific models of perception, attention, and action.  Here are a few links (Friston here; Clark here, here, and here) to some of their work, which is worth reading for those interested in neural modeling, information theory, and meta-theories pertaining to how the brain integrates and processes information.

I find it fascinating how this newer research and these concepts relate to and help to bring together some content from several of my previous blog posts, in particular, those that mention the concept of hierarchical neurological hardware and those that mention my own definition of knowledge “as recognized causal patterns that allow us to make successful predictions.”  For those that may be interested, here’s a list of posts I’ve made over the last few years that I think contain some relevant content (in chronological order).

The ideas formulated by Friston and expanded on by Clark center around the brain being (in large part) a prediction generating machine.  This fits in line with my own conclusions about what the brain seems to be doing when it’s acquiring knowledge over time (however limited my reading is on the subject).  Here’s an image of the basic predictive processing schema:


The basic Predictive Processing schema (adapted from Lupyan and Clark (2014))

One key element in Friston and Clark’s work (among that of some others) is the amalgamation of perception and action.  In this framework, perception itself is simply the result of the brain’s highest level predictions of incoming sensory data.  But also important in this framework is that prediction error minimization is accomplished through embodiment itself.  That is to say, their models posit that the brain not only tries to reduce prediction errors by updating its prediction models based on the actual incoming sensory information (with only the error feeding forward to update the models, similar to data compression schemes), but the concept of active inference involves the minimization of prediction error through the use of motor outputs.  This could be taken to mean that motor outputs themselves are, in a sense, caused by the brain trying to reduce prediction errors pertaining to predicted sensory input, specifically sensory input that we would say stems from our desires and goals (e.g. the desire to satisfy hunger, commuting to work, opening the car door, etc.).

To give a simple example of this model in action, let’s consider an apple resting on a table in front of me.  If I see the apple in front of me and I have a desire to grab it, my brain would not only predict what that apple looks like and how it is perceived over time (and how my arm looks while reaching for it), but it would also predict what it should feel like to reach for the apple.  So if I reach for it based on the somato-sensory prediction and there is some error in that prediction, corroborated by my visual cortex observing my arm moving in some wrong direction, the brain would respond by updating its models that predict what it should feel so that my arm starts moving in the proper direction.  This prediction error minimization is then fine-tuned as I get closer to the apple and can finally grab it.
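The reaching example can be caricatured as a simple control loop in Python, where the error is reduced through action rather than by changing the prediction.  The gain, positions, and one-dimensional setup are made up for illustration; a single number stands in for the full somato-sensory state:

```python
def reach(hand, target, gain=0.3, steps=50, tolerance=0.01):
    """Reduce a proprioceptive prediction error by acting: move the
    hand toward the predicted (desired) position instead of revising
    the prediction itself."""
    for _ in range(steps):
        error = target - hand        # where the hand "should" feel
        if abs(error) < tolerance:   # vs. where it actually is
            break
        hand += gain * error         # motor output shrinks the error
    return hand

final = reach(hand=0.0, target=1.0)
print(round(final, 2))  # hand ends up (essentially) at the apple
```

Here the “prediction” (the target position) is held fixed and the body is moved to make the world match it; ordinary perceptual updating would instead hold the world fixed and revise the model.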

This embodiment ingrained in the predictive processing models of Friston and Clark can also be well exemplified by the so-called “Outfielder’s Problem”.  In this problem, an outfielder is trying to catch a fly ball, something outfielders are highly skilled at doing.  But if we ask an outfielder to merely stand still, watch a batted ball, and predict where it will land, their accuracy is generally pretty bad.  So when we think about what strategy the brain takes to accomplish this while moving the body quickly, we begin to see the relevance of active inference and embodiment in the brain’s prediction schema.  The outfielder’s brain employs a brilliant strategy called “optical acceleration cancellation” (OAC).  Here, the well-trained outfielder watches the fly ball and moves his or her body so as to cancel out any optical acceleration observed during the ball’s flight.  Doing this brings them exactly to where the ball is going to land, allowing them to catch it successfully.
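The geometric heart of OAC can be checked numerically: for a drag-free parabolic fly ball, the tangent of the ball’s elevation angle rises exactly linearly (zero optical acceleration) when the observer stands at the landing point, and accelerates or decelerates otherwise.  The Python sketch below (with made-up launch numbers, ignoring air resistance) just verifies that geometric fact; it is not a model of the outfielder’s actual running strategy.

```python
def max_optical_acceleration(fielder_x, vx=10.0, vy=20.0, g=9.8, n=200):
    """Largest second difference of tan(elevation angle) of a projectile,
    as seen by a fixed observer at fielder_x; zero means the image of
    the ball shows no optical acceleration."""
    t_land = 2 * vy / g                    # time of flight
    dt = t_land / (n + 1)
    tans = []
    for i in range(1, n + 1):              # sample strictly inside the flight
        t = i * dt
        x, y = vx * t, vy * t - 0.5 * g * t * t
        tans.append(y / (fielder_x - x))   # tan of the elevation angle
    return max(abs(tans[i - 1] - 2 * tans[i] + tans[i + 1])
               for i in range(1, n - 1))

landing_x = 10.0 * (2 * 20.0 / 9.8)        # where the ball actually lands
print(max_optical_acceleration(landing_x))        # ~0 (machine precision)
print(max_optical_acceleration(landing_x + 5.0))  # clearly nonzero
```

So by running to keep the optical acceleration at zero, the outfielder is implicitly solving for the landing point without ever computing a trajectory, which is exactly the kind of action-based prediction error minimization described above.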

We can imagine fine-grained examples of this active inference during everyday tasks.  I may simply be looking at a picture on my living room wall, and while my brain is predicting how it will look over the span of a few seconds, my head may slightly change its tilt or direction, or my eyes may slowly move a fraction of a degree this way or that, however imperceptibly.  My brain in this case is predicting what the picture on the wall will look like over time, and this prediction (according to my understanding of Clark) is identical to what we actually perceive.  One key thing to note here is that the prediction models are not simply updated based on the prediction error that is fed forward through the brain’s neurological hierarchies; the brain also gets some “help” from various motor movements that correct for the errors through action, rather than freezing all my muscles and updating the model itself (which may in fact be far less economical for the brain to do).

Another area of research that pertains to this framework, including ways of testing its validity, is evolutionary psychology and biology.  If these models are correct, one would surmise that evolution likely provided our brains with certain hard-wired predictive models, and that our learning processes use these as starting points, producing innate reflexes (such as infant suckling, to give a simple example) that allow us to survive long enough to update our models with newly acquired information.  There are many different facets to this framework and I look forward to reading more about Friston and Clark’s work over the next few years.  I have a feeling that they have hit on something big, something that will help to answer a lot of questions about embodied cognition, perception, and even consciousness itself.

I encourage you to check out the links I provided pertaining to Friston and Clark’s work, to get a taste of the brilliant ideas they’ve been working on.