The Open Mind

Cogito Ergo Sum


Irrational Man: An Analysis (Part 2, Chapter 5: “Christian Sources”)


In the previous post in this series on William Barrett’s Irrational Man, we explored some of the Hebraistic and Hellenistic contributions to the formation and onset of existentialism within Western thought.  In this post, I’m going to look at chapter 5 of Barrett’s book, which examines some of the more specifically Christian sources of existentialism in the Western tradition.

1. Faith and Reason

We should expect there to be some significant overlap between Christianity and Hebraism, since the former began as a dissenting sect of the latter.  One concept worth looking at is the dichotomy of faith versus reason, where in Christianity the man of faith is placed far above the man of reason.  Even though faith is heavily prized within Hebraism as well, there are still some differences between the two belief systems as they relate to faith, with these differences stemming from a number of historical contingencies:

“Ancient Biblical man knew the uncertainties and waverings of faith as a matter of personal experience, but he did not yet know the full conflict of faith with reason because reason itself did not come into historical existence until later, with the Greeks.  Christian faith is therefore more intense than that of the Old Testament, and at the same time paradoxical: it is not only faith beyond reason but, if need be, against reason.”

Once again we are brought back to Plato and his concept of rational consciousness and the explicit use of reason; a concept that had never before been articulated or developed.  For Christianity, reason was no longer hiding under the surface or homogeneously mixed in with the rest of our mental traits; rather, it was now an explicit and specific way of thinking and processing information that was well known and that couldn’t simply be ignored.  Not only did Christians have to hold faith above this now-identified capacity for thinking logically, but they also had to explicitly reject reason if it contradicted any belief that was grounded on faith.

Barrett also mentions here the problem of trying to describe the concept of faith in a language of reason, and how the opposition between the two is particularly relevant today:

“Faith can no more be described to a thoroughly rational mind than the idea of colors can be conveyed to a blind man.  Fortunately, we are able to recognize it when we see it in others, as in St. Paul, a case where faith had taken over the whole personality.  Thus vital and indescribable, faith partakes of the mystery of life itself.  The opposition between faith and reason is that between the vital and the rational–and stated in these terms, the opposition is a crucial problem today.”

I disagree with Barrett to some degree here, as I don’t think it’s possible to recognize a quality in others (let alone a quality that others can recognize as well) that is entirely incommunicable.  On the contrary, if you and I can both recognize some quality, then we should be able to describe it, even if only in some rudimentary way.  Now this doesn’t mean that I think a description in words can do justice to every conception imaginable, but rather that, as human beings, we have an inherent ability to communicate fairly well that which is involved in common modes of living and in our shared experiences; even very complex concepts with somewhat fuzzy boundaries.

It may be that a thoroughly rational mind will not appreciate faith in any way, nor think it of any use, nor have had any personal experience with it; but that doesn’t mean such a mind can’t understand the concept, or what a mode of living dominated by faith would look like.  I also don’t think it’s fair to say that the opposition between faith and reason is that between the vital and the rational, even though this may be a genuine view for some.  If by vital Barrett is alluding to what is essential to our psyche, then I don’t think he can be talking about faith, at least not in the sense of faith as belief without good reason.  But I’m willing to concede his point if instead by vital he is simply referring to an attitude that is spirited, vibrant, energetic, and thus involving some lively form of emotional expression.

He delves a little deeper into the distinction between faith and reason when he says:

“From the point of view of reason, any faith, including the faith in reason itself, is paradoxical, since faith and reason are fundamentally different functions of the human psyche.”

It seems that Barrett is guilty of a bit of equivocation here between two uses of the word faith.  I’d actually go so far as to say that these two uses are best described in the way that Barrett himself alluded to in the last chapter: faith as trust and faith as belief.  On the one hand, we have faith as belief, where some belief or set of beliefs is maintained without good reason; and on the other hand, we have faith as trust, where, in the case of reason, our trust in reason is grounded on the fact that its use leads to successful predictions about the causal structure of our experience.  So there isn’t really any paradox at all when it comes to “faith” in reason, because such a faith need not involve adopting some belief without good reason; rather there is, at most, a trust in reason based on the demonstrable and replicable success it has provided us in the past.

Barrett is right, however, when he says that faith and reason are fundamentally different functions of the human psyche.  But this doesn’t mean that they are isolated from one another: for example, if one believes that having faith in God will help them to secure some afterlife in heaven, and if one also desires an afterlife in heaven, then the use of reason can actually be employed to reinforce a faith in God (as it did for me, back when I was a Christian).  This doesn’t mean that faith in God is reasonable, but rather that when certain beliefs are isolated or compartmentalized in the right way (even in the case of imagining hypothetical scenarios), reason can be used to process a logical relation between them, even if those beliefs are themselves derived from illogical cognitive biases.  To see that this is true, one need only realize that an argument can be logically valid even if its premises are false and the argument therefore unsound (for example, “all fish can fly; goldfish are fish; therefore goldfish can fly” is valid, yet unsound).

To be as charitable as possible with the concept of faith (at least, one that is more broadly construed), I will make one more point that I think is relevant here: having faith as a form of positivity or optimism in one’s outlook on life, given the social and personal circumstances that one finds oneself in, is perfectly rational and reasonable.  It is well known that people tend to be happier and have more fulfilling lives if they steer clear of pessimism and simply try to stay as positive as possible.  One primary reason for this has to do with what I have previously called socio-psychological feedback loops (what I would also call an evidence-based form of Karma).  Thus, one can have an empirically demonstrable good reason to have an attitude that is reminiscent of faith, yet without having to sacrifice an epistemology that is reliable and that consistently tracks onto reality.

When it comes to the motivating factors that lead people to faith, existentialist thought can shed some light on what some of these factors may be.  Considering Paul the Apostle, for example, whom Barrett mentioned earlier, Barrett also says:

“The central fact for his faith is that Jesus did actually rise from the dead, and so that death itself is conquered–which is what in the end man most ardently longs for.”

The finite nature of man, not only with regard to the intellect but also with respect to our lives in general, is something that existentialism both recognizes and sees as an integral starting point to build from philosophically.  And I think it should come as no surprise that the fear of death in particular is likely to be a motivating factor for most if not all supernatural beliefs found within any number of religions, including the belief that one can be resurrected from the dead (as in the case of St. Paul’s belief in Jesus), the belief in spirits, souls, or other disembodied minds, and any belief in an afterlife (including reincarnation).

We can see that faith is, largely anyway, a means of reducing the cognitive dissonance that results from existential elements of our experience; it is a means of countering or rejecting a number of facts surrounding the contingencies and uncertainties of our existence.  From this fact, we can then see that by examining what faith is compensating for, one can begin to extract what the existentialists eventually elaborated on.  In other words, within existentialism, it is fully accepted that we can’t escape death (for example), and that we have to face death head-on (among other things) in order to live authentically; therefore having faith in overcoming death in any way is simply a denial of at least one burden willingly taken on by the existentialist.

Within early Christianity, we can also see some parallels between an existentialist like Kierkegaard and an early Christian author like Tertullian.  Barrett gives us an excerpt from Tertullian’s De Carne Christi:

“The Son of God was crucified; I am unashamed of it because men must needs be ashamed of it.  And the Son of God died; it is by all means to be believed, because it is absurd.  And He was buried and rose again; the fact is certain because it is impossible.”

This sounds a lot like Kierkegaard, despite the sixteen centuries separating these two figures, and despite the fact that Tertullian was writing when Christianity was first expanding and at its most aggressive, while Kierkegaard was writing more towards the end of Christianity, when it had already drastically receded under the pressure of secularization and modernity.  In this quote of Tertullian’s, we can see a kind of submission to paradox, where it is because a particular proposition is irrational or unreasonable that it becomes an article of faith.  It’s not enough to say that faith is simply used to arrive at a belief that is unreasonable or irrational; rather, its very violation of reason is taken as a reason to use it and to believe that the claims of faith are in fact true.  So rather than merely being an example of irrationality, this makes for a good example of the kind of anti-rational attitudes that contributed to the existentialist revolt against some dominant forms of positivism; a revolt that didn’t really begin to take shape until Kierkegaard’s time.

Although anti-rationalism is often associated with existentialism, I think one would be making an error if they equated the two or assumed that anti-rationalism was integral or necessary to existentialism; though it should be said that this is contingent on how exactly one defines both anti-rationalism (and by extension rationalism) and existentialism.  The two terms have a variety of definitions (as does positivism, for which Barrett gave us one particularly narrow definition back in chapter one).  If by rationalism one is referring to the very narrow view that rationality is the only means of acquiring knowledge, or a view that the only thing that matters in human thought or in human life is rationality, or even a view that humans are fundamentally or primarily rational beings, then I think it is fair to say that this is largely incompatible with existentialism, since the latter is often defined as a philosophy positing that humans are not primarily rational, and that subjective meaning (rather than rationality) plays a much more primary role in our lives.

However, if rationalism is simply taken as the view that reason and rational thought are integral components (even if not the only components) for discovering much of the knowledge that exists about ourselves and the universe at large, then it is by all means perfectly compatible with most existentialist schools of thought.  But in order to remain compatible, some priority needs to be given to subjective experience in terms of where meaning is derived from, how our individual identity is established, and how we are to direct our lives and inform our systems of ethics.

The importance of subjective experience, which became a primary assumption motivating existentialist thought, was also appreciated by early Christian thinkers such as St. Augustine.  In fact, Barrett goes so far as to say that the interiorization of experience and the primary focus on the individual over the collective were unknown to the Greeks and didn’t come about until Christianity had begun to take hold in Western culture:

“Where Plato and Aristotle had asked the question, What is man?, St. Augustine (in the Confessions) asks, Who am I?–and this shift is decisive.  The first question presupposed a world of objects, a fixed natural and zoological order, in which man was included; and when man’s precise place in that order had been found, the specifically differentiating characteristic of reason was added.  Augustine’s question, on the other hand, stems from an altogether different, more obscure and vital center within the questioner himself: from an acutely personal sense of dereliction and loss, rather than from the detachment with which reason surveys the world of objects in order to locate its bearer, man, zoologically within it.”

So rather than looking at man as some kind of well-defined abstraction within a categorical hierarchy of animals and objects, one bestowed with a unique faculty of rational consciousness, man is looked at from the inside; from the perspective of, and identified as, an embodied individual with an experience that is deeply rooted and inherently emotional.  And in Augustine’s question we also see an implication that man can’t be defined in the way the Greeks attempted to define him, because in asking the question, Who am I?, man becomes separated from the kind of zoological order that all other animals adhere to.  It is this unique sense of self and the (often foreboding) awareness of it, then, as opposed to the faculty of reason, that Augustine highlighted during his moment of reflection.

Though Augustine was cognizant of the importance of exploring the nature of our personal, human existence (albeit, for him, with a focus on religious experience in particular), he also explored aspects of human existence on the cosmic scale.  But when he went down this cosmic road of inquiry, he left the nature of personal lived existence by the wayside and instead followed a Neo-Platonist path of theodicy.  Augustine was, after all, a formal theologian, and one who tried to come to grips with the problem of evil in the world.  But he employed Greek metaphysics for this task, and tried to use logic mixed with irrational faith-based claims about the benevolence of God in order to show that God’s world wasn’t evil at all.  Augustine did this by eliminating evil from his conception of existence, such that all evil was simply a lack of being, hence a form of non-being, and therefore non-existent.  This was supposed to be our consolation: ignore any apparent evil in the world because we’re told that it doesn’t really exist; all in the name of trying to justify the cosmos as good and thus to maintain the view that God is good.

This is such a shame in the end, for in Augustine’s attempt at theodicy, he simply downplayed and ignored the abominable and precarious aspects of our existence, which are as real a part of our experience as anything could be.  Perhaps Augustine was motivated to use rationalism here in order to feel comfort and safety in a world where many feel alienated and alone.  But as Barrett says:

“…reason cannot give that security; if it could, faith would be neither necessary nor so difficult.  In the age-old struggle between the rational and the vital, the modern revolt against theodicy (or, equally, the modern recognition of its impossibility) is on the side of the vital…”

And so once again we see that a primary motivation for faith is that many cannot get the consolation they long for through the use of reason.  They can’t psychologically deal with the way the world really is, and so they either use faith to believe the world is some other way, or they use reason along with some number of faith-based premises in evaluating our apparent place in the world.  St. Augustine actually thought that he could harmonize faith and reason (or what Barrett referred to as the vital and the rational), and this set the stage for the next thousand years of Christian thought.  Once faith claims became more explicit in the Church’s various articles of dogma, one was left to exercise rationality as one saw fit, so long as it remained within the confines of that faith and dogma.  And this of course led many medieval thinkers to mistake faith for reason (in many cases at least), reinforced by the fact that the use of reason was operating under faith-based premises.

The problematic relation between the vital and the rational didn’t disappear even after many a philosopher assumed the two forces were in complete harmony and agreement; instead the clash resurfaced in a debate between Voluntarism and Intellectualism, with our attention now turned toward St. Thomas Aquinas.  As an intellectualist, St. Thomas argued that the intellect in man is prior to the will of man because the intellect determines the will, since we can only desire what we know.  Duns Scotus, a voluntarist, responded that the will determines what ideas the intellect turns toward, and thus the will ends up determining what the intellect comes to know.

As an aside, I think that Scotus was more likely correct here, because while the intellect may inform the will (and thus help to direct it), there is something far more fundamental and unconscious at play that drives our engagement with the world as organisms evolved to survive.  For example, we have a curiosity for both novelty (seeking new information) and familiarity (a maximal understanding of incoming information), as well as basic biologically-driven desires for satisfaction and homeostasis.  These drives simply don’t depend on the intellect even though they can make use of it, and they will continue to operate even if the intellect does not.  I also disagree with St. Thomas’ claim that one can only desire that which one already knows, for one can desire knowledge for its own sake without knowing exactly what knowledge is yet to come (again, from an independent drive, which we could call the will), and one can also desire a change in the world from the way one currently knows it to be to some other, as yet unspecified, way (e.g. to desire a world without a specific kind of suffering, even though one may not know exactly what that would be like, having never lived such a life before).

The crux of the debate between the primacy of the will versus the intellect can perhaps be re-framed accordingly:

“…not in terms of whether will is to be given primacy over the intellect, or the intellect over the will–these functions being after all but abstract fragments of the total man–but rather in terms of the primacy of the thinker over his thoughts, of the concrete and total man himself who is doing the thinking…the fact remains that Voluntarism has always been, in intention at least, an effort to go beyond the thought to the concrete existence of the thinker who is thinking that thought.”

Here again I see the concept of an embodied and evolved organism and a reflective self at play, where the contents of consciousness cannot be taken on their own, nor as primary, but rather they require a conscious agent to become manifest, let alone to have any causal power in the world.

2. Existence vs. Essence

There is a deeper root to the problem underlying the debate between St. Thomas and Scotus and it has to do with the relation between essence and existence; a relation that Sartre (among others) emphasized as critical to existentialism.  Whereas the essence of a thing is simply what the thing is (its essential properties for example), the existence of a thing simply refers to the brute fact that the thing is.  A very common theme found throughout Sartre’s writings is the basic contention that existence precedes essence (though, for him, this only applies to the case of man).  Barrett gives a very simplified explanation of such a thesis:

“In the case of man, its meaning is not difficult to grasp.  Man exists and makes himself to be what he is; his individual essence or nature comes to be out of his existence; and in this sense it is proper to say that existence precedes essence.  Man does not have a fixed essence that is handed to him ready-made; rather, he makes his own nature out of his freedom and the historical conditions in which he is placed.”

I think this concept also makes sense when we put it into the historical context going back to ancient Greece.  Whereas the Greeks tried to place humanity within a zoological order and categorize us by some essential qualities, the existentialists who later came on the scene rejected such an idea almost as a kind of category error.  It was simply not possible to classify human beings as one could inanimate objects or other animals, which don’t have the behavioral complexity or the will to define themselves that we do.

Barrett also explains that the essence versus existence relation breaks down into two different but related questions:

1) Does existence have primacy over essence, or the reverse?

2) In actual existing things, is there a real distinction between the two?  Or are they merely different points of view that the mind takes toward the same existing thing?

In an attempt at answering these questions, I think it’s reasonable to say that essence has more to do with potentiality (a logically possible set of abstractions or properties) and existence has more to do with actuality (a logically and physically possible past or present state of being).  I also think that existence is necessary in order for any essence to be real in any way, and not because an actual object needs to exist in order to instantiate a physical instance of some specific essence, but because any essence is merely an abstraction created by a conscious agent who inferred it in the first place.  Quite simply, there is no Platonic realm for the essence to “exist” within and so it needs an existing consciousness to produce it.  So I would say that existence has primacy over essence since an existent is needed for an essence to be conceived, let alone instantiated at all.

As for the second question, I think there is a real distinction between the two and that they are different points of view that the mind takes toward the same existing thing.  I don’t see this as an either/or dichotomy.  The mind can certainly acknowledge the existence of some object, animal, or person, without giving much thought to what its essence is (aside from its being distinct enough to consider it a thing), but it can never acknowledge the essence of an actual object, animal, or person without also acknowledging its existence.  There’s also something relevant to be said about how our minds differentiate between an actual perception and an imagined one, even between an actual life and an imagined one, or an actual person and an imagined one; something that we do all the time.

On the other side of this issue, within the debate between essentialism and existentialism, are the questions of whether or not there are any essential qualities of human beings that make them human, and if so, whether or not any of these essential qualities are fixed for human beings.  Most people are familiar with the idea of human nature, whereby there are some biologically-grounded innate predispositions that affect our behavior in some ways.  We are evolved animals after all, and the only way for evolution to work is to have genes providing different phenotypic effects on the body and behavior of a population of organisms.  Since about half of our own genes are used to code for our brain’s development, its basic structure, and functionality, it should seem obvious to anyone that we’re bound to have some set of innate behavioral tendencies as well as some biological limitations on the space of possibilities for our behavior.

Thus, any set of innate behavioral tendencies that are shared by healthy human beings would constitute our human nature.  And furthermore, this behavioral overlap would be a fixed attribute of human nature insofar as the genes coding for that innate behavioral overlap don’t evolve beyond a certain degree.  Human nature is simply an undeniable fact about us grounded in our biology.  If our biology changes enough, then we will eventually cease to be human, or if we still call ourselves “human” at that point in time then we will cease to have the same nature since we will no longer be the same species.  But as long as that biological evolution remains under a certain threshold, we will have some degree of fixity in our human nature.

But unlike most other organisms on earth, human beings have an incredibly complex brain as well as a number of culturally inherited cognitive programs that provide us with a seemingly infinite number of possible behaviors and subsequent life trajectories.  And I think this fact has fueled some of the disagreement between the existentialists and the essentialists.  The essentialists (or many of them at least) have maintained that we have a particular human nature, which I think is certainly true.  But, as per the existentialists, we must also acknowledge just how vast our space of possibilities really is and not let our human nature be the ultimate arbiter of how we define ourselves.  We have far more freedom to choose a particular life project and way of life than one would be led to believe given some preconceived notion of what is essentially human.

3. The Case of Pascal

Science brought about a dramatic change to Western society, effectively carrying us out of the Middle Ages within a century; but this change also created the very environment that made modern existentialism possible.  Cosmology, for example, led the 17th-century mathematician and theologian Blaise Pascal to say that “The silence of these infinite spaces (outer space) frightens me.”  In this statement he was expressing the general reaction of humanity to the world that science had discovered; a world that made man feel homeless and insignificant.

As a religious man, Pascal was also seeking to reconcile this world discovered by science with his own faith.  This was a difficult task, but he found some direction in looking at the wretchedness of the human condition:

“In Pascal’s universe one has to search much more desperately to find any signposts that would lead the mind in the direction of faith.  And where Pascal finds such a signpost, significantly enough, is in the radically miserable condition of man himself.  How is it that this creature who shows everywhere, in comparison with other animals and with nature itself, such evident marks of grandeur and power is at the same time so feeble and miserable?  We can only conclude, Pascal says, that man is rather like a ruined or disinherited nobleman cast out from the kingdom which ought to have been his (his fundamental premise is the image of man as a disinherited being).”

Personally I see this as indicative of our being an evolved species, albeit one with an incredibly rich level of consciousness and intelligence.  In some sense our minds’ inherent capacities have a potential that is far beyond what most humans ever come close to actualizing, and this may create a feeling of our having lost something that we deserve; some possible yet unrealized world that would make more sense for us to have, given the level of power and potential we possess.  And perhaps this lends itself to a feeling of angst or a lack of fulfillment as well.

As Pascal himself said:

“The natural misfortune of our mortal and feeble condition is so wretched that when we consider it closely, nothing can console us.” 

Furthermore, Barrett reminds us that Pascal mentioned the human tendency to escape this close consideration of our condition through the two “sovereign anodynes” of habit and diversion:

“Both habit and diversion, so long as they work, conceal from man “his nothingness, his forlornness, his inadequacy, his impotence and his emptiness.”  Religion is the only possible cure for this desperate malady that is nothing other than our ordinary mortal existence itself.”

Barrett describes our cultivation of various habits and distractions as a kind of concealment of the ever-narrowing space of possibilities in our lives, as each day passes us by with some set of hopes and dreams forever lost in the past.  I think it would be fair to say, however, that the psychological benefit we gain from at least many of our habits and distractions warrants our having them in the first place.  As an analogy, if one day I am informed by my doctor that I’m going to die from a terminal illness in a few months (thus drastically narrowing the future space of possibilities in my life), is there really a problem with my trying to keep my mind off of such a morbid fate, so that I can make the most of the time that I have left?  I certainly wouldn’t advocate burying one’s head in the sand, nor having irrational expectations, nor living in denial; and so I think the time will be used best if I don’t completely forget about my timeline coming to an end, because then I can more adequately appreciate the time that remains.

Moreover, I don’t agree that religion is the only possible cure for dealing with our “ordinary mortal existence”.  I think that psychological and moral development are the ultimate answers here, where a critical look at oneself and an ongoing attempt to become a better version of oneself in order to maximize one’s life fulfillment is key.  The world can be any way that it is but that doesn’t change the fact that certain attitudes toward that world and certain behaviors can make our experience of the world, as it truly is, better or worse.  Falling back on religion to console us is, to use Pascal’s own words, nothing but another habit and form of distraction, and one where we sacrifice knowing and facing the truth of our condition for some kind of ignorant bliss that relies on false beliefs about the world.  And I don’t think we can live authentically if we don’t face the condition we’re in as a species, and more importantly as individuals; by trying to believe as many true things and as few false things as possible.

When it comes to the human mind and how we deal with or process information about our everyday lives or aspects of our overall condition, it can be a rather messy process involving very complicated concepts, many of which have fuzzy boundaries and so can’t be dealt with in the same way as analyses relying strictly on logic.  In some cases of analyzing our lives, we don’t know exactly what the premises could even be, let alone how to arrange them within a logical structure to arrive at some kind of certain conclusion.  And this differs a lot from other areas of inquiry like, say, mathematics.  As Pascal delved deeper into the study of our human condition, he realized this difference and posited an interesting distinction in terms of what the mind can comprehend in these different contexts: the mathematical mind and the intuitive mind.  Within a field of study such as mathematics, it is our mind’s ability to logically comprehend various distinct and well-defined ideas that we make use of, whereas in other, more concrete domains of our experience, we rely quite heavily on intuition.

Not only is logic not very useful in the more concrete and complex domains of our lives, but in many cases it may actually get in the way of seeing things clearly since our intuition, though less reliable than logic, is more up to the task of dealing with complexities that haven’t yet been parsed out into well-defined ideas and concepts.  Barrett describes this a little differently than I would when he says:

“What Pascal had really seen, then, in order to have arrived at this distinction was this: that man himself is a creature of contradictions and ambivalences such as pure logic can never grasp…By delimiting a sphere of intuition over against that of logic, Pascal had of course set limits to human reason. “

In the case of logic, one is certainly limited in its use when the concepts involved are probabilistic (rather than being well-defined or “black and white”) and there are significant limitations when our attitudes toward those concepts vary contextually.  We are after all very complex creatures with a number of competing goals and with a prioritization of those goals that varies over time.  And of course we have a number of value judgments and emotional dispositions that motivate our actions in often unpredictable ways.  So it seems perfectly reasonable to me that one should acknowledge the limits we have in our use of logic and reason.

Barrett makes another interesting claim about the limits of reason when he says:

“Three centuries before Heidegger showed, through a learned and laborious exegesis, that Kant’s doctrine of the limitations of human reason really rests on the finitude of our human existence, Pascal clearly saw that the feebleness of our reason is part and parcel of the feebleness of our human condition generally.  Above all, reason does not get at the heart of religious experience.”

And I think the last sentence is most important here: the limitations of reason include an inability to examine all that matters and all that is meaningful within religious experience.  Even though I am a kind of champion for reason, in the sense that I find it to be in short supply when it matters most, and in the sense that I often advertise the dire need for critical thinking skills and for more education to combat our cognitive biases, I for one have never assumed that reason could accomplish the arduous task of dissecting religious experience in some reductionist way while retaining or accounting for everything of value within it.  I have, however, used reason to analyze various religious beliefs and claims about the world in order to discover their consequences on, or incompatibilities with, a number of other beliefs.

Theologians do something similar in their attempts to prove the existence of God, by beginning with some set of presuppositions (about God or the supernatural), and then applying reason to those faith-based premises to see what would result if they were in fact true.  And Pascal saw this mental exercise as futile and misguided as well because it misses the entire point of religion:

“In any case, God as the object of a rigorous demonstration, even supposing such a demonstration were forthcoming, would have nothing to do with the living needs of religion.”

I admire the fact that Pascal was honest here about what really matters most when it comes to religion and religious beliefs.  It doesn’t matter whether or not God exists, or whether any particular religious claim is actually true or false; rather, it is the living needs of religion, as he perceives them, that ultimately motivate one to adhere to it.  Thus, theology doesn’t really matter to him, because those who want to be convinced by the arguments for God’s existence will be convinced simply because they desire the conclusion, insofar as they believe it will give them consolation from the human condition they find themselves in.

To add something to the bleak picture of just how contingent our existence is, Barrett mentions a personal story that Pascal shared about his having had a brush with death:

“While he was driving by the Seine one day, his carriage suddenly swerved, the door was flung open, and Pascal almost catapulted down the embankment to his death.  The arbitrariness and suddenness of this near accident became for him another lightning flash of revelation.  Thereafter he saw Nothingness as a possibility that lurked, so to speak, beneath our feet, a gulf and an abyss into which we might tumble at any moment.  No other writer has expressed more powerfully than Pascal the radical contingency that lies at the heart of human existence–a contingency that may at any moment hurl us all unsuspecting into non-being…The idea of Nothingness or Nothing had up to this time played no role at all in Western philosophy…Nothingness had suddenly and drastically revealed itself to him.”

And here it is: the concept of Nothingness, which has truly grounded a bulk of the philosophical constructs found within existentialism.  To add further to this, Pascal also saw that one’s inevitable death wasn’t the whole picture of our contingency, but only a part of it; it is our having been born at a particular time and place, and within a certain culture, that vastly reduces the number of possible life trajectories one can take, and thus vastly reduces our freedom in establishing our own identity.  Our life begins with contingency and ends with it also.

Pascal also acknowledged our existence as lying in the middle of the universe, between the infinitesimal and microscopic on one side and the seemingly infinite cosmological scale on the other.  Within this middle position, he saw man as being “an All in relation to Nothingness, a Nothingness in relation to the All.”  I think it’s also worth pointing out that it is our evolutionary history, and the “middle position” we evolved within, that causes the drastic misalignment between the world we were in some sense meant to understand and the micro and cosmic scales we’ve discovered in the last several hundred years.

Aside from any feelings of alienation and homelessness that these different scales have precipitated, our intuition simply didn’t evolve to comprehend existence at the infinitesimal quantum level or at the cosmic relativistic level, which is why these discoveries in physics are difficult to understand even for experts in the field.  We should expect these discoveries to be vastly perplexing and counter-intuitive, for our consciousness evolved to deal with life at a very finite scale; yet another example of our finitude as human beings.

At the same time, it was this very success, the fact that we gained so much information about our world through these kinds of discoveries, including major advancements made in mathematics, that also promoted the assumption that human nature could be perfected through the universal application of reason.

I’ll finish the post on this chapter with one final quote from Barrett that I found insightful:

“Poets are witnesses to Being before the philosophers are able to bring it into thought.  And what these particular poets were struggling to reveal, in this case, were the very conditions of Being that are ours historically today.  They were sounding, in poetic terms, the premonitory chords of our own era.”

It is the voice of the poets that we first hear lamenting the Enlightenment and what reason seemed to have done, or at least what it had begun to do.  They were some of the most important contemporary precursors to the later existentialists and philosophers who would soon begin to process (in far more detail and with far more precision) the full effects of modernity on the human psyche.


Irrational Man: An Analysis (Part 2, Chapter 4: “The Sources of Existentialism in the Western Tradition”)


In the previous post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 3: The Testimony of Modern Art, where Barrett illustrates how existentialist thought is best exemplified in modern art.  The feelings of alienation, discontent, and meaninglessness pervade a number of modern expressions, and so, as is so often the case throughout history, we can use artistic expression as a window to peer inside our evolving psyche and witness the prevailing views of the world at any point in time.

In this post, I’m going to explore Part II, Chapter 4: The Sources of Existentialism in the Western Tradition.  This chapter has a lot of content and is quite dense, and so naturally this fact is reflected in the length of this post.

Part II: “The Sources of Existentialism in the Western Tradition”

Ch. 4 – Hebraism and Hellenism

Barrett begins this chapter by pointing out two forces governing the historical trajectory of Western civilization and Western thought: Hebraism and Hellenism.  He mentions an excerpt from Matthew Arnold’s Culture and Anarchy, a book concerned with the contemporary situation in nineteenth-century England, where Arnold writes:

“We may regard this energy driving at practice, this paramount sense of the obligation of duty, self-control, and work, this earnestness in going manfully with the best light we have, as one force.  And we may regard the intelligence driving at those ideas which are, after all, the basis of right practice, the ardent sense for all the new and changing combinations of them which man’s development brings with it, the indomitable impulse to know and adjust them perfectly, as another force.  And these two forces we may regard as in some sense rivals–rivals not by the necessity of their own nature, but as exhibited in man and his history–and rivals dividing the empire of the world between them.  And to give these forces names from the two races of men who have supplied the most splendid manifestations of them, we may call them respectively the forces of Hebraism and Hellenism…and it ought to be, though it never is, evenly and happily balanced between them.”

And while we may have felt a stronger attraction to one force over the other at different points in our history, both forces have played an important role in how we’ve structured our individual lives and society at large.  What distinguishes these two forces ultimately comes down to the difference between doing and knowing; between the Hebrew’s concern for practice and right conduct, and the Greek’s concern for knowledge and right thinking.  I can’t help but notice this Hebraistic influence in one of the earliest expressions of existentialist thought, when Soren Kierkegaard (in one of his earlier journals) said: “What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act”.  Here we see that a set of moral virtues forms the fundamental substance and meaning of life within Hebraism (and by extension, Kierkegaard’s philosophy), in contrast with the Hellenistic subordination of the moral virtues to those of the intellectual variety.

I for one am tempted to mention Aristotle here: a Greek philosopher who, despite all of his work on science, logic, and epistemology, also formulated and extolled the moral virtues, laying the foundation for most if not all modern systems of virtue ethics.  And sure enough, Arnold does mention Aristotle briefly, but he says that for Aristotle the moral virtues are “but the porch and access to the intellectual (virtues), and with these last is blessedness.”  So it’s still fair to say that the intellectual virtues were given priority over the moral virtues within Greek thought, even if the moral virtues were an important element, serving as a moral means to a combined moral-intellectual end.  We’re still left, then, with a distinction in what is prioritized, or what the ultimate teleology is, for Hebraism and Hellenism: moral man versus intellectual man.

One perception of Arnold’s that colors his overall thesis is a form of uneasiness that he sees as lurking within the Hebrew conception of man; an uneasiness stemming from a conception of man that has been infused with the idea of sin, which is simply not found in Greek philosophy.  Furthermore, this idea of sin that pervades the Hebraistic view of man is not limited to one’s moral or immoral actions; rather it penetrates into the core being of man.  As Barrett puts it:

“But the sinfulness that man experiences in the Bible…cannot be confined to a supposed compartment of the individual’s being that has to do with his moral acts.  This sinfulness pervades the whole being of man: it is indeed man’s being, insofar as in his feebleness and finiteness as a creature he stands naked in the presence of God.”

So we have a predominantly moral conception of man within Hebraism, but one that is amalgamated with an essential finitude, an acknowledgement of imperfection, and the expectation of our being morally flawed human beings.  When we compare this to the philosophers of Ancient Greece, who had a more idealistic conception of man, in which humans were believed to have the capacity to access and understand the universe in its entirety, we can see the groundwork that was laid for two somewhat rivalrous but nevertheless important motivations and contributions to the cultural evolution of Western civilization: science and philosophy from the Greeks, and a conception of “the Law” entrenched in religion and religious practice from the Hebrews.

1. The Hebraic Man of Faith

Barrett begins here by explaining that the Law, though important for having bound the Jewish community together for centuries despite their many tribulations as a people, is not itself central to Hebraism; rather, what lies at its center is the basis of the Law.  To see what this basis is, we are directed to reread the Book of Job in the Hebrew Bible:

“…reread it in a way that takes us beyond Arnold and into our own time, reread it with an historical sense of the primitive or primary mode of existence of the people who gave expression to this work.  For earlier man, the outcome of the Book of Job was not such a foregone conclusion as it is for us later readers, for whom centuries of familiarity and forgetfulness have dulled the violence of the confrontation between man and God that is central to the narrative.”

Rather than simply taking the commandments of his religion for granted and following them without pause, Job confronts his Creator face to face and demands justification; in some sense this was the first time the door had been opened to theological critique and reflection.  The Greeks did something similar when they eventually began to apply reason and rationality to examine religion, stepping outside the confines of blindly following religious traditions and rituals.  But unlike the Greek, the Hebrew does not press this demand for justification through the use of reason, but rather through a direct encounter of the person as a whole (Job, with his violence, passion, and all the rest) with an unknowable and awe-inspiring God.  Job doesn’t solve his problem with any rational resolution, but rather by changing his entire character.  His relation to God involves a mutual confrontation of two beings in their entirety; not just a rational facet of each being looking for a reasonable explanation from the other, but each complete being facing the other, prepared to defend its entire character and identity.

Barrett mentions the Jewish philosopher Martin Buber here to help clarify things a little, by noting that this relation between Job and God is a relation between an I and a Thou (or a You).  Since Barrett doesn’t explain Buber’s work in much detail, I’ll briefly digress here to explain a few key points.  For those unfamiliar with Buber’s work, the I-Thou relation is a mode of living that is contrasted with another mode centered on the connection between an I and an It.  Both modes of living, the I-Thou and the I-It, are, according to Buber, the two ways that human beings can address existence; the two modes of living required for a complete and fulfilled human being.  The I-It mode encompasses the world of experience and sensation; treating entities as discrete objects to know about or to serve some use.  The I-Thou mode on the other hand encompasses the world of relations itself, where the entities involved are not separated by some discrete boundary, and where a living relationship is acknowledged to exist between the two; this mode of existence requires one to be an active participant rather than merely an objective observer.

It is Buber’s contention that modern life has entirely ignored the I-Thou relation, which has led to a feeling of unfulfillment and alienation from the world around us.   The first mode of existence, that of the I-It, involves our acquiring data from the world, analyzing and categorizing it, and then theorizing about it; and this leads to a subject-object separation, or an objectification of what is being experienced.  According to Buber, modern society almost exclusively uses this mode to engage with the world.  In contrast, with the I-Thou relation the I encounters the object or entity such that both are transformed by the relation; and there is a type of holism at play here where the You or Thou is not simply encountered as a sum of its parts but in its entirety.  To put it another way, it’s as if the You encountered were the entire universe, or that somehow the universe existed through this You.

Since this concept is a little nebulous, I think the best way to summarize Buber’s main philosophical point here is to say that we human beings find meaning in our lives through our relationships, and so we need to find ways of engaging the world such as to maximize our relationships with it; and this is not limited to forming relationships with fellow human beings, but with other animals, inanimate objects, etc., even if these relationships differ from one another in any number of ways.

I actually see some connection between Buber’s “I and Thou” conception and Nietzsche’s idea of eternal recurrence: the idea that, given an infinite amount of time and a finite number of ways that matter and energy can be arranged, anything that has ever happened or that ever will happen will recur an infinite number of times.  In The Gay Science, Nietzsche mentions how the concept of eternal recurrence was, to him, horrifying and paralyzing.  But he also concluded that the desire for an eternal return or recurrence would show the ultimate affirmation of one’s life:

“What, if some day or night a demon were to steal after you into your loneliest loneliness and say to you: ‘This life as you now live it and have lived it, you will have to live once more and innumerable times more’ … Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus?  Or have you once experienced a tremendous moment when you would have answered him: ‘You are a god and never have I heard anything more divine.’”

The reason I find this relevant to Buber’s conception is twofold: first, the fact that the universe is causally connected makes it inseparable to some degree, where each object or entity could be seen, in some sense, to represent the rest of the whole; and second, if one is to wish for the eternal recurrence, then one has an attitude toward the world that is non-resistant, one that can learn to accept the things that are out of one’s control.  The universe effectively takes on the character of fate itself, and offers an opportunity to a being such as ourselves to have a kind of “faith in fate”; to have a relation of trust with the universe as it is, as it once was, and as it will be in the future.

Now the kind of faith I’m speaking of here isn’t a brand that contradicts reason or evidence, but rather is simply a form of optimism and acceptance that colors one’s expectations and overall experience.  And since we are a part of this universe too, our attitude towards it should in many ways reflect our overall relation to it; which brings us back to Buber, where any encounter I might have with the universe (or any “part” of it) is an encounter with a You, that is to say, it is an I-Thou relation.

This “faith in fate” concept I just alluded to is a good segue to return to the relation between Job and his God within Hebraism, as it was a relation of never-ending faith.  But importantly, as Barrett points out, this faith of Job’s takes on many shapes, including anger, dismay, revolt, and confusion.  For Job says, “Though he slay me, yet will I trust in him…but I will maintain my own ways before him.”  So Job’s faith is his maintaining a form of trust in his Creator, even as he says that he will also retain his own identity, his dispositions, and his entire being while doing so.  And this trust ultimately forms the relation between the two.  Barrett describes the kind of faith at play here as more fundamental and primary than that which many modern-day religious proponents would lay claim to:

“Faith is trust before it is belief–belief in the articles, creeds, and tenets of a Church with which later religious history obscures this primary meaning of the word.  As trust, in the sense of the opening up of one being toward another, faith does not involve any philosophical problem about its position relative to faith and reason.  That problem comes up only later when faith has become, so to speak, propositional, when it has expressed itself in statements, creeds, systems.  Faith as a concrete mode of being of the human person precedes faith as the intellectual assent to a proposition, just as truth as a concrete mode of human being precedes the truth of any proposition.”

Although I see faith as belief as fundamentally flawed and dangerous, I can certainly respect the idea of faith as trust, and consequently I can respect the idea of faith as a concrete mode of being, where this mode of being is effectively an attitude of openness taken toward another.  And whereas I agree with Barrett’s claim that truth as a concrete mode of human being precedes the truth of any proposition, in the sense that a basis and desire for truth are needed prior to evaluating the truth value of any proposition, I don’t believe one can ever justifiably move from faith as trust to faith as belief, for the simple reason that if one has good reason to trust another, then one doesn’t need faith to mediate any beliefs stemming from that trust.  And of course, what matters most here is describing and evaluating what the “Hebrew man of faith” consists of, rather than criticizing the concept of faith itself.

Another interesting historical development stemming from Hebraism pertains to the concept of faith as it evolved within Protestantism.  As Barrett tells us:

“Protestantism later sought to revive this face-to-face confrontation of man with his God, but could produce only a pallid replica of the simplicity, vigor, and wholeness of this original Biblical faith…Protestant man would never have dared confront God and demand an accounting of His ways.  That era in history had long since passed by the time we come to the Reformation.”

As an aside, it’s worth mentioning here that Christianity actually developed as a syncretism between Hebraism and Hellenism; the two very cultures under analysis in this chapter.  By combining Jewish elements (e.g. monotheism, the substitutionary atonement of sins through blood-magic, apocalyptic-messianic resurrection, interpretation of Jewish scriptures, etc.) with Hellenistic religious elements (e.g. dying-and-rising savior gods, virgin birth of a deity, fictive kinship, etc.), the cultural diffusion that occurred resulted in a religion that was basically a cosmopolitan personal salvation cult.

But eventually Christianity became a state religion (after the age of Constantine), resulting in a theocracy that heavily enforced a particular conception of God onto the people.  Once this occurred, the religion was fully contained within a culture that made it very difficult to question or confront anyone about these conceptions in order to seek justification for them.  And it may be that the intimidation propagated by the prevailing religious authorities became conflated with an attribute of God, where a fear of questioning any conception of God became integrated into theological beliefs about God.

Perhaps it was that the primary conceptions of God, once Christianity entered the Medieval Period, were more externally imposed on everybody than discovered in a personal and introspective way (even if unavoidably initiated and reinforced by the external culture), thus externalizing the attributes of God such that they became more heavily influenced by the perceptibly unquestionable claims of those in power.  Either way, the historical contingencies surrounding the advent of Christianity involved sectarian battles, with the winning sect using its dogmatic authority to suppress the views (and the Christian Gospels/scriptures) of the other sects.  And this may have inhibited people from ever questioning their God or demanding justification for what they interpreted to be God’s actions.

One final point in this section that I’d like to highlight is in regard to the concept of knowledge as it relates to Hebraism.  Barrett distinguishes this knowledge from that of the Greeks:

“We have to insist on a noetic content in Hebraism: Biblical man too had his knowledge, though it is not the intellectual knowledge of the Greek.  It is not the kind of knowledge that man can have through reason alone, or perhaps not through reason at all; he has it rather through body and blood, bones and bowels, through trust and anger and confusion and love and fear; through his passionate adhesion in faith to the Being whom he can never intellectually know.  This kind of knowledge a man has only through living, not reasoning, and perhaps in the end he cannot even say what it is he knows; yet it is knowledge all the same, and Hebraism at its source had this knowledge.”

I may not entirely agree with Barrett here, but any disagreement is only likely to be a quibble over semantics, relating to how he and I define noetic and how we each define knowledge.  The word noetic actually derives from the Greek adjective noētikos which means “intellectual”, and thus we are presumably considering the intellectual knowledge found in Hebraism.  Though this knowledge may not be derived from reason alone, I don’t think (as Barrett implies above) that it’s even possible to have any noetic content without at least some minimal use of reason.  It may be that the kind of intellectual content he’s alluding to is that which results from a kind of synthesis between reason and emotion or intuition, but it would seem that reason would still have to be involved, even if it isn’t as primary or dominant as in the intellectual knowledge of the Greek.

With regard to how Barrett and I are each defining knowledge, I must say that, like most other philosophers going all the way back to Plato, I distinguish between knowledge and all other beliefs, because knowledge is merely a subset of one’s beliefs (otherwise any belief whatsoever would count as knowledge).  To mark off that subset, my definition of knowledge can be roughly stated as:

“Recognized patterns of causality that are stored in memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by making successful predictions and/or successfully accomplishing goals through the use of said recalled patterns.”

My conception of knowledge also takes what one might call unconscious knowledge into account (which may operate more automatically and less explicitly than conscious knowledge); as long as it is acquired information that allows you to accomplish goals and make successful predictions about the causal structure of your experience (whether internal feelings or other mental states, or perceptions of the external world), it counts as knowledge nevertheless.  Now there may be different degrees of reliability and usefulness of different kinds of knowledge (such as intuitive knowledge versus rational or scientific knowledge), but those distinctions don’t need to be parsed out here.

What Barrett seems to be describing here as a Hebraistic form of knowledge is something that is deeply embodied in the human being; in the ways that one lives their life that don’t involve, or aren’t dominated by, reason or conscious abstractions.  Instead, there seems to be a more organic process involved of viewing the world and interacting with it in a manner relying more heavily on intuition and emotionally-driven forms of expression.  But, in order to give rise to beliefs that cohere with one another to some degree, to form some kind of overarching narrative, reason (I would argue) is still employed.  There’s still some logical structure to the web of beliefs found therein, even if reason and logic could be used more rigorously and explicitly to turn many of those beliefs on their head.

And when it comes to the Hebraistic conception of God, including the passionate adhesion in faith to a Being whom one can never intellectually know (as Barrett puts it), I think this can be better explained by the fact that we do have reason and rationality, unlike most other animals, as well as cognitive biases such as hyperactive agency detection.  It seems to me that the Hebraistic God concept is more or less derived from an agglomeration of the unknown sources of one’s emotional experiences (especially those involving an experience of the transcendent) and the unpredictable attributes of life itself, then ascribing agency to that ensemble of properties, and (to use Buber’s terms) establishing a relationship with that perceived agency; and in this case, the agent is simply referred to as God (Yahweh).

But in order to infer the existence of such an ensemble, it would seem to require a process of abstracting from particular emotional and unpredictable elements and instances of one’s experience to conclude some universal source for all of them.  Perhaps if this process is entirely unconscious we can say that reason wasn’t at all involved in it, but I suspect that the ascription of agency to something that is on par with the universe itself and its large conjunction of properties which vary over space and time, including its unknowable character, is more likely mediated by a combination of cognitive biases, intuition, emotion, and some degree of rational conscious inference.  But Barrett’s point still stands that the noetic content in Hebraism isn’t dominated by reason as in the case of the Greeks.

2. Greek Reason

Even though existential philosophy is largely a rebellion against the Platonic ideas that have permeated Western thought, Barrett reminds us that there is still some existential aspect to Plato’s works.  Perhaps this isn’t surprising once one realizes that Plato began his adult life aspiring to be a dramatic poet.  Eventually he abandoned this dream, undergoing a sort of conversion, and decided to dedicate the rest of his life to the search for wisdom as per the inspiration of Socrates.  And even though he waged a lifelong war against the poets, he still maintained a piece of his poetic character in his writings, up to the end when he wrote his great creation myth, the Timaeus.

By far, Plato’s biggest contribution to Western thought was his differentiating rational consciousness from the rest of our human psyche.  Prior to this move, rationality was in some sense subsumed under the same umbrella as emotion and intuition.  And this may be one of the biggest changes to how we think, and one that is so often taken for granted.  Barrett describes the position we’re in quite well:

“We are so used today to taking our rational consciousness for granted, in the ways of our daily life we are so immersed in its operations, that it is hard at first for us to imagine how momentous was this historical happening among the Greeks.  Steeped as our age is in the ideas of evolution, we have not yet become accustomed to the idea that consciousness itself is something that has evolved through long centuries and that even today, with us, is still evolving.  Only in this century, through modern psychology, have we learned how precarious a hold consciousness may exert upon life, and we are more acutely aware therefore what a precious deal of history, and of effort, was required for its elaboration, and what creative leaps were necessary at certain times to extend it beyond its habitual territory.”

Barrett’s mention of evolution is important here, because I think we ought to distinguish between the two forms of evolution that have affected how we think and experience the world.  On the one hand, we have our biological evolution to consider, where our brains have undergone dramatic changes in terms of the level of consciousness we actually experience and the degree of causal complexity that the brain can model; and on the other hand, we have our cultural evolution to consider, where our brains have undergone a number of changes in terms of the kinds of “programs” we run on them, how those programs are run and prioritized, and how we process and store information in various forms of external hardware.

In regard to biological evolution, long before our ancestors evolved into humans, the brain had nothing more than a protoself representation of our body and self (to use the neuroscientist Antonio Damasio’s terminology); it had nothing more than an unconscious state that served as a sort of basic map in the brain tracking the overall state of the body as a single entity in order to accomplish homeostasis.  Then, beyond this protoself there evolved a more sophisticated level of core consciousness where the organism became conscious of the feelings and emotions associated with the body’s internal states, and also became conscious that her feelings and thoughts were her own, further enhancing the sense of self, although still limited to the here-and-now or the present moment.  Finally, beyond this layer of self there evolved an extended consciousness: a form of consciousness that required far more memory, but which brought the organism into an awareness of the past and future, forming an autobiographical self with a perceptual simulator (imagination) that transcended space and time in some sense.

Once humans developed language, then we began to undergo our second form of evolution, namely culture.  After cultural evolution took off, human civilization led to a plethora of new words, concepts, skills, interests, and ways of looking at and structuring the world.  And this evolutionary step was vastly accelerated by written language, the scientific method, and eventually the invention of computers.  But in the time leading up to the scientific revolution, the industrial revolution, and finally the advent of computers and the information age, it was arguably Plato’s conceptualization of rational consciousness that paved the way forward to eventually reach those technological and epistemological feats.

It was Plato’s analysis of the human psyche and his effectively distilling the process of reason and rationality from the rest of our thought processes that allowed us to manipulate it, to enhance it, and to explore the true possibilities of this markedly human cognitive capacity.  Aristotle and many others since have merely built upon Plato’s monumental work, developing formal logic, computation, and other means of abstract analysis and information processing.  With the explicit use of reason, we’ve been able to overcome many of our cognitive biases (serving as a kind of “software patch” to our cognitive bugs) in order to discover many true facts about the world, allowing us to unlock a wealth of knowledge that had previously been inaccessible to us.  And it’s important to recognize that Plato’s impact on the history of philosophy has highlighted, more than anything else, our overall psychic evolution as a species.

Despite all the benefits that culture has brought us, there has been one inherent problem with the cultural evolution we’ve undergone: a large desynchronization between our cultural and biological evolution.  That is to say, our culture has evolved far, far faster than our biology ever could, and thus our biology hasn’t kept up with, or adapted us to, the cultural environment we’ve been creating.  And I believe this is a large source of our existential woes; for we have become a “fish out of water” (so to speak) where modern civilization and the way we’ve structured our day-to-day lives is incredibly artificial, filled with a plethora of supernormal stimuli and other factors that aren’t as compatible with our natural psychology.  It makes perfect sense then that many people living in the modern world have had feelings of alienation, loneliness, meaninglessness, anxiety, and disorientation.  And in my opinion, there’s no good or pragmatic way to fix this aside from engineering our genes such that our biology is able to catch up to our ever-changing cultural environment.

It’s also important to recognize that Plato’s idea of the universal, explicated in his theory of Forms, was one of the main impetuses for contemporary existential philosophy; not for its endorsement but rather because the idea that a universal such as “humanity” was somehow more real than any actual individual person fueled a revolt against such notions.  And it wasn’t until Kierkegaard and Nietzsche appeared on the scene in the nineteenth century that we saw an explicit attempt to reverse this Platonic idea; where the individual was finally given precedence and priority over the universal, and where a philosophy of existence was given priority over one of essence.  But one thing the existentialists maintained, as derived from Plato, was his conception that philosophizing was a personal means of salvation, transformation, and a concrete way of living (though this was, according to Plato, best exemplified by his teacher Socrates).

As for the modern existentialists, it was Kierkegaard who eventually brought the figure of Socrates back to life, long after Plato’s later portrayal of Socrates, which had effectively dehumanized him:

“The figure of Socrates as a living human presence dominates all the earlier dialogues because, for the young Plato, Socrates the man was the very incarnation of philosophy as a concrete way of life, a personal calling and search.  It is in this sense too that Kierkegaard, more than two thousand years later, was to revive the figure of Socrates-the thinker who lived his thought and was not merely a professor in an academy-as his precursor in existential thinking…In the earlier, so-called “Socratic”, dialogues the personality of Socrates is rendered in vivid and dramatic strokes; gradually, however, he becomes merely a name, a mouthpiece for Plato’s increasingly systematic views…”

By the time we get to Plato’s student, Aristotle, philosophy had become a purely theoretical undertaking, effectively replacing the subjective qualities of a self-examined life with a far less visceral objective analysis.  Indeed, by this point in time, as Barrett puts it: “…the ghost of the existential Socrates had at last been put to rest.”

As in all cases throughout history, we must take the good with the bad.  And we very likely wouldn’t have the sciences today had it not been for Plato detaching reason from the poetic, religious, and mythological elements of culture and thought, thus giving reason its own identity for the first time in history.  Whatever may have happened, we need to try and understand Greek rationalism as best we can such that we can understand the motivations of those that later objected to it, especially within modern existentialism.

When we look back to Aristotle’s Nicomachean Ethics, for example, we find a fairly balanced perspective of human nature and the many motivations that drive our behavior.  But, in evaluating all the possible goods that humans can aim for, in order to derive an answer to the ethical question of what one ought to do above all else, Aristotle claimed that the highest life one could attain was the life of pure reason, the life of the philosopher and the theoretical scientist.  Aristotle thought that reason was the highest part of our personality, where one’s capacity for reason was treated as the center of one’s real self and the core of their identity.  It is this stark description of rationalism that diffused through Western philosophy until bumping heads with modern existentialist thought.

Aristotle’s rationalism even permeated the Christianity of the Medieval period, where reason maintained an admittedly uneasy relationship with faith at the center of the human personality.  And the quest for a complete view of the cosmos further propagated a valuing of reason as the highest human function.  The inherent problems with this view simply didn’t surface until some later thinkers began to see human existence and the potential of our reason as having a finite character with limitations.  Only then was the grandiose dream of having a complete knowledge of the universe and of our existence finally shattered.  Once this goal was seen as unattainable, we were left alone on an endless road of knowledge with no prospects for any kind of satisfying conclusion.

We can certainly appreciate the value of theoretical knowledge, and even develop a passion for discovering it, but we mustn’t lose sight of the embodied, subjective qualities of our human nature; nor can we successfully argue any longer that the latter can be dismissed due to a goal of reaching some absolute state of knowledge or being.  That goal is not within our reach, and so trying to make arguments that rely on its possibility is nothing more than an exercise in futility.

So now we must ask ourselves a very important question:

“If man can no longer hold before his mind’s eye the prospect of the Great Chain of Being, a cosmos rationally ordered and accessible from top to bottom to reason, what goal can philosophers set themselves that can measure up to the greatness of that old Greek ideal of the bios theoretikos, the theoretical life, which has fashioned the destiny of Western man for millennia?”

I would argue that the most important goal for philosophy has been and always will be the quest for discovering as many moral facts as possible, such that we can attain eudaimonia as Aristotle advocated for.  But rather than holding a life of reason as the greatest good to aim for, we should simply aim for maximal life fulfillment and overall satisfaction with one’s life, and not presuppose what will accomplish that.  We need to use science and subjective experience to inform us of what makes us happiest and the most fulfilled (taking advantage of the psychological sciences), rather than making assumptions about what does this best and simply arguing from the armchair.

And because our happiness and life fulfillment are necessarily evaluated through subjectivity, we mustn’t make the same mistakes of positivism and simply discard our subjective experience.  Rather we should approach morality with the recognition that we are each an embodied subject with emotions, intuitions, and feelings that motivate our desires and our behavior.  But we should also ensure that we employ reason and rationality in our moral analysis, so that our irrational predispositions don’t lead us farther away from any objective moral truths waiting to be discovered.  I’m sympathetic to a quote of Kierkegaard’s that I mentioned at the beginning of this post:

“What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act.”

I agree with Kierkegaard here, in that moral imperatives are the most important topic in philosophy, and should be the most important driving force in one’s life, rather than simply a drive for knowledge for its own sake.  But of course, in order to get clear about what one must do, one first has to know a number of facts pertaining to how any moral imperatives are best accomplished, and what those moral imperatives ought to be (as well as which are most fundamental and basic).  I think the most basic axiomatic moral imperative within any moral system that is sufficiently motivating to follow is going to be that which maximizes one’s life fulfillment; that which maximizes one’s chance of achieving eudaimonia.  I can’t imagine any greater goal for humanity than that.

Here is the link for part 5 of this post series.

Irrational Man: An Analysis (Part 1, Chapter 3: “The Testimony of Modern Art”)


In the previous post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 2: The Encounter with Nothingness, where Barrett gives an overview of some of the historical contingencies that have catalyzed the advent of existentialism: namely, the decline of religion, the rational ordering of society through capitalism and industrialization, and the finitude found within science and mathematics.  In this post, I want to explore Part I, Chapter 3: The Testimony of Modern Art.  Let’s begin…

Ch. 3 – The Testimony of Modern Art

In this chapter, Barrett expands the scope of his analysis, tracing existentialism’s drives and effects in the content of modern art.  As he sees it, existentialist anxiety and discontent, along with the unsettling truths arising from our modern understanding of the world we live in, have heavily influenced, if not dominated, modern art.  Many find modern art to be, as he puts it:

“…too bare and bleak, too negative or nihilistic, too shocking or scandalous; it dishes out unpalatable truths.”

Perhaps it should come as no surprise that these kinds of qualities in much of modern art are but a product of existentialist angst, feelings of solitude, and an outright clash between traditional norms and narratives about human life and the views of those who have accepted much of what modernity has brought to light, however difficult and uncomfortable that acceptance is.

We might also be tempted to ask ourselves if modern art represents something more generally about our present state.  Barrett sheds some light on this question when he says:

“…Modern art thus begins, and sometimes ends, as a confession of spiritual poverty.  That is its greatness and its triumph, but also the needle it jabs into the Philistine’s sore spot, for the last thing he wants to be reminded of is his spiritual poverty.  In fact, his greatest poverty is not to know how impoverished he is, and so long as he mouths the empty ideals or religious phrases of the past he is but as tinkling brass.”

I can certainly see a lot of modern art as being an expression or manifestation of the spiritual poverty of our modern age.  It’s true that religion no longer serves the same stabilizing role for our society as it once did, nor can we deny that the knowledge we’ve gained since the Enlightenment has caused a compartmentalizing effect on our psyche with respect to reason and religious belief (with the latter being eliminated for many if the compartmentalization is insufficient to overcome any existing cognitive dissonance).  We can also honestly say that many in the modern world have lost a sense of meaning and purpose in their lives, and feel a loss of connection to their community or to the rest of humanity in general, largely as a result of the way society (and in turn, how each life within that society) has become structured.

But, as Barrett says, the fact that many people don’t realize just how impoverished they are is itself the greatest form of poverty in modernity.  And we could perhaps summarize this spiritual poverty as simply the lack of a well-rounded expression of one’s entire psyche.  It seems to me that this qualitative state is tied to another aspect of the overall process: in particular, our degree of critical self-reflection, which affects our vision of our own personal growth, our ethical development, and ultimately our ability to define meaning for our lives on our own individual terms.

One could describe a kind of trade-off that has occurred during humanity’s transition to modernity: we once had a more common religious structure that pervaded one’s entire life and which was shared by most everyone else living in pre-modern society, and this was replaced by a secular society that encouraged new forms of conformity aside from religion; and we once had a religious structure that allowed one to connect to some of the deeper layers of their inner self, and this was replaced with more of an industrialized, consumerist structure involving psychological externalization, which lent itself to the powers of conformity already present in the collective social sphere of our lives.

Since artistic expression serves as a kind of window into the predominating psychology of the people and artists living at any particular time, Barrett makes a very good point when he says:

“Even if existential philosophy had not been formulated, we would know from modern art that a new and radical conception of man was at work in this period.”

And within the modern art movement, we can see a kind of compensatory effect occurring where the externalization in modern society is countered with a vast supply of subjectivity including the creation of very unique and highly imaginative abstractions.  But, underneath or within many of these abstractions lies a fundamental perspective of modern humans living as a kind of stranger to the world, surrounded by an alien environment, with a yearning to feel a sense of belonging and familiarity.

We’ve seen similar changes in artistic expression within literature as well.  Whereas literature had historically been created under the assumption of a linear temporality operating within the bounds of a well-defined beginning, middle, and end, it was beginning to show more chaotic or unpredictable qualities in its temporal structure, less intuitive plot progressions, and in many cases leaving the reader with what appeared to be an open or unresolved ending, and even a feeling of discontent or shock.  This is what we’d expect to occur once we recall the Greek roots of Western civilization, which was ultimately based on a culture that believed the universe to have a logical structure, with a teleological, anthropomorphic and anthropocentric order of events that cohered into an intelligible whole.  Once this view of the universe changed to one that saw the world as less predictable and indifferent to human wants and needs, the resultant psychological changes coincided with a change in literary style and expression.

In all these cases, we can see that modern art has no clear-cut image of what it means to be human or what exactly a human being is, for the simple reason that it sees human beings as lacking any fixed essence or nature; it sees humans as transcending any pre-defined identity or mold.  Lacking any fixed essence, I think that modern conceptions of humanity entail a radical form of freedom to define ourselves if we choose to do so, even though this worthwhile goal is often difficult, uncomfortable, and a project that never really ends until we die.  Actually striving to make use of this freedom is needed now more than ever, given the level of conformity and the increasingly abstract ways of living that modern society foists upon us.

Another interesting quote of Barrett’s regards the relationship between modern art and conceptions of the meaningless:

“Modern art has discarded the traditional assumptions of rational form.  The modern artist sees man not as the rational animal, in the sense handed down to the West by the Greeks, but as something else.  Reality, too, reveals itself to the artist not as the Great Chain of Being, which the tradition of Western rationalism had declared intelligible down to its smallest link and in its totality, but as much more refractory: as opaque, dense, concrete, and in the end inexplicable.  At the limits of reason one comes face to face with the meaningless; and the artist today shows us the absurd, the inexplicable, the meaningless in our daily life.”

This is interesting, especially given Barrett’s previous claim (in chapter 1) about existentialism’s opposition to the positivist position that “…the whole surrounding area in which ordinary men live from day to day and have their dealings with other men is consigned to the outer darkness of the meaningless.”  Barrett’s more recent claim above, while not necessarily in contradiction with the previous claim, suggests (at the very least) an interesting nuance within existentialist thought.  It suggests that positivism wants to keep silent about the meaningless, whereas existentialism does not; but it also suggests that there’s some agreement between positivism’s claim of what is meaningless and that of existentialism.  Both supposedly contrary schools of thought make claims to what is meaningless either implicitly or explicitly, and both have some agreement as to what falls under the umbrella of the meaningless; it’s just that existentialism accepts and promulgates this meaninglessness as a fundamental part of our human existence whereas positivism more or less rejects this as not even worth talking about, let alone worth using to help construct one’s world view.

Barrett finishes this chapter with a brief reminder of the immense technological progress we’ve made in modern times and the massive externalization of our lives that accompanied this change.  But there is a growing disparity between this external power and our inner poverty; an irony that modern art wants to expose.  Tying this all together, he says:

“The bomb reveals the dreadful and total contingency of human existence.  Existentialism is the philosophy of the atomic age.”

And that pretty much says it all.  Life on this planet (eventually including our own species) owes its existence to the stars: the elements composing it were forged in stellar furnaces, and the sun remains its ultimate source of energy.  Now we live in an age where we’ve harnessed the very process that drives the sun itself (nuclear fusion); a power that may one day bring about the end of our own existence.  I find this situation to be far more ironic than the disparity between our inner and outer lives that Barrett points out, as we are on the brink of wiping ourselves out by the very mechanism that allowed us to exist in the first place.  Nothing could be a more poetic example of the contingency of our own existence.

In the next post in this series, I’ll explore Irrational Man, Part 2: The Sources of Existentialism in the Western Tradition, Chapter 4: Hebraism and Hellenism.

Irrational Man: An Analysis (Part 1, Chapter 2: “The Encounter with Nothingness”)


In the first post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 1: The Advent of Existentialism, where Barrett gives a brief description of what he believes existentialism to be, and the environment it evolved within.  In this post, I want to explore Part I, Chapter 2: The Encounter with Nothingness.

Ch. 2 – The Encounter With Nothingness

Barrett talks about the critical need for self-analysis, despite the fact that many feel this task has already been accomplished and that we’ve carried out this analysis exhaustively.  But Barrett sees this attitude as contemporary society’s running away from the facts of its own ignorance.  Modern humankind, it seems to Barrett, has even more need to question its identity, for we seem to understand ourselves even less than when we first began asking who we are as a species.

1. The Decline of Religion

Ever since the end of the Middle Ages, religion as a whole has been on the decline.  This decrease in religiosity (particularly in Western civilization) became most prominent during the Enlightenment.  As science began to take off, the mechanistic structure and qualities of the universe (i.e. its laws of nature) began to reveal themselves in more and more detail.  This in turn led to a replacement of a large number of superstitious and supernatural religious beliefs about the causes for various phenomena with scientific explanations that could be empirically verified and tested.  Throughout this process, as theological explanations became replaced more and more with naturalistic explanations, the presumed role of God and the Church began to evaporate.  Thus, at the purely intellectual level, we underwent a significant change in terms of how we viewed the world and subsequently how we viewed the nature of human beings and our place in the world.

But, as Barrett points out:

“The waning of religion is a much more concrete and complex fact than a mere change in conscious outlook; it penetrates the deepest strata of man’s total psychic life…Religion to medieval man was not so much a theological system as a solid psychological matrix surrounding the individual’s life from birth to death, sanctifying and enclosing all its ordinary and extraordinary occasions in sacrament and ritual.”

We can see here how the role of religion has changed to some degree from medieval times to the present day.  Rather than simply being a set of beliefs that identified a person with a particular group and which had soteriological, metaphysical, and ethical significance to the believer (as it is more so in modern times), it used to be a complete system or a total solution for how one was to live their life.  And it also provided a means of psychological stability and coherence by providing a ready-made narrative of the essence of man; a sense of familiarity and a pre-defined purpose and structure that didn’t have to be constructed from scratch by the individual.

While the loss of the Church involved losing an entire system of dogmatic teachings, symbols, and various rites and sacraments, the most important loss according to Barrett was the loss of a concrete connection to a transcendent realm of being.  We were now set free such that we had to grapple with the world on our own, with all its precariousness, and to deal head-on with the brute facts of our own existence.

What I find most interesting in this chapter is when Barrett says:

“The rationalism of the medieval philosophers was contained by the mysteries of faith and dogma, which were altogether beyond the grasp of human reason, but were nevertheless powerfully real and meaningful to man as symbols that kept the vital circuit open between reason and emotion, between the rational and non-rational in the human psyche.”

And herein lies the crux of the matter; for Barrett believes that religion’s greatest function historically was its serving as a bridge between the rational and non-rational elements of our psychology, and also its serving as a barrier that limited the effective reach and power of our rationality over the rest of our psyches and our view of the world.  I would go even further to suggest that it may have allowed our emotional expression to more harmoniously co-exist and work with our reason instead of primarily being at odds with it.

I agree with Barrett’s point here in that religion often promulgates ideas and practices that appeal to many of our emotional dispositions and intuitions, thus allowing people to express certain emotional states and to maintain comforting intuitions that might otherwise be hindered or subjugated by reason and rationality.  And it has also provided a path for reason to connect to the unreasonable to some degree; as a means of minimizing the need to compartmentalize rationality from the emotional or irrational influences on a person’s belief systems.  By granting people an opportunity to combine reason and emotion in some way, where this reason could be used to try and make some sense of emotion and to give it some kind of validation without having to reject reason completely, religion has been effective (historically anyway) in helping people to avoid the discomfort of rejecting beliefs that they know to be reasonable (many of these beliefs at least) while also being able to avoid the discomfort of inadequate emotional/non-rational expression.

Once religion began to go by the wayside, due in large part to the accumulated knowledge acquired through reason and scientific progress, it became increasingly difficult to square the evidence and arguments that were becoming more widely known with many of the claims that religion and the Church had been propagating for centuries.  Along with this growing invalidation or loss of credibility came the increased need to compartmentalize reason and rationality from emotionally and irrationally-derived beliefs and experiences.  And this difficulty led to a decline in religiosity for many, which was accompanied by the loss of the emotional and non-rational expression that religion had once offered the masses.  Once reason and rationality expanded beyond a certain threshold, they effectively popped the religious bubble that had previously contained them, causing many to begin to feel homeless, out of place, and in many ways incomplete in the new world they now found themselves living in.

2. The Rational Ordering of Society

The organization of our lives has been, historically at least, a relatively organic process with a kind of self-guiding, pragmatic, and intuitive structure and evolution.  But once we approached the modern age, as Barrett points out, we saw a drastic shift toward an increasingly rational form of organization, where efficiency and a sort of technical precision began to dominate the overall direction of society and the lives of each individual.  The rise of capitalism was a part of this cultural evolutionary process (as was, I would argue, the Industrial Revolution), and only further enhanced the power and influence of reason and rationality over our day-to-day lives.

The collectivization and distribution of labor involved in the mass production of commodities and various products had taken us entirely out of our agrarian and hunter-gatherer roots.  We no longer lived off of the land so to speak, and were no longer ensconced within the kinds of natural scenery while performing the types of day-to-day tasks that our species had adapted to over its long-term evolutionary history.  And with this collectivization, we also lost a large component of our individuality; a component that is fairly important in human psychology.

Barrett comments on how we’ve accepted modern society as normal, relatively unaware of our ancestral roots and our previous way of life:

“We are so used to the fact that we forget it or fail to perceive that the man of the present day lives on a level of abstraction altogether beyond the man of the past.”

And he goes on to talk about how our ratcheting forward in terms of technological progress and mechanistic societal re-structuring is what gives us our incredible power over our environment, but at the cost of leaving us rootless and without any concrete sense of feeling, at a time when it’s needed more than ever.

Perhaps a more interesting point he makes is with respect to how our increased mechanization and collectivization has changed a fundamental part of how our psyche and identity operate:

“Not only can the material wants of the masses be satisfied to a degree greater than ever before, but technology is fertile enough to generate new wants that it can also satisfy…All of this makes for an extraordinary externalization of life in our time. “

And it is this externalization of our identity and psychology, manifested in ever-growing and ever-changing sets of material objects and information flow, that is interesting to ponder over.  It reminds me somewhat of Richard Dawkins’ concept of an extended phenotype, where the effects of an organism’s genes aren’t merely limited to the organism’s body, but rather they extend into how the organism structures its environment. While this term is technically limited to behaviors that have a direct bearing on the organism’s survival, I prefer to think of this extended phenotype as encompassing everything the organism creates and the totality of its behaviors.

The reason I mention this concept is because I think it makes for a useful analogy here.  For in the earlier evolution of organisms on earth, the genes’ effects or the resulting phenotypes were primarily manifested as the particular body and bodily behavior of the organism, and as organisms became more complex (particularly those that evolved brains and a nervous system), that phenotype began to extend itself into the abilities of an organism to make external structures out of raw materials found in its environment.  And more and more genetic resources were allotted to the organism’s brain which made this capacity for environmental manipulation possible.  As this change occurred, the previous boundaries that defined the organism vanished as the external constructions effectively became an extension of the organism’s body.  But with this new capacity came a loss of intimacy in the sense that the organism wasn’t connected to these external structures in the same way it was connected to its own feelings and internal bodily states; and these external structures also lacked the privacy and hidden qualities inherent in an organism’s thoughts, feelings, and overall subjective experience.

Likewise, as we’ve evolved culturally, eventually gaining the ability to construct and mass-produce a plethora of new material goods, we began to dedicate a larger proportion of our attention on these different external objects, wanting more and more of them well past what we needed for survival.  And we began to invest or incorporate more of ourselves, including our knowledge and information, in these externalities, forcing us to compensate by investing less and less in our internal, private states and locally stored knowledge.  Now it would be impractical if not impossible for an organism to perform increasingly complex behaviors and to continuously increase its ability to manipulate its own environment without this kind of trade-off occurring in terms of its identity, and how it distributes its limited psychological resources.

And herein lies the source of our seemingly fractured psyche: the natural selection of curiosity, knowledge accumulation, and behavioral complexity for survival purposes has become co-opted for just about any purpose imaginable, since the hardware and schema required for the former has a capacity that transcends its evolutionary purpose and that transcends the finite boundaries, the guiding constraints, and the essential structure of the lives we once had in our evolutionary past.  Now we’ve moved beyond what used to be a kind of essential quality and highly predictable trajectory of our lives and of our species, and we’ve moved into the unknown; from the realm of the finite and the familiar to the seemingly infinite realm of the unknown.

A big part of this externalization has manifested itself in the new ways we acquire, store, and share information, such as with the advent of mass media.  As Barrett puts it:

“…journalism enables people to deal with life more and more at second hand.  Information usually consists of half-truths, and “knowledgability” becomes a substitute for real knowledge.  Moreover, popular journalism has by now extended its operations into what were previously considered the strongholds of culture-religion, art, philosophy…It becomes more and more difficult to distinguish the secondhand from the real thing, until most people end by forgetting there is such a distinction.”

I think this ties well into what Barrett mentioned previously when he talked about how modern civilization is built on increasing levels of abstraction.  The very information we’re absorbing, in order to make sense of and deal with a large aspect of our contemporary world, is second hand at best.  The information we rely on has become increasingly abstracted, manipulated, reinterpreted, and distorted.  The origin of so much of this information is now at least one level away from our immediate experience, giving it a quality that is disconnected, less important, and far less real than it otherwise would be.  But we often forget that there’s any epistemic difference between our first-hand lived experience and the information that arises from our mass media.

To add to Barrett’s previous description of existentialism as a reaction against positivism, he also mentions Karl Jaspers’ views of existentialism, which he described as:

“…a struggle to awaken in the individual the possibilities of an authentic and genuine life, in the face of the great modern drift toward a standardized mass society.”

Though I concur with Jaspers’ claim that modernity has involved a shift toward a standardized mass society in a number of ways, I also think that it has provided the means for many more ways of being unique, many more possible talents and interests for one to explore, and many more kinds of goals to choose from for one’s life project(s).  Collectivization and distribution of labor and the technology that has precipitated from it have allowed many to avoid spending all day hunting and gathering food, making or cleaning their clothing, and other tasks that had previously consumed most of one’s time.

Now many people (in the industrialized world at least) have the ability to accumulate enough free time to explore many other types of experiences, including reading and writing, exploring aspects of our existence with highly-focused introspective effort (as in philosophy), creating or enjoying vast quantities of music and other forms of art, listening to and telling stories, playing any number of games, and thousands of other activities.  And even though some of these activities have been around for millennia, many of them have not (or there was little time for them), and of those that have been around the longest, there were still far fewer choices than what we have on offer today.  So we mustn’t forget that many people develop a life-long passion for at least some of these experiences that would never have been made possible without our modern society.

The issue I think lies in the balance or imbalance between standardization and collectivization on the one hand (such that we reap the benefits of more free time and more recreational choices), and opportunities for individualistic expression on the other.  And of the opportunities that exist for individualistic expression, there is still the need to track the psychological consequences that result from them so we can pick more psychologically fulfilling choices; so that we can pick choices that better allow us to keep open that channel between reason and emotion and between the rational and the non-rational/irrational that religion once provided, as Barrett mentioned earlier.

We also have to accept the fact that the findings in science have largely dehumanized or inhibited the anthropomorphization of nature, instead showing us that the universe is indifferent to us and to our goals; that humans and life in general are more of an aberration than anything else within a vast cosmos that is inhospitable to life.  Only after acknowledging the situation we’re in can we fully appreciate the consequences that modernity has had on upending the comforting structure that religion once gave to humans throughout their lives.  As Barrett tells us:

“Science stripped nature of its human forms and presented man with a universe that was neutral, alien, in its vastness and force, to his human purposes.  Religion, before this phase set in, had been a structure that encompassed man’s life, providing him with a system of images and symbols by which he could express his own aspirations toward psychic wholeness.  With the loss of this containing framework man became not only a dispossessed but a fragmentary being.”

Although we can’t unlearn the knowledge that has caused religion to decline including that which has had a bearing on the questions dealing with our ultimate meaning and purpose, we can certainly find new ways of filling the psychological void felt by many as a result of this decline.  The modern world has many potential opportunities for psychologically fulfilling projects in life, and these opportunities need to be more thoroughly explored.  But, existentialist thought rightly reminds us of how the fruits of our rational and enlightened philosophy have been less capable of providing as satisfying an answer to the question “What are human beings?” as religion once gave.  Along with the fruits of the Enlightenment came a lack of consolation, and a number of painful truths pertaining to the temporal and contingent nature of our existence, previously thought to possess both an eternal and necessary character.  Overall, this cultural change and accumulation of knowledge effectively forced humanity out of its comfort zone.  Barrett described the situation quite well when he said:

“In the end, he [modern man] sees each man as solitary and unsheltered before his own death.  Admittedly, these are painful truths, but the most basic things are always learned with pain, since our inertia and complacent love of comfort prevent us from learning them until they are forced upon us.”

Modern humanity then, has become alienated from God, from nature, and from the complex social machinery that produces the goods and services that he both wants and needs.  Even worse yet however, is the alienation from one’s own self that has occurred as humans have found themselves living in a society that expects each of them to perform some specific function in life (most often not of their choosing), and this leads to society effectively identifying each person as this function, forgetting or ignoring the real person buried underneath.

3. Science and Finitude

In this last section of chapter two, Barrett discusses some of the ultimate limitations in our use of reason and in the scope of knowledge we can obtain from our scientific and mathematical methods of inquiry.  Reason itself is described as the product of a creature “whose psychic roots still extend downward into the primeval soil,” and is thus a creation from an animal that is still intimately connected to an irrational foundation; an animal still possessing a set of instincts arising from its evolutionary origins.

We see this core presence of irrationality in our day-to-day lives whenever we have trouble trying to employ reason, such as when it conflicts with our emotions and intuitions.  And Barrett brings up the optimism in the confirmed rationalist and their belief that they may still be able to one day overcome all of these obstacles of irrationality by simply employing reason in a more clever way than before.  Clearly Barrett doesn’t share this optimism of the rationalist and he tries to support his pessimism by pointing out a few limitations of reason as suggested within the work of Immanuel Kant, modern physics, and mathematics.

In Kant’s Critique of Pure Reason, he lays out a substantial number of claims and concepts relating to metaphysics and epistemology, where he discusses the limitations of both reason and the senses.  Among other things, he claims that our reason is limited by certain a priori intuitional forms or predispositions about space and time (for example) that allow us to cognize any thing at all.  And so if there are any “things in themselves”, that is, real objects underlying whatever appears to us in the external world, where these objects have qualities and attributes that are independent of our experience, then we can never know anything of substance about these underlying features.  We can never see an object or understand it in any way without incorporating a large number of intuitional forms and assumptions in order to create that experience at all; we can never see the world without seeing it through the limited lens of our minds and our body’s sensory machinery.

For Kant, this also means that we may often use reason erroneously to make unjustified claims of knowledge pertaining to the transcendent, particularly within metaphysics and theology.  When reason is applied to ideas that can’t be confirmed through sensory experience, or that lie outside of our realm of possible experience (such as the idea that cause and effect laws govern every interaction in the universe, something we can never know through experience), it leads to knowledge claims that it can’t justify.  Another limitation of reason, according to Kant, is that it operates through an a priori assumption of unification in our experience, and so the categories and concepts that we infer to exist based on reason are limited by this underlying unifying principle.  Science has added to the rigidity and specificity of this unification by going beyond what we unify through unmodified experience, that is, experience without the aid of instruments such as telescopes or microscopes (e.g. seeing the sun “rise” and “set” every day, where the data gathered with such instruments later showed that the earth rotates on its axis rather than the sun revolving around the earth).  Nevertheless, unification is the ultimate goal of reason whether applied with or without a strict scientific method.

Then within physics, we find another limitation in terms of our possible understanding of the world.  Heisenberg’s Uncertainty Principle showed us that we are unable to know complementary parameters pertaining to a particle with an arbitrarily high level of precision.  We eventually hit a limit where, for example, if we know the position of a particle (such as an electron) with a high degree of accuracy at some particular time, then we’re unable to know the momentum of that particle with the same accuracy.  The more we know about one complementary parameter, the less we know about the other.  Barrett describes this discovery in physics as showing that nature may be irrational and chaotic at its most fundamental level and then he says:

“What is remarkable is that here, at the very farthest reaches of precise experimentation, in the most rigorous of the natural sciences, the ordinary and banal fact of our human limitations emerges.”

This is indeed interesting because for a long while many rationalists and positivists held that our potential knowledge was unlimited (or finite, yet complete) and that science was the means to gain this open-ended or complete knowledge of the universe.  Then quantum physics delivered a major blow to this assumption, showing us instead that our knowledge is inherently limited based on how the universe is causally structured.  However, it’s worth pointing out that this is not a human limitation per se, but a limitation of any conscious system acquiring knowledge in our universe.  It wouldn’t matter if humans had different brains, better technology, or used artificial intelligence to perform the computations or measurements.  There is simply a limit on what can ever be measured; a limit on what can ever be known about the state of any system that is being studied.
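
For readers who want the limit Barrett alludes to in a more concrete form, the standard position-momentum uncertainty relation (a textbook result of quantum mechanics, not something Barrett spells out in the text) can be written as:

Δx · Δp ≥ ħ/2

where Δx is the uncertainty in a particle’s position, Δp is the uncertainty in its momentum, and ħ is the reduced Planck constant.  Because the product of the two uncertainties can never fall below ħ/2, shrinking one necessarily inflates the other; no improvement in instrumentation or computation can get around it, which is precisely why this limit belongs to the causal structure of the universe rather than to human frailty.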

Gödel’s Incompleteness Theorems likewise showed that there are inherent limitations in any sufficiently rich mathematical formulation of number theory: no matter how large or seemingly complete its set of axioms is, if the formulation is consistent, then there will be statements that can’t be proven or disproven within it.  Moreover, the consistency of any such formulation of number theory can’t be proven by the formulation itself; rather, it depends on assumptions that lie outside of that formulation.  However, once again, contrary to what Barrett claims, I don’t think this should be taken to be a human limitation of reason, but rather a limitation of mathematics generally.
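
To state the result a little more precisely (this gloss is mine, not Barrett’s): for any consistent, effectively axiomatized theory T strong enough to express basic arithmetic, there is a sentence G such that neither G nor its negation is provable in T; and, by the second incompleteness theorem, such a theory T also cannot prove its own consistency.  The limitation, as noted above, attaches to formal systems of this kind in general rather than to human reasoners in particular.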

Regardless, I concede Barrett’s overarching point that, in all of these cases (Kant, physics, and mathematics), we are still running into scenarios where we are unable to do something or to know something that we may have previously thought we could do or thought we could know, at least in principle.  And so these discoveries did run counter to the beliefs of many that thought that humans were inherently unstoppable in these endeavors of knowledge accumulation and in our ability to create technology capable of solving any problem whatsoever.  We can’t do this.  We are finite creatures with finite brains, but perhaps more importantly, our capacities are also limited by what knowledge is theoretically available to any conscious system trying to better understand the universe.  The universe is inherently unknowable in at least some respects, which means it is unpredictable in at least some respects.  And unpredictability doesn’t sit well with humans intent on eliminating uncertainty, eliminating the unknown, and trying to have a coherent narrative to explain everything encountered throughout one’s existence.

In the next post in this series, I’ll explore Irrational Man, Part 1, Chapter 3: The Testimony of Modern Art.

Irrational Man: An Analysis (Part 1, Chapter 1: “The Advent of Existentialism”)


William Barrett’s Irrational Man is a nice exposition on existential philosophy which begins by exploring the state of modern humanity and philosophy and tracing its roots from ancient Greece, its development through the Medieval period and the Enlightenment, all the way to the mid-twentieth century.  He explores what he believes to be the primary cultural sources of existentialism and then surveys the contributions of perhaps the four most prominent existential philosophers: namely, Kierkegaard, Nietzsche, Heidegger, and Sartre.  I’d like to explore Barrett’s book here in more detail and I’m going to break this down into an analysis of every section and chapter, with each chapter analyzed within a separate blog post.  Below is the first post of this series; Part 1, Chapter 1: The Advent of Existentialism.

Part I: “The Present Age”

Ch. 1 – The Advent of Existentialism

Early on, Barrett gives a brief description of positivism, the philosophical theory which holds not only that science is what distinguishes our post-Enlightenment civilization from all others, but also that science should be the ultimate ruler of human life.  Barrett remarks that science has never held this role before, nor could it, given the details of our psychology as human beings.  It’s true that science has never held this role, and it’s also true that the way we generally use science is ill-suited to guiding our day-to-day lives so as to meet all of our psychological needs.

However, I think it would be mistaken to say that the scientific method, and empirical methods generally, can’t be used (even in principle) to determine (or to help determine) the choices one ought to make in one’s life.  While science as an enterprise isn’t generally used in this way (we tend to use it to solve more specific technical challenges and to determine well-defined mechanisms underlying various phenomena), we shouldn’t simply assume that the knowledge we’re able to gain from it will never include information pertaining to our decision-making, our preferences and values, and our ultimate goals in life.  On top of this, if one wanted to know whether or not a life “ruled by” science could meet all of one’s psychological needs, one could only test this hypothesis by employing (at the very least) an informal version of the scientific method.  So in some rudimentary sense, science and its methods (of testing hypotheses and building upon the results of such testing) are unavoidable as they pervade our lives and are inseparable from any falsifiable inquiry that arises therein.

On the flip-side, we shouldn’t assume that science on its own is capable of accomplishing anything at all, let alone meeting all of our needs as a species.  What I mean by this, and one thing that I’m sure Barrett would have agreed with, is that the use of science itself, and the desire to use it for some particular aim, first requires an underlying set of philosophical views: some kind of epistemology, an ethics, and so on.  This also means that science, as a concept and as an instrument for gaining knowledge, shouldn’t be criticized if it leads to undesirable consequences; rather, it is the philosophical views of the scientist(s) undertaking some research project, and/or the philosophical views of the people who use that knowledge once it has been discovered, that should be criticized accordingly.

Barrett goes on to say:

“Positivist man is a curious creature who dwells in the tiny island of light composed of what he finds scientifically ‘meaningful,’ while the whole surrounding area in which ordinary men live from day to day and have their dealings with other men is consigned to the outer darkness of the ‘meaningless.’”

And I couldn’t agree more that this kind of positivist thinking is flawed and incomplete, since any complete account of our reality needs to incorporate introspection, intuition, and raw experience.  The German theoretical physicist Werner Heisenberg echoed similar sentiments later in his life, when he said:

“The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can any one conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear we would probably be left with completely uninteresting and trivial tautologies.”

In Heisenberg’s quote here we can see the relevance of thinkers like Wittgenstein and Nietzsche, and how they explored different conceptions of meaning as well as the importance of what Nietzsche called perspectivism: striving to look at the world as a whole, or at any particular phenomenon, from as many viewpoints as possible without becoming trapped in the constraints of our language and culture.  In order to avoid dogmatism, we must be willing to at least consider different ontologies and different ways of looking at our own existence, our place in the world, and what is most important to us.  And although science shouldn’t be excluded from our sources of meaning or from our methods of determining what is and what is not meaningful, people shouldn’t expect these concepts to be restricted to the domain of science.

So what is existentialism then, according to Barrett?  He sees it as a philosophical movement (and a kind of revolt) against the oversimplification of man (human beings) assumed within positivism.  It seeks to replace this fractured view of man by gathering all the facets of the human condition and assembling them into one coherent picture.  And it does so even when this requires acknowledging the darker and more questionable parts of our nature and existence; by exploring and accepting the uglier side of humanity that many in the Enlightenment tried to discount and leave by the wayside.

This post-Enlightenment view of man, which pictured man as inherently rational, went largely unchallenged for more than a hundred years (until Kierkegaard), and aside from Kierkegaard’s works which Barrett explores, I think we could also credit Charles Darwin’s On the Origin of Species and The Descent of Man with firmly dispelling any lingering doubts about our animalistic and irrational origins.  Once it became apparent that human beings were the distant cousins of other primates, and the more distant cousins of fish and reptiles and so on, it became that much harder to distance ourselves from the irrationality that pervades the rest of the animal kingdom.  And so it became harder to deny that we still had some level of irrationality at the core of our being, even if it was accompanied by a capacity for reason and rationality.

In the next post in this series, I’ll explore Irrational Man, Part 1, Chapter 2: The Encounter with Nothingness.

It’s Time For Some Philosophical Investigations


Ludwig Wittgenstein’s Philosophical Investigations is a nice piece of work in which he attempts to explain his views on language and the consequences of those views for various subjects like logic, semantics, cognition, and psychology.  I’ve mentioned some of his views very briefly in a couple of earlier posts, but I wanted to delve into his work in a little more depth here and comment on what strikes me as most interesting.  Lately, I’ve been looking back at some of the books I’ve read from various philosophers and have been wanting to revisit them so I can explore them in more detail and share how they connect to some of my own thoughts.  Alright…let’s begin.

Language, Meaning, & Their Probabilistic Attributes

He opens his Philosophical Investigations with a quote from St. Augustine’s Confessions that describes how a person learns a language.  St. Augustine believed that this process involved simply learning the names of objects (for example, by someone else pointing to the objects that are so named) and then stringing them together into sentences.  Wittgenstein points out that this is true to some trivial degree, but it overlooks a much more fundamental relationship between language and the world.  For Wittgenstein, the meaning of a word cannot simply be attached to an object the way a name can.  The meaning of a word or concept has much more of a fuzzy boundary, as it depends on a breadth of context and associations with other concepts.  He analogizes this plurality in the meanings of words with the resemblance between members of a family.  While there may be some resemblance between different uses of a word or concept, we aren’t able to formulate a strict definition that fully captures this resemblance.

One problem then, especially within philosophy, is that many people assume that the meaning of a word or concept is fixed with sharp boundaries (just like the fixed structure of words themselves).  Wittgenstein wants to disabuse people of this false notion (much as Nietzsche tried to do before him) so that they can stop misusing language, as he believed that this misuse was the cause of many (if not all) of the major problems that had cropped up in philosophy over the centuries, particularly in metaphysics.  Since meaning is actually somewhat fluid and can’t be accounted for by any fixed structure, Wittgenstein thinks that any meaning we can attach to these words is ultimately going to be determined by how those words are used.  Since he ties meaning to use, and since this use is something occurring in our social forms of life, it has an inextricably external character.  Thus, the only way to determine whether someone else has a particular understanding of a word or concept is through their behavior, in response to or in association with the use of the word(s) in question.  This is especially important in the case of ambiguous sentences, which Wittgenstein explores to some degree.

Probabilistic Shared Understanding

Some of what Wittgenstein is trying to point out here are what I like to refer to as the inherently probabilistic attributes of language.  And it seems to me to be probabilistic for a few different reasons, beyond what Wittgenstein seems to address.  First, there is no guarantee that what one person means by a word or concept exactly matches the meaning from another person’s point of view, but at the very least there is almost always going to be some amount of semantic overlap (possibly 100% in some cases) between the two individuals’ intended meanings, and so there is going to be some probability that the speaker and the listener do in fact share a complete understanding.  It seems reasonable to argue that simpler concepts will have a higher probability of complete semantic overlap, whereas more complex concepts are more likely to leave a gap in that shared understanding.  And I think this is true even if we can’t actually calculate what any of these probabilities are.
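
As a toy illustration of this idea (my own sketch, not anything Wittgenstein proposes, and with made-up feature sets), we could represent each person’s concept as a set of associated features and treat the overlap between the two sets as a crude proxy for the probability of a shared understanding:

    # Toy model: each speaker's concept is a set of associated features, and the
    # Jaccard index of the two sets stands in for the probability that the
    # speaker and listener share an understanding of the word.
    def semantic_overlap(features_a, features_b):
        shared = features_a & features_b
        combined = features_a | features_b
        return len(shared) / len(combined) if combined else 1.0

    # A simple concept tends toward complete overlap...
    four_a = {"quantity", "countable", "one more than three"}
    four_b = {"quantity", "countable", "one more than three"}

    # ...while a complex concept is more likely to leave a gap in understanding.
    love_a = {"attachment", "care", "desire", "loyalty"}
    love_b = {"attachment", "care", "sacrifice", "passion", "exclusivity"}

    print(semantic_overlap(four_a, four_b))  # 1.0   -> near-certain shared understanding
    print(semantic_overlap(love_a, love_b))  # ~0.29 -> substantial room for misunderstanding

Real concepts obviously aren’t neat feature sets, but the sketch captures the asymmetry argued for above: the simpler and more constrained the concept, the higher the probability of complete overlap.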

Now my use of the word meaning here differs from Wittgenstein’s because I am referring to something that is not exclusively shared by all parties involved and I am pointing to something that is internal (a subjective understanding of a word) rather than external (the shared use of a word).  But I think this move is necessary if we are to capture all of the attributes that people explicitly or implicitly refer to with a concept like meaning.  It seems better to compromise with Wittgenstein’s thinking and refer to the meaning of a word as a form of understanding that is intimately connected with its use, but which involves elements that are not exclusively external.

We can justify this version of meaning through an example.  If I help teach you how to ride a bike and explain that this activity is called biking or to bike, then we can use Wittgenstein’s conception of meaning and it will likely account for our shared understanding, and so I have no qualms about that, and I’d agree with Wittgenstein that this is perhaps the most important function of language.  But it may be the case that you have an understanding that biking is an activity that can only happen on Tuesdays, because that happened to be the day that I helped teach you how to ride a bike.  Though I never intended for you to understand biking in this way, there was no immediate way for me to infer that you had this misunderstanding on the day I was teaching you this.  I could only learn of this fact if you explicitly explained to me your understanding of the term with enough detail, or if I had asked you additional questions like whether or not you’d like to bike on a Wednesday (for example), with you answering “I don’t know how to do that as that doesn’t make any sense to me.”  Wittgenstein doesn’t account for this gap in understanding in his conception of meaning and I think this makes for a far less useful conception.

Now I think that Wittgenstein is still right in the sense that the only way to determine someone else’s understanding (or lack thereof) of a word is through their behavior, but due to chance as well as where our attention is directed at any given moment, we may never see the right kinds of behavior to rule out any number of possible misunderstandings, and so we’re apt to just assume that these misunderstandings don’t exist because language hides them to varying degrees.  But they can and in some cases do exist, and this is why I prefer a conception of meaning that takes these misunderstandings into account.  So I think it’s useful to see language as probabilistic in the sense that there is some probability of a complete shared understanding underlying the use of a word, and thus there is a converse probability of some degree of misunderstanding.

Language & Meaning as Probabilistic Associations Between Causal Relations

A second way in which language is probabilistic is due to the fact that the unique meanings associated with any particular use of a word or concept, as understood by an individual, are derived from probabilistic associations between various inferred causal relations.  And I believe that this is the underlying cause of most of the problems that Wittgenstein was trying to address in this book.  He may not have been thinking about the problem in this way, but it can account for the fuzzy boundary problem associated with the task of trying to define the meaning of words, since this probabilistic structure underlies our experiences, our understanding of the world, and our use of language such that it can’t be represented by static, sharp definitions.  When a person is learning a concept, like redness, they make a number of observations, infer what is common to all of those experiences, and then extract a particular subset of what is common and associate it with a word like red or redness (as opposed to another commonality like objectness, or roundness, or what-have-you).  But in most cases, separate experiences of redness are going to be different instantiations of redness with different hues, textures, shapes, etc., which means that redness gets associated with a large range of different qualia.

If you come across a new qualia that seems to more closely match previous experiences associated with redness rather than orangeness (for example), I would argue that this is because the brain has assigned a higher probability to that qualia being an instance of redness as opposed to, say, orangeness.  And the brain may very well test a hypothesis of the qualia matching the concept of redness versus the concept of orangeness, and depending on your previous experiences of both, and the present context of the experience, your brain may assign a higher probability of orangeness instead.  Perhaps if a red-orange colored object is mixed in with a bunch of unambiguously orange-colored objects, it will be perceived as a shade of orange (to match it with the rest of the set), but if the case were reversed and it were mixed in with a bunch of unambiguously red-colored objects, it will be perceived as a shade of red instead.
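
One way to sketch this (a deliberately simplified model of my own, not a claim about how the brain literally computes, and with arbitrary numbers) is as a Bayesian choice between the hypotheses “this is red” and “this is orange,” where the surrounding objects set the prior and the measured hue sets the likelihood:

    import math

    def gaussian(x, mean, sd):
        # Likelihood of observing hue x under a color category centered at `mean`.
        return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

    def posterior_red(hue, prior_red, red_mean=10.0, orange_mean=30.0, sd=10.0):
        # Posterior probability that an ambiguous hue is perceived as red, given a
        # context-dependent prior (e.g. what the neighboring objects look like).
        p_red = prior_red * gaussian(hue, red_mean, sd)
        p_orange = (1 - prior_red) * gaussian(hue, orange_mean, sd)
        return p_red / (p_red + p_orange)

    ambiguous_hue = 20.0  # midway between the red and orange category centers

    # The same sensory input, judged in two different contexts:
    print(posterior_red(ambiguous_hue, prior_red=0.8))  # among red objects    -> leans red
    print(posterior_red(ambiguous_hue, prior_red=0.2))  # among orange objects -> leans orange

The incoming information never changes; only the context-driven prior does, and that alone is enough to flip which category wins.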

Since our perception of the world depends on context, the meanings we assign to words or concepts also depend on context, and not only in the sense of choosing a different use of a word (as Wittgenstein argues) in some language game or other, but also in the sense of perceiving the same incoming sensory information as conceptually different qualia (as in the aforementioned case of a red-orange colored object).  In that case, we weren’t intentionally using red or orange in a different way but rather were assigning one word or the other to the exact same sensory information (with respect to the red-orange object), depending on what else was happening in the scene that surrounded that subset of sensory information.  To me, this highlights how meaning can be fluid in multiple ways, some of that fluidity stemming from our conscious intentions and some of it from unintentional forces at play involving our prior expectations within some context, which directly modify our perceived experience.

This can also be seen through Wittgenstein’s example of what he calls a duckrabbit, an ambiguous image that can be perceived as a duck or a rabbit.  I’ve taken the liberty of inserting this image here along with a copy of it which has been rotated in order to more strongly invoke the perception of a rabbit.  The first image no doubt looks more like a duck and the second image, more like a rabbit.

Now Wittgenstein says that when one is looking at the duckrabbit and sees a rabbit, they aren’t interpreting the picture as a rabbit but are simply reporting what they see.  But in the case where a person sees a duck first and then later sees a rabbit, Wittgenstein isn’t sure what to make of this.  However, he claims to be sure that, whatever it is, it can’t be the case that the external world stays the same while an internal cognitive change takes place.  Wittgenstein was incorrect on this point, because the external world doesn’t change (in any relevant sense) whether we’re seeing the duck or seeing the rabbit.  Furthermore, he never demonstrates why two different perceptions would require a change in the external world.  The fact of the matter is, you can stare at this static picture and ask yourself to see a duck or to see a rabbit and it will affect your perception accordingly.  This is partially accomplished by mentally rotating the image in your imagination and seeing if that changes how well it matches one conception or the other, and since it matches both conceptions to a high degree, you can easily perceive it one way or the other.  Your brain is simply processing competing hypotheses to account for the incoming sensory information, and the top-down predictions of rabbitness or duckness (which you’ve acquired over past experiences) actually change the way you perceive it, with no change required in the external world (despite Wittgenstein’s assertion to the contrary).

To give yet another illustration of the probabilistic nature of language, just imagine the head of a bald man and ask yourself: if you were to add one hair at a time to this bald man’s head, at what point does he lose the property of baldness?  If hairs were slowly added at random, and you could simply say “Stop!  Now he’s no longer bald!” at some particular time, there’s no doubt in my mind that if this procedure were repeated (even if the hairs were added non-randomly), you would say “Stop!  Now he’s no longer bald!” at a different point in the transition.  Similarly, if you were looking at a light that was changing color from red to orange, and were asked to say when the color had changed to orange, you would pick a point in the transition that is within some margin of error, but it wouldn’t be an exact, repeatable point.  We could run this thought experiment with all sorts of concepts that are attached to words, like cat and dog: for example, use a computer graphics program to seamlessly morph a picture of a cat into a picture of a dog and ask at what point the cat “turned into” a dog.  The answer is going to be based on a probability of coincident features that you detect, which can vary over time.  Here’s a series of pictures showing a chimpanzee morphing into Bill Clinton to better illustrate this point:

At what point do we stop seeing a picture of a chimpanzee and start seeing a picture of something else?  When do we first see Bill Clinton?  What if I expanded this series of 15 images into a series of 1000 images so that the transition happened even more gradually?  It would be highly unlikely that you’d pick the exact same point in the transition two times in a row if the images weren’t numbered or arranged in a grid.  We can analogize this phenomenon with an ongoing problem in science, known as the species problem.  This problem can be described as the inherent difficulty of defining exactly what a species is, which is necessary if one wants to determine if and when one species evolves into another.  The problem occurs because the evolutionary processes giving rise to new species are relatively slow and continuous, whereas sorting those organisms into sharply defined categories eliminates that generational continuity and replaces it with discrete steps.

And we can see this effect in the series of images above, where each picture could represent some large number of generations in an evolutionary timeline, with each picture/organism looking roughly like the “parent” or “child” of the picture/organism adjacent to it.  Despite this continuity, if we look at the first picture and the last one, they look like pictures of distinct species.  So if we want to categorize the first and last picture as distinct species, then we create a problem when trying to account for every picture/organism that lies in between that transition.  Similarly, words take on the appearance of strict categorization (of meaning) when in actuality any underlying meaning attached to them is probabilistic and dynamic.  And as Wittgenstein pointed out, this makes it more appropriate to consider meaning as use, so that the probabilistic and dynamic attributes of meaning aren’t lost.
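
A minimal simulation (my own sketch of the point, with arbitrary numbers) shows why the “Stop!” judgments described above drift between trials: if each judgment is a noisy comparison against a fixed internal criterion, repeated passes through the same gradual transition will cross that criterion at different steps:

    import random

    def stop_point(n_steps=1000, criterion=0.5, noise=0.05, seed=None):
        # Walk through a gradual transition (0.0 -> 1.0) and report the first
        # step at which a noisy judgment crosses a fixed internal criterion.
        rng = random.Random(seed)
        for step in range(n_steps):
            evidence = step / (n_steps - 1) + rng.gauss(0, noise)
            if evidence > criterion:
                return step
        return n_steps - 1

    # Same transition, same criterion, different trials -> different stop points.
    print([stop_point(seed=s) for s in range(5)])

The category boundary isn’t stored anywhere as a sharp line; it’s reconstructed noisily on every pass, which is exactly the fuzziness the bald-man and cat-to-dog examples are pointing at.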

Now you may think you can get around this problem of fluidity or fuzzy boundaries with concepts that are simpler and more abstract, like the concept of a particular quantity (say, a quantity of four objects) or other concepts in mathematics.  But in order to learn these concepts in the first place, like quantity, and then associate particular instances of it with a word, like four, one had to be presented with a number of experiences and infer what was common to all of those experiences (as was the case with redness mentioned earlier).  And this inference (I would argue) involves a probabilistic process as well, it’s just that the resulting probability of our inferring particular experiences as an instance of four objects is incredibly high and therefore repeatable and relatively unambiguous.  Therefore that kind of inference is likely to be sustained no matter what the context, and it is likely to be shared by two individuals with 100% semantic overlap (i.e. it’s almost certain that what I mean by four is exactly what you mean by four even though this is almost certainly not the case for a concept like love or consciousness).  This makes mathematical concepts qualitatively different from other concepts (especially those that are more complex or that more closely map on to reality), but it doesn’t negate their having a probabilistic attribute or foundation.

Looking at the Big Picture

Though this discussion of language and meaning is not an exhaustive analysis of Wittgenstein’s Philosophical Investigations, it represents an analysis of the main theme present throughout.  His main point was to shed light on the disparity between how we often think of language and how we actually use it.  When we stray from the way language is actually used in our everyday lives, in one form of social life or another, and instead misuse it (as in much of philosophy), this creates all sorts of problems and unwarranted conclusions.  He also wants his readers to realize that the ultimate goal of philosophy should not be to construct metaphysical theories and deep explanations of everyday phenomena, since these are often born out of unwarranted generalizations and other assumptions stemming from how our grammar is structured.  Instead we ought to subdue these temptations to generalize and to be dogmatic, and use philosophy as a kind of therapeutic tool to keep our abstract thinking in check and to better understand ourselves and the world we live in.

Although I disagree with some of Wittgenstein’s claims about cognition (in terms of how intimately it is connected to the external world) and take some issue with his arguably less useful conception of meaning, he makes a lot of sense overall.  Wittgenstein was clearly hitting upon a real difference between the way actual causal relations in our experience are structured and how those relations are represented in language.  Personally, I think that work within philosophy is moving in the right direction if the contributions made therein lead us to make more successful predictions about the causal structure of the world.  And I believe this to be so even if this progress includes generalizations that may not be exactly right.  As long as we can structure our thinking to make more successful predictions, then we’re moving forward as far as I’m concerned.  In any case, I liked the book overall and thought that the interlocutory style gave the reader a nice break from the typical form seen in philosophical argumentation.  I highly recommend it!

Looking Beyond Good & Evil


Nietzsche’s Beyond Good and Evil serves as a thorough overview of his philosophy and is definitely one of his better works (although I think Thus Spoke Zarathustra is much more fun to read).  It’s definitely worth reading (if you haven’t already) to put a fresh perspective on many widely held philosophical assumptions that are often taken for granted.  While I’m not interested in all of the claims he makes (and he makes many, in nearly 300 aphorisms spread over nine chapters), there are at least several ideas that I think are worth analyzing.

One theme presented throughout the book is Nietzsche’s disdain for classical conceptions of truth and for any form of absolutism or dogmatism.  With regard to truth, or his study of the nature of truth (what we could call his alethiology), he subscribes to a view he coined perspectivism.  For Nietzsche, there are no such things as absolute truths; there are only different perspectives on reality and our understanding of it.  So he insists that we shouldn’t get stuck in the mud of dogmatism, and should instead try to view what is or isn’t true with an open mind and from as many points of view as possible.

Nietzsche’s view here is in part fueled by his belief that the universe is in a state of constant change, as well as his belief in the fixity of language.  Since the universe is in a state of constant change, language generally fails to capture this dynamic essence.  And since philosophy is inherently connected to the use of language, false inferences and dogmatic conclusions will often manifest from it.  Wittgenstein belonged to a similar school of thought (likely building off of Nietzsche’s contributions) where he claimed that most of the problems in philosophy had to do with the limitations of language, and therefore, that those philosophical problems could only be solved (if at all) through an in-depth evaluation of the properties of language and how they relate to its use.

This school of thought certainly has merit, given that language has a relatively fixed syntactic structure yet a dynamic or probabilistic semantic structure.  We need language to communicate our thoughts to one another, and so this requires some kind of consistency or stability in its syntactic structure.  But the meaning behind the words we use is something that is established through use, through context, and ultimately through associations between probabilistic conceptual structures and relatively stable or fixed visual and audible symbols (written or spoken words).  Since the semantic structure of language has fuzzy boundaries, and yet is attached to relatively fixed words and grammar, it produces the illusion of a reality that is essentially unchanging.  And this results in the kinds of philosophical problems and dogmatic thinking that Nietzsche warns us of.

It’s hard to disagree with Nietzsche’s view that dogmatism is to be avoided at all costs, and that absolute truths are generally specious at best (let alone dangerous), and philosophy owes a lot to Nietzsche for pointing out the need to reject this kind of thinking.  Nietzsche’s rejection of absolutism and dogmatism is made especially clear in his views on the common conceptions of God and morality.  He points out how these concepts have changed a lot over the centuries (where, for example, the meaning of good has undergone a complete reversal throughout its history), despite the fact that, throughout that time, the proponents of those particular views of God or morality believed that these concepts had never changed and would never change.

Nietzsche believes that all change is ultimately driven by a will to power, where this will is a sort of instinct for autonomy, which also consists of a desire to impose one’s will onto others.  So the meaning of these concepts (such as God or morality), and of countless others, has only changed because they’ve been re-appropriated by different or conflicting wills to power.  As such, he thinks that the meaning and interpretation of a concept illustrate the attributes of the particular will making use of that concept, rather than some absolute truth about reality.  I think this idea makes a lot of sense if we regard the will to power as encompassing not only the desires belonging to any individual (most especially their desire for autonomy), but also the inferences they’ve made about reality, its apparent causal relations, etc., which provide the content for those desires.  So any will to power is effectively colored by the world view held by the individual, and this world view, or set of inferred causal relations, includes one’s inferences pertaining to language and any meaning ascribed to the words and concepts that one uses.

Even more interesting to me is the distinction Nietzsche makes between what he calls an unrefined version of the will to power and a refined one.  The unrefined form takes the desire for autonomy and directs it outward (perhaps more instinctually) in order to dominate the will of others.  The refined version, on the other hand, takes this desire for autonomy and directs it inward toward oneself, manifesting itself as a kind of cruelty which causes a person to constantly struggle to make themselves stronger, more independent, and to develop a deeper character and perhaps even a deeper understanding of themselves and their view of the world.  Both manifestations of the will to power aim to maximize one’s power over as much as possible, but Nietzsche believes the refined version to be superior and ultimately a more effective form of power.

We can better illustrate this view by considering a person who tries to dominate others to gain power in part because they lack the ability to gain power over their own autonomy, and then compare this to a person who gains control over their own desires and autonomy and therefore doesn’t need to compensate for any inadequacy by dominating others.  A person who feels a need to dominate others is in effect dependent on those subordinates (and dependence implies a certain lack of power), but a person who increases their power over themselves gains more independence and thus a form of freedom that is otherwise not possible.

I like this internal/external distinction that Nietzsche makes, but I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, but with the added caveat that this autonomy serves to fulfill a desire for maximal causal power by harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try and change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try and change ourselves and our view of the world (up to and including changing our desires if we find them to be misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and then we could conceivably regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that our view of the world, as well as our actions and desires, can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could say that our will to power depends on our predictions of the world and its many apparent causal relations.  We could equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires).  Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if one subjugates the other’s will to power with one’s own.  And one can also increase the success of one’s predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing one’s desires and actions as well.

Another thing to consider about a desire for autonomy is that it is better formulated if it includes whatever is required for its own sustainability.  Dominating other wills to power will often serve to promote a sustainable autonomy for the dominator, because then those other wills aren’t as likely to become dominators themselves and reverse the direction of dominance, and this preserves the autonomy of the dominating will to power.  This shows how this particular external expression of a will to power could be naturally selected for (under certain circumstances at least), which Nietzsche himself even argued (though in an anti-Darwinian form, since genes are not the unit of selection here, but rather behaviors).  This type of behavioral selection would explain its prevalence in the animal kingdom, including in a number of primate species aside from human beings.  I do think, however, that we’ve found many ways of overcoming the need or impulse to dominate, and this has a lot to do with having adopted social contract theory, since in my view it provides a way of maximizing the average will to power for all parties involved.

Coming back to Nietzsche’s take on language, truth, and dogmatism, we can also see that an increasingly potent will to power is more easily achievable if it is able to formulate and test new tentative predictions about the world, rather than being locked into some fixed set of predictions (which is dogmatism at its core).  Being able to adapt one’s predictions is equivalent to considering and adopting a new point of view, a capability which Nietzsche described as inherent in any free spirit.  It also amounts to being able to more easily free ourselves from the shackles of language, just as Nietzsche advocated, since new points of view affect the meaning that we ascribe to words and concepts.  I would add that new points of view can also increase our chances of making more successful predictions that constitute our understanding of the world (and ourselves), because we can test them against our previous world view and see whether this leads to more or less error, better parsimony, and so on.

Nietzsche’s hope was that one day all philosophy would be flexible enough to overcome its dogmatic prejudices, its claims of absolute truths, including those revolving around morality and concepts like good and evil.  He sees these concepts as nothing more than superficial expressions of one particular will to power or another, and thus he wants philosophy to eventually move itself beyond good and evil.  Personally, I am a proponent of an egoistic goal theory of morality, which grounds all morality on what maximizes the satisfaction and life fulfillment of the individual (which includes cultivating virtues such as compassion, honesty, and reasonableness), and so I believe that good and evil, when properly defined, are more than simply superficial expressions.

But I agree with Nietzsche in part, because I think these concepts have been poorly defined and misused such that they appear to have no inherent meaning.  And I also agree with Nietzsche in that I reject moral absolutism and its associated dogma (as found in religion most especially), because I believe morality to be dynamic in various ways, contingent on the specific situations we find ourselves in and the ever-changing idiosyncrasies of our human psychology.  Human beings often find themselves in completely novel situations and cultural evolution is making this happen more and more frequently.  And our species is also changing because of biological evolution as well.  So even though I agree that moral facts exist (with an objective foundation), and that a subset of these facts are likely to be universal due to the overlap between our biology and psychology, I do not believe that any of these moral facts are absolute because there are far too many dynamic variables for an absolute morality to work even in principle.  Nietzsche was definitely on the right track here.

Putting this all together, I’d like to think that our will to power has an underlying impetus, namely a drive for maximal satisfaction and life fulfillment.  If this is true, then our drive for maximal autonomy and control over our experience and understanding of the world serves to fulfill what we believe will maximize this overall satisfaction and fulfillment.  However, when people are irrational, dogmatic, and/or not well-informed on the relevant facts (and this happens a lot!), this is more likely to lead to an unrefined will to power that is less conducive to achieving that goal, one where dominating the wills of others and any number of other immoral behaviors overtake the character of the individual.  Our best chance of finding a fulfilling path in life (while remaining intellectually honest) is going to require an open mind, a willingness to criticize our own beliefs and assumptions, and a concerted effort to try and overcome our own limitations.  Nietzsche’s philosophy (much of it at least) serves as a powerful reminder of this admirable goal.