Irrational Man: An Analysis (Part 4, Chapter 11: The Place of the Furies)

In my last post in this series on William Barrett’s Irrational Man, I examined some of the work of existential philosopher Jean-Paul Sartre, which concluded part 3 of Barrett’s book.  The final chapter, Ch. 11: The Place of the Furies, will be briefly examined here, and this will conclude my eleven-part post series.  I’ve enjoyed this very much, but it’s time to move on to new areas of interest, so let’s begin.

1. The Crystal Palace Unmanned

“The fact is that a good dose of intellectualism-genuine intellectualism-would be a very helpful thing in American life.  But the essence of the existential protest is that rationalism can pervade a whole civilization, to the point where the individuals in that civilization do less and less thinking, and perhaps wind up doing none at all.  It can bring this about by dictating the fundamental ways and routines by which life itself moves.  Technology is one material incarnation of rationalism, since it derives from science; bureaucracy is another, since it aims at the rational control and ordering of social life; and the two-technology and bureaucracy-have come more and more to rule our lives.”

Regarding the importance and need for more intellectualism in our society, I think this can be better described as the need for more critical thinking skills and the need for people to be able to discern fact from fiction, to recognize their own cognitive biases, and to respect the adage that extraordinary claims require extraordinary evidence.  At the same time, in order to appreciate the existentialist’s concerns, we ought to recognize that there are aspects of human psychology, including certain psychological needs, that are inherently irrational: how we structure our lives, how we express our emotions and creativity, how we maintain a balance in our psyche, etc.  But, since technology is not only a material incarnation of rationalism but also an outlet for our creativity, there has to be a compromise here, where we shouldn’t want to abandon technology, but simply to keep it in check such that it doesn’t impede our psychological health and our ability to live a fulfilling life.

“But it is not so much rationalism as abstractness that is the existentialists’ target; and the abstractness of life in this technological and bureaucratic age is now indeed something to reckon with.  The last gigantic step forward in the spread of technologism has been the development of mass art and mass media of communication: the machine no longer fabricates only material products; it also makes minds. (stereotypes, etc.).”

Sure enough, we’re living in a world where many of our occupations are but one of many layers of abstraction constituting our modern “machine of civilization”.  And the military industrial complex that has taken over the modern world has certainly gone beyond the mass production of physical stuff to be consumed by the populace, and now includes the mass production and viral dissemination of memes as well.  Ideas can spread like viruses, and in our current globally interconnected world (which Barrett hadn’t yet seen to the same degree when writing this book), the spread of these ideas is much faster and more influential on culture than ever before.  The degree of indoctrination, and the perpetuated cycles of co-dependence between citizens and the corporatocratic, sociopolitical forces ruling our lives from above, have resulted in making our way of life and our thinking much more collective and less personal than at any other time in human history.

“Kierkegaard condemned the abstractness of his time, calling it an Age of Reflection, but what he seems chiefly to have had in mind was the abstractness of the professorial intellectual, seeing not real life but the reflection of it in his own mind.”

Aside from the increasingly abstract nature of modern living then, there’s also the abstractness that pervades our thinking about life, which detracts from our ability to actually experience life.  Kierkegaard had a legitimate point here, by pointing out the fact that theory cannot be a complete substitute for practice; that thought cannot be a complete substitute for action.  We certainly don’t want the reverse either, since action without sufficient forethought leads to foolishness and bad consequences.

I think the main point here is that we don’t want to miss out on living life by thinking about it too much.  While living a philosophically examined life is beneficial, it remains an open question exactly what balance is best for any particular individual to live the most fulfilling life.  In the meantime we ought to simply recognize that there is the potential for an imbalance, and try our best to avoid it.

“To be rational is not the same as to be reasonable.  In my time I have heard the most hair-raising and crazy things from very rational men, advanced in a perfectly rational way; no insight or feelings had been used to check the reasoning at any point.”

If you ignore our biologically-grounded psychological traits, or ignore the fact that there’s a finite range of sociological conditions for achieving human psychological health and well-being, then you can’t develop any theory that’s supposed to apply to humans and expect it to be reasonable or tenable.  I would argue that ignoring this subjective part of ourselves when making theories that are supposed to guide our behavior in any way is irrational, at least within the greater context of aiming to have not only logically valid arguments but logically sound arguments as well.  But, if we’re going to exclude the necessity for logical soundness in our conception of rationality, then the point is well taken.  Rationality is a key asset in responsible decision making but it should be used in a way that relies on or seeks to rely on true premises, taking our psychology and sociology into account.

“The incident (making hydrogen bombs without questioning why) makes us suspect that, despite the increase in the rational ordering of life in modern times, men have not become the least bit more reasonable in the human sense of the word.  A perfect rationality might not even be incompatible with psychosis; it might, in fact, even lead to the latter.”

Again, I disagree with Barrett’s use or conception of rationality here, but semantics aside, his main point still stands.  I think that we’ve been primarily using our intelligence as a species to continue to amplify our own power and maximize our ability to manipulate the environment in any way we see fit.  But as our technological capacity advances, our cultural evolution is getting increasingly out of sync with our biological evolution, and we haven’t been anywhere close to sufficient in taking our psychological needs and limitations into account as we continue to engineer the world of tomorrow.

What we need is rationality that relies on true or at least probable premises, as this combination should actually lead a person to what is reasonable or likely to be reasonable.  I have no doubt that without making use of a virtue such as reasonableness, rationality can become dangerous and destructive, but this is the case with every tool we use whether it’s rationality or other capacities both mental and material; tools can always be harmful when misused.

“If, as the Existentialists hold, an authentic life is not handed to us on a platter but involves our own act of self-determination (self-finitization) within our time and place, then we have got to know and face up to that time, both in its (unique) threats and its promises.”

And our use of rationality on an individual level should be used to help reach this goal of self-finitization, so that we can balance the benefits of the collective with the freedom of each individual that makes up that collective.

“I for one am personally convinced that man will not take his next great step forward until he has drained to the lees the bitter cup of his own powerlessness.”

And when a certain kind of change is perceived as something to be avoided, the only way to go through with it is by perceiving the status quo as something that needs to be avoided even more so, so that a foray out of our comfort zone is perceived as an actual improvement to our way of life.  But as long as we see ourselves as already all powerful and masters over our domain, we won’t take any major leap in a new direction of self and collective improvement.  We need to come to terms with our current position in the modern world so that we can truly see what needs repair.  Whether or not we succumb to one or both of the two major existential threats facing our species, climate change and nuclear war, is contingent on whether or not we set our eyes on a new prize.

“Sartre recounts a conversation he had with an American while visiting in this country.  The American insisted that all international problems could be solved if men would just get together and be rational; Sartre disagreed and after a while discussion between them became impossible.  “I believe in the existence of evil,” says Sartre, “and he does not.” What the American has not yet become aware of is the shadow that surrounds all human Enlightenment.”

Once again, if rationality is accompanied with true premises that take our psychology into account, then international problems could be solved (or many of them at least), but Sartre is also right insofar as there are bad ideas that exist, and people that have cultivated their lives around them.  It’s not enough to have people thinking logically, nor is some kind of rational idealism up to the task of dealing with human emotion, cognitive biases, psychopathy, and other complications in human behavior that exist.

The crux of the matter is that some ideas hurt us and other ideas help us, with our perception of these effects being the main arbiter driving our conception of what is good and evil; but there’s also a disparity between what people think is good or bad for them and what is actually good or bad for them.  I think that one could very plausibly argue that if people really knew what was good or bad for them, then applying rationality (with true or plausible premises) would likely work to solve a number of issues plaguing the world at large.

“…echoing the Enlightenment’s optimistic assumption that, since man is a rational animal, the only obstacles to his fulfillment must be objective and social ones.”

And here’s an assumption that’s certainly difficult to ground since it’s based on false premises, namely that humans are inherently rational.  We are unique in the animal kingdom in the sense that we are the only animal (or one of only a few) with the capacity for rational thought, foresight, and the complex level of organization made possible by their use.  I also think that the obstacles to fulfillment are objective since they can be described as facts pertaining to our psychology, sociology, and biology, even if our psychology (for example) is instantiated in a subjective way.  In other words, our subjectivity and conscious experiences are grounded on or describable in objective terms relating to how our particular brains function, how humans as a social species interact with one another, etc.  But, fulfillment can never be reached, let alone maximized, without taking our psychological traits and idiosyncrasies into account, for these are the ultimate constraints on what can make us happy, satisfied, fulfilled, and so on.

“Behind the problem of politics, in the present age, lies the problem of man, and this is what makes all thinking about contemporary problems so thorny and difficult…anyone who wishes to meddle in politics today had better come to some prior conclusions as to what man is and what, in the end, human life is all about…The speeches of our politicians show no recognition of this; and yet in the hands of these men, on both sides of the Atlantic, lies the catastrophic power of atomic energy.”

And as of 2018, we’ve seen the Doomsday Clock reach two minutes to midnight, having inched one minute closer to our own destruction since 2017.  The dominance hierarchy, led and reinforced by the corporatocratic plutocracy, is locked into a narrow form of tunnel vision, hell bent on maintaining if not exacerbating the wealth and power disparities that plague our country and the world as a whole, despite the fact that this is not sustainable in the long run, nor best for the fulfillment of those promoting it.

We the people do share a common goal of trying to live a good, fulfilling life; to have our basic needs met, and to have no fewer rights than anybody else in society.  You’d hardly know that this common ground exists between us when looking at the state of our political sphere, which is likely as polarized now in the U.S. (if not more so) as it was even during the Civil War.  Clearly, we have a lot of work to do to reevaluate what our goals ought to be, what our priorities ought to be, and we need a realistic and informed view of what it means to be human before any of these goals can be realized.

“Existentialism is the counter-Enlightenment come at last to philosophic expression; and it demonstrates beyond anything else that the ideology of the Enlightenment is thin, abstract, and therefore dangerous.”

Yes, but the Enlightenment has also been one of the main driving forces leading us out of theocracy, out of scientific illiteracy, and towards an appreciation of reason and evidence (something the U.S., at least, is in short supply of these days), and thus it has been crucial in giving us the means for increasing our standard of living, and solving many of our problems.  While the technological advancements derived from the Enlightenment have also been a large contributor to many of our problems, the current existential threats we face, including climate change and nuclear war, are more likely to be solved by new technologies, not an abolition of technology nor an abolition of the Enlightenment brand of thinking that led to technological progress.  We simply need to better inform our technological goals with the actual needs and constraints of human beings, our psychology, and so on.

“The finitude of man, as established by Heidegger, is perhaps the death blow to the ideology of the Enlightenment, for to recognize this finitude is to acknowledge that man will always exist in untruth as well as truth.  Utopians who still look forward to a future when all shadows will be dispersed and mankind will dwell in a resplendent Crystal Palace will find this recognition disheartening.  But on second thought, it may not be such a bad thing to free ourselves once and for all from the worship of the idol of progress; for utopianism-whether the brand of Marx or Nietzsche-by locating the meaning of man in the future leaves human beings here and now, as well as all mankind up to this point, without their own meaning.  If man is to be given meaning, the Existentialists have shown us, it must be here and now; and to think this insight through is to recast the whole tradition of Western thought.”

And we ought to take our cue from Heidegger, at the very least, to admit that we are finite, our knowledge is limited, and it always will be.  We will not be able to solve every problem, and we would do ourselves and the world a lot of good if we admitted our own limitations.  But to avoid being overly cynical and simply damning progress altogether, we need to look for new ways of solving our existential problems.  Part of the solution that I see for humanity moving forward is going to be a combination of advancements in a few different fields.

By making better use of genetic engineering, we’ll one day have the ability to change ourselves in remarkable ways in order to become better adapted to our current world.  We will be able to re-sync our biological evolution with our cultural evolution so we no longer feel uprooted, like a fish out of water.  Continuing research in neuroscience will allow us to learn more about how our brains function and how to optimize that functioning.  Finally, the strides we make in computing and artificial intelligence should allow us to vastly improve our simulation power and arm us with greater intelligence for solving all the problems that we face.

Overall, I don’t see progress as the enemy, but rather that we have an alignment problem between our current measures of progress, and what will actually lead to maximally fulfilling lives.

“The realization that all human truth must not only shine against an enveloping darkness, but that such truth is even shot through with its own darkness may be depressing, and not only to utopians…But it has the virtue of restoring to man his sense of the primal mystery surrounding all things, a sense of mystery from which the glittering world of his technology estranges him, but without which he is not truly human.”

And if we actually use technology to change who we are as human beings, by altering the course of natural selection and our ongoing evolution (which is bound to happen with or without our influence, for better or worse), then it’s difficult to say what the future really holds for us.  There are cultural and technological forces that are leading to transhumanism, and this may mean that one day “human beings” (or whatever name is chosen for the new species that replaces us) will be inherently rational, or whatever we’d like our future species to be.  We’ve stumbled upon the power to change our very nature, and so it’s far too naive, simplistic, unimaginative, and short-sighted to say that humans will “always” or “never” be one way or another.  Even if this were true, it wouldn’t negate the fact that one day modern humans will be replaced by a superseding species which has different qualities than what we have now.

2. The Furies

“…Existentialism, as we have seen, seeks to bring the whole man-the concrete individual in the whole context of his everyday life, and in his total mystery and questionableness-into philosophy.”

This is certainly an admirable goal to combat simplistic abstractions of what it means to be human.  We shouldn’t expect to be able to abstract a certain capacity of human beings (such as rationality), consider it in isolation (no consideration of context), formulate a theory around that capacity, and then expect to get a result that is applicable to human beings as they actually exist in the world.  Furthermore, all of the uncertainties and complexities in our human experiences, no matter how difficult they may be to define or describe, should be given their due consideration in any comprehensive philosophical system.

“In modern philosophy particularly (philosophy since Descartes), man has figured almost exclusively as an epistemological subject-as an intellect that registers sense-data, makes propositions, reasons, and seeks the certainty of intellectual knowledge, but not as the man underneath all this, who is born, suffers, and dies…But the whole man is not whole without such unpleasant things as death, anxiety, guilt, fear and trembling, and despair, even though journalists and the populace have shown what they think of these things by labeling any philosophy that looks at such aspects of human life as “gloomy” or “merely a mood of despair.”  We are still so rooted in the Enlightenment-or uprooted in it-that these unpleasant aspects of life are like Furies for us: hostile forces from which we would escape (the easiest way is to deny that the Furies exist).”

My take on all of this is simply that multiple descriptions of human existence are needed to account for all of our experiences, thoughts, values, and behavior.  And it is what we value given our particular subjectivity that needs to be primary in these descriptions, and primary with respect to how we choose to engineer the world we live in.  Who we are as a species is a complex mixture of good and bad, lightness and darkness, and stability and chaos; and we shouldn’t deny any of these attributes nor repress them simply because they make us uncomfortable.  Instead, we would do much better to face who we are head on, and then try to make our lives better while taking our limitations into account.

“We are the children of an enlightenment, one which we would like to preserve; but we can do so only by making a pact with the old goddesses.  The centuries-long evolution of human reason is one of man’s greatest triumphs, but it is still in process, still incomplete, still to be.  Contrary to the rationalist tradition, we now know that it is not his reason that makes man man, but rather that reason is a consequence of that which really makes him man.  For it is man’s existence as a self-transcending self that has forged and formed reason as one of its projects.”

This is well said, although I’d prefer to couch this in different terms: it is ultimately our capacity to imagine that makes us human and able to transcend our present selves by pointing toward a future self and a future world.  We do this in part by updating our models of the world, simulating new worlds, and trying to better understand the world we live in by engaging with it, re-shaping it, and finding ways of better predicting its causal structure.  Reason and rationality have been integral in updating our models of the world, but they’ve also been hijacked to some degree by a kind of supernormal stimulus, reinforced by technology and by our living in a world entirely alien to our evolutionary niche, which has largely dominated our lives in a way that overlooks our individualism, emotions, and feelings.

Abandoning reason and rationality is certainly not the answer to this existential problem; rather, we just need to ensure that they’re being applied in a way that aligns with our moral imperatives and that they’re ultimately grounded on our subjective human psychology.


Irrational Man: An Analysis (Part 3, Chapter 8: Nietzsche)

In the last post on this series on William Barrett’s Irrational Man, we began exploring Part 3: The Existentialists, beginning with chapter 7 on Kierkegaard.  In this post, we’ll be taking a look at Nietzsche’s philosophy in more detail.

Ch. 8 – Nietzsche

Nietzsche shared many of the same concerns as Kierkegaard, given the new challenges of modernity brought about by the Enlightenment; but each of these great thinkers approached this new chapter of our history from perspectives that were, in many ways, diametrically opposed.  Kierkegaard had a grand goal of sharing with others what he thought it meant to be a Christian, and he tried to revive Christianity within a society that he found to be increasingly secularized.  He also saw that what Christianity did exist was largely organized and depersonalized, and he felt the need to stress the importance of a much more personal form of the religion.  Nietzsche on the other hand, an atheist, rejected Christianity and stressed the importance of re-establishing our values, given that modernity was now living in a godless world: a culturally Christianized world, but one that was now without a Christ.

Despite their differences, both philosophers felt that discovering the true meaning of our lives was paramount to living an authentic life, and this new quest for meaning was largely related to the fact that our view of ourselves had changed significantly:

“By the middle of the nineteenth century, as we have seen, the problem of man had begun to dawn on certain minds in a new and more radical form: Man, it was seen, is a stranger to himself and must discover, or rediscover, who he is and what his meaning is.”

Nietzsche also understood and vividly illustrated the crucial differences between mankind and the rest of nature, most especially our self-awareness:

“It was Nietzsche who showed in its fullest sense how thoroughly problematical is the nature of man: he can never be understood as an animal species within the zoological order of nature, because he has broken free of nature and has thereby posed the question of his own meaning-and with it the meaning of nature as well-as his destiny.”

And for Nietzsche, the meaning one was to find for their life included first abandoning what he saw as an oppressive and disempowering Christian morality, which he saw as severely inhibiting our potential as human beings.  He was much more sympathetic to Greek culture and morality, and even though (perhaps ironically) Christianity itself was derived from a syncretism between Judaism and Greek Hellenism, its moral and psychologically relevant differences were too substantial to avoid Nietzsche’s criticism.  He thought that a return to a more archaic form of Greek culture and mentality was the solution we needed to restore, or at least re-validate, our instincts:

“Dionysus reborn, Nietzsche thought, might become a savior-god for the whole race, which seemed everywhere to show symptoms of fatigue and decline.”

Nietzsche learned about the god Dionysus while he was studying Greek tragedy, and he gravitated toward the Dionysian symbolism, for it contained a kind of reconciliation between high culture and our primal instincts.  Dionysus was after all the patron god of the Greek tragic festivals and so was associated with beautiful works of art; but he was also the god of wine and ritual madness, bringing people together through the grape harvest and the joy of intoxication.  As Barrett describes it:

“This god thus united miraculously in himself the height of culture with the depth of instinct, bringing together the warring opposites that divided Nietzsche himself.”

It was this window into the hidden portions of our psyche that Nietzsche tried to look through and which became an integral basis for much of his philosophy.  But the chaotic nature of our species combined with our immense power to manipulate the environment also concerned him greatly; and he thought that the way modernity was repressing many of our instincts needed to be circumvented, lest we continue to build up this tension in our unconscious only for it to be released in an explosive eruption of violence:

“It is no mere matter of psychological curiosity but a question of life and death for man in our time to place himself again in contact with the archaic life of his unconscious.  Without such contact he may become the Titan who slays himself.  Man, this most dangerous of the animals, as Nietzsche called him, now holds in his hands the dangerous power of blowing himself and his planet to bits; and it is not yet even clear that this problematic and complex being is really sane.”

Indeed our species has been removed from our natural evolutionary habitat; and we’ve built our modern lives around socio-cultural and technological constructs that have, in many ways at least, inhibited the open channel between our unconscious and conscious mind.  One could even say that the prefrontal cortex region of our brain, important as it is for conscious planning and decision making, has been effectively hijacked by human culture (memetic selection) and is now constantly running on overtime, over-suppressing the emotional centers of the brain (such as the limbic system).

While we’ve come to realize that emotional suppression is sometimes necessary in order to function well as a cooperative social species that depends on complex long-term social relationships, we also need to maintain a healthy dose of emotional expression as well.  If we can’t release this more primal form of “psychological energy” (for lack of a better term), then it shouldn’t be surprising to see it eventually manifest into some kind of destructive behavior.

We ought not forget that we’re just like other animals in terms of our brain being best adapted to a specific balance of behaviors and cognitive functions; and if this balance is disrupted through hyperactive or hypoactive use of one region or another, doesn’t it stand to reason that we may wind up with either an impairment in brain function or less healthy behavior?  If we’re not making a sufficient use of our emotional capacities, shouldn’t we expect them to diminish or atrophy in some way or other?  And if some neurological version of the competitive exclusion principle exists, then this would further reinforce the idea that our suppression of emotion by other capacities may cause those other capacities to become permanently dominant.  We can only imagine the kinds of consequences that ensue when this psychological impairment falls upon a species with our level of power.

1. Ecce Homo

“In the end one experiences only oneself,” Nietzsche observes in his Zarathustra, and elsewhere he remarks, in the same vein, that all the systems of the philosophers are just so many forms of personal confession, if we but had eyes to see it.”

Here Nietzsche is expressing one of his central ideas, by pointing out the fact that we can’t separate ourselves from our thoughts, and therefore the ideas put forward by any philosopher are really betraying some set of values they hold whether unconsciously or explicitly, and they also betray one’s individual perspective; an unavoidable perspective stemming from our subjective experience which colors our interpretation of those experiences.  Perspectivism, or the view that reality and our ideas about what’s true and what we value can be mapped onto many different possible conceptual schemes, was a big part of Nietzsche’s overall philosophy, and it was interwoven into his ideas on meaning, epistemology, and ontology.

As enlightening as Nietzsche was in giving us a much needed glimpse into some of the important psychological forces that give our lives the shape they have (for better or worse), his written works show little if any psychoanalytical insight applied to himself, in terms of the darker and less pleasant side of his own psyche.

“Nietzsche’s systematic shielding of himself from the other side is relevant to his explanation of the death of God: Man killed God, he says, because he could not bear to have anyone looking at his ugliest side.”

But even if he didn’t turn his psychoanalytical eye inward toward his own unconscious, he was still very effective in shining a light on the collective unconscious of humanity as a whole.  It’s not enough that we marvel over the better angels of our nature, even though it’s important to acknowledge and understand our capacities and our potential for bettering ourselves and the world we live in; we also have to acknowledge our flaws and shortcomings and while we may try to overcome these weaknesses in one way or another, some of them are simply a part of our nature and we should accept them as such even if we may not like them.

One of the prevailing theories stemming from Western Rationalism (mentioned earlier in Barrett’s book) was that human beings were inherently perfect and rational, and it was within the realm of possibility to realize that potential; but this kind of wishful thinking was relying on a fundamental denial of an essential part of the kind of animal we really are.  And Nietzsche did a good job of putting this fact out in the limelight, and explaining how we couldn’t truly realize our potential until we came to terms with this fairly ugly side of ourselves.

Barrett distinguishes between the attitudes stemming from various forms of atheism, including that of Nietzsche’s:

“The urbane atheism of Bertrand Russell, for example, presupposes the existence of believers against whom he can score points in an argument and get off some of his best quips.  The atheism of Sartre is a more somber affair, and indeed borrows some of its color from Nietzsche: Sartre relentlessly works out the atheistic conclusion that in a universe without God man is absurd, unjustified, and without reason, as Being itself is.”

In fact, Nietzsche was more concerned with the atheists who hadn’t yet accepted the full ramifications of a Godless world than with believers themselves.  He felt that those who believed in God still had a more coherent belief system, since many of their moral values and much of the meaning they attached to their lives were derived from their religious beliefs (even if those beliefs were unwarranted or harmful).  Many atheists, by contrast, hadn’t yet taken the time to build a new foundation for their previously held values and the meaning in their lives; and if they couldn’t rebuild this foundation, then they’d have to change those values, and what they find meaningful, to reflect that shift in foundation.

Overall, Nietzsche even changes the game a little in terms of how the conception of God was to be understood in philosophy:

“If God is taken as a metaphysical object whose existence has to be proved, then the position held by scientifically minded philosophers like Russell must inevitably be valid: the existence of such an object can never be empirically proved.  Therefore, God must be a superstition held by primitive and childish minds.  But both these alternative views are abstract, whereas the reality of God is concrete, a thoroughly autonomous presence that takes hold of men but of which, of course, some men are more conscious than others.  Nietzsche’s atheism reveals the true meaning of God-and does so, we might add, more effectively than a good many official forms of theism.”

Even though God may not actually exist as a conscious being, the idea of God and the feelings associated with what one labels as an experience of God are as real as any other idea or experience, and so they should still be taken seriously even in a world where no gods exist.  As an atheist myself (formerly a born-again Protestant Christian), I’ve come to appreciate the real breadth of possible meaning attached to the term “God”.  Even though the God or gods described in various forms of theism do not track onto objective reality, as was pointed out by Russell, this doesn’t negate the subjective reality underlying theism.  People ascribe the label “God” to a set of feelings and forces that they feel are controlling their lives, their values, and their overall vision of the world.  Nietzsche had his own vision of the world as well, and so one could label this as his “god”, despite the confusion this may cause when discussing God in metaphysical terms.

2. What Happens In “Zarathustra”; Nietzsche as Moralist

One of the greatest benefits of producing art is our ability to tap into multiple perspectives that we might otherwise never explore.  And by doing this, we stand a better chance of creating a window into our own unconscious:

“[Thus Spoke Zarathustra] was Nietzsche’s poetic work and because of this he could allow the unconscious to take over in it, to break through the restraints imposed elsewhere by the philosophic intellect. [in poetry] One loses all perception of what is imagery and simile-that is to say, the symbol itself supersedes thought, because it is richer in meaning.”

By avoiding the common philosophical trap of submitting oneself to a preconceived structure and template for processing information, poetry and other works of art can inspire new ideas and novel perspectives through the use of metaphor and allegory, emotion, and the seemingly infinite powers of our own imagination.  Not only is this true for the artist herself, but also for the spectator, who stands in a unique position to interpret what they see and to make sense of it as best they can.  And in the rarest of cases, one may experience something in a work of art that fundamentally changes them in the process.  Either way, by being presented with an unusual and ambiguous stimulus that they are left to interpret, a person can hope to gain at least some access to their unconscious and to inspire an expression of their creativity.

In his Thus Spoke Zarathustra, Nietzsche was focusing on how human beings could ever hope to regain a feeling of completeness; for modernity had fractured the individual and placed too much emphasis on collective specialization:

“For man, says Schiller, the problem is one of forming individuals.  Modern life has departmentalized, specialized, and thereby fragmented the being of man.  We now face the problem of putting the fragments together into a whole.  In the course of his exposition, Schiller even referred back, as did Nietzsche, to the example of the Greeks, who produced real individuals and not mere learned abstract men like those of the modern age.”

A part of this reassembly process would necessarily involve reincorporating what Nietzsche thought of as the various attributes of our human nature, which many of us are in denial of (to go back to the point raised earlier in this post).  For Nietzsche, this meant taking on some characteristics that may seem inherently immoral:

“Man must incorporate his devil or, as he put it, man must become better and more evil; the tree that would grow taller must send its roots down deeper.”

Nietzsche seems to be saying that if there are aspects of our psychology that are normal albeit seemingly dark or evil, then we ought to structure our moral values along the same lines rather than in opposition to these more primal parts of our psyche:

“(Nietzsche) The whole of traditional morality, he believed, had no grasp of psychological reality and was therefore dangerously one-sided and false.”

And there is some truth to this, though I wouldn’t go as far as Nietzsche does, as I believe that there are many basic and cross-cultural moral prescriptions that do in fact take many of our psychological needs into account.  Most cultures have some place for virtues such as compassion, honesty, generosity, forgiveness, commitment, and integrity, among many others; and these virtues have been shown within the fields of moral psychology and sociology to help us flourish and lead psychologically fulfilling lives.  We evolved as a social species that operates within some degree of a dominance hierarchy, and sure enough our psychological traits are what we’d expect given our evolutionary history.

However, it’s also true that we ought to avoid slipping into some form of a naturalistic fallacy; for we don’t want to simply assume that all the behaviors that were more common prior to modernity let alone prior to civilization were beneficial to us and our psychology.  For example, raping and stealing were far more common per capita prior to humans establishing some kind of civil state or society.  And a number of other behaviors that we’ve discovered to be good or bad for our physical or psychological health have been the result of a long process of trial and error, cultural evolution, and the cultural transmission of newfound ideas from one generation to the next.  In any case, we don’t want to assume that just because something has been a tradition or even a common instinctual behavior for many thousands or even hundreds of thousands of years, that it is therefore a moral behavior.  Quite the contrary; instead we need to look closely at which behaviors lead to more fulfilling lives and overall maximal satisfaction and then aim to maximize those behaviors over those that detract from this goal.

Nietzsche did raise a good point however, and one that’s supported by modern moral psychology; namely, the fact that what we think we want or need isn’t always in agreement with our actual wants and needs, once we dive below the surface.  Psychology has been an indispensable tool in discovering many of our unconscious desires and the best moral theories are going to be those that take both our conscious and unconscious motivations and traits into account.  Only then can we have a moral theory that is truly going to be sufficiently motivating to follow in the long term as well as most beneficial to us psychologically.  If one is basing their moral theory on wishful thinking rather than facts about their psychology, then they aren’t likely to develop a viable moral theory.  Nietzsche realized this and promoted it in his writings, and this was an important contribution as it bridged human psychology with a number of important topics in philosophy:

“On this point Nietzsche has a perfectly sober and straightforward case against all those idealists, from Plato onward, who have set universal ideas over and above the individual’s psychological needs.  Morality itself is blind to the tangle of its own psychological motives, as Nietzsche showed in one of his most powerful books, The Genealogy of Morals, which traces the source of morality back to the drives of power and resentment.”

I’m sure that drives of power and resentment have played some role in certain moral prescriptions and moral systems, but I doubt that this is true generally speaking; and even in cases where a moral prescription or system arose out of one of these unconscious drives, such as resentment, this doesn’t negate its validity, nor does it demonstrate whether it’s warranted or psychologically beneficial.  The source of morality isn’t as important as whether or not the moral system itself works, though the sources of morality can be both psychologically relevant and revealing.

Our ego certainly has a lot to do with our moral behavior and whether or not we choose to face our inner demons:

“Precisely what is hardest for us to take is the devil as the personification of the pettiest, paltriest, meanest part of our personality…the one that most cruelly deflates one’s egotism.”

If one is able to accept both the brighter and darker sides of themselves, they’ll also be more likely to willingly accept Nietzsche’s idea of the Eternal Return:

“The idea of the Eternal Return thus expresses, as Unamuno has pointed out, Nietzsche’s own aspirations toward eternal and immortal life…For Nietzsche the idea of the Eternal Return becomes the supreme test of courage: If Nietzsche the man must return to life again and again, with the same burden of ill health and suffering, would it not require the greatest affirmation and love of life to say Yes to this absolutely hopeless prospect?”

Although I wouldn’t expect most people to embrace this idea, because it goes against our teleological intuitions of time and causation leading to some final state or goal, it does provide a valuable means of establishing whether someone is really able to accept this life for what it is, with all of its absurdities, pains, and pleasures.  If someone willingly accepts the idea, then by extension they likely accept themselves, their lives, and the rest of human history (and even the history of life on this planet, no less).  This doesn’t mean that by accepting the idea of the Eternal Return people have to put every action, event, or behavior on equal footing, for example, by condoning slavery or the death of millions as a result of countless wars, just because these events are to be repeated for eternity.  But one is still accepting that all of these events were necessary in some sense, that they couldn’t have been any other way; and this acceptance should be reflected in one’s attitude toward life as a kind of overarching contentment.

I think that the more common reaction to this kind of idea is slipping into some kind of nihilism, where a person no longer cares about anything, and no longer finds any meaning or purpose in their lives.  And this reaction is perfectly understandable given our teleological intuitions; most people don’t or wouldn’t want to invest time and effort into some goal if they knew that the goal would never be accomplished or knew that it would eventually be undone.  But, oddly enough, even if the Eternal Return isn’t an actual fact of our universe, we still run into the same basic situation within our own lifetimes whereby we know that there are a number of goals that will never be achieved because of our own inevitable death.

There’s also the fact that even if we did achieve a number of goals, once we die, we can no longer reap any benefits from them.  There may be other people who can benefit from our own past accomplishments, but eventually they will die as well.  Eventually all life in the universe will expire, and this is bound to happen long before the inevitable heat death of the universe transpires; and once it happens, there won’t be anybody experiencing anything at all, let alone reaping the benefits of a goal from someone’s distant past.  Once this is taken into account, Nietzsche’s idea doesn’t sound all that bad, does it?  If we knew that there would always be time and some amount of conscious experience for eternity, then there would always be some form of immortality for consciousness; and if it’s eternally recurring, then we end up becoming immortal in some sense.

Perhaps the idea of the Eternal Return (or Recurrence), even though it predates Nietzsche by many hundreds if not a few thousand years, was motivated in part by Nietzsche’s own fear of death.  Even though he thought of the idea as horrifying and paralyzing (not least because of all the undesirable parts of our own personal existence), and since he didn’t believe in a traditional afterlife, maybe the fear of death motivated him to adopt such an idea, even as he also saw it as a physically plausible consequence of probability and the laws of physics.  Either way, beyond being far more plausible than the idea of an eternal paradise after death, it’s also a very different idea.  For one thing, one’s memory and identity aren’t preserved, so one never actually experiences immortality when the “temporal loop” defining one’s lifetime starts all over again (i.e. the “movie” of one’s life is just played over and over, unbeknownst to the person in the movie).  For another, it involves accepting the world the way it is for all eternity, rather than revolving around the way one wishes it were.  The Eternal Return fully engages with the existentialist reality of human life, with all its absurdities, contingencies, and the rest.

3. Power and Nihilism

Nietzsche, like Kierkegaard before him, was considered by many to be an unsystematic thinker, though Barrett points out that this view is mistaken in Nietzsche’s case; it is a common impression resulting from the largely aphoristic form his writings took.  Nietzsche himself even said that he was viewing science and philosophy through the eyes of art, but nevertheless his work eventually revealed a systematic structure:

“As thinking gradually took over the whole person, and everything else in his life being starved out, it was inevitable that this thought should tend to close itself off in a system.”

This systematization of his philosophy was revealed in the notes he was making for The Will to Power, a work he never finished but one that was more or less assembled and published posthumously by his sister Elisabeth.  The systematization was centered on the idea that a will to power was the underlying force that ultimately guided our behavior, striving to achieve the highest possible position in life as opposed to being fundamentally driven by some biological imperative to survive.  But it’s worth pointing out here that Nietzsche didn’t seem to be merely talking about a driving force behind human behavior, but rather he seemed to be referencing an even more fundamental driving force that was in some sense driving all of reality; and this more inclusive view would be analogous to Schopenhauer’s will to live.

It’s interesting to consider this perspective through the lens of biology: if we try to reconcile it with a Darwinian account of evolution, in which survival is key, we could surmise that a will to power is but a mechanism for survival.  But if we grant Nietzsche’s position, and that of the various anti-Darwinian thinkers who influenced him, that a will to survive (or survival more generally) is somehow secondary to a will to power, then we run into a problem: if survival is a secondary drive or goal, then we’d expect survival to be less important than power; but without survival you can’t have power, and thus it makes far more evolutionary sense that a will to survive is more basic than a will to power.

However, I am willing to grant that as soon as organisms began evolving brains along with the rest of their nervous system, the will to power (or something analogous to it) became increasingly important; and by the time humans came on the scene with their highly complex brains, the will to power may have become manifest in our neurological drive to better predict our environment over time.  I wrote about this idea in a previous post where I had explored Nietzsche’s Beyond Good and Evil, where I explained what I saw as a correlation between Nietzsche’s will to power and the idea of the brain as a predictive processor:

“…I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, but with the added caveat that this autonomy serves to fulfill a desire for maximal causal power by harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try and change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try and change ourselves and our view of the world (up to and including changing our desires if we find them to be misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and then we could conceivably regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that I think that our view of the world as well as our actions and desires can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could…equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires). Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if they subjugate the other’s will to power by their own.  And one can also increase the success of their predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing our desires and actions as well.”

On the other hand, if we treat the brain’s predictive activity as a kind of mechanistic description of how the brain works rather than a type of drive per se, and if we treat our underlying drives as something that merely falls under this umbrella of description, then we might want to consider whether or not any of the drives that are typically posited in psychological theory (such as power, sex, or a will to live) are actually more basic than any other or typically dominant over the others.  Barrett suggests an alternative, saying that we ought to look at these elements more holistically:

“What if the human psyche cannot be carved up into compartments and one compartment wedged in under another as being more basic?  What if such dichotomizing really overlooks the organic unity of the human psyche, which is such that a single impulse can be just as much an impulse toward love on the one hand as it is toward power on the other?”

I agree with Barrett at least in the sense that these drives seem to be operating together much of the time, and when they aren’t in unison, they often seem to be operating on the same level at least.  Another way to put this could be to say that as a social species, we’ve evolved a number of different modular behavioral strategies that are typically best suited for particular social circumstances, though some circumstances may be such that multiple strategies will work and thus multiple strategies may be used simultaneously without working against each other.

But I also think that our conscious attention plays a role as well where the primary drive(s) used to produce the subsequent behavior may be affected by thinking about certain things or in a certain way, such as how much you love someone, what you think you can gain from them, etc.  And this would mean that the various underlying drives for our behavior are individually selected or prioritized based on one’s current experiences and where their attention is directed at any particular moment.  It may still be the case that some drives are typically dominant over others, such as a will to power, even if certain circumstances can lead to another drive temporarily taking over, however short-lived that may be.

The will to power, even if it’s not primary, would still help to explain some of the enormous advancements we’ve made in our scientific and technological progress, while also explaining (at least to some degree) the apparent disparity between our standard of living and overall physio-psychological health on the one hand, and our immense power to manipulate the environment on the other:

“Technology in the twentieth century has taken such enormous strides beyond that of the nineteenth that it now bulks larger as an instrument of naked power than as an instrument for human well-being.”

We could fully grasp Barrett’s point here by thinking about the opening scene in Stanley Kubrick’s 2001: A Space Odyssey, where “The Dawn of Man” was essentially made manifest with the discovery of tools, specifically weapons.  Once this seemingly insignificant discovery was made, it completely changed the game for our species.  While it gave us a new means of defending ourselves and an ability to hunt other animals, thus providing us with the requisite surplus of protein and calories needed for brain development and enhanced intelligence, it also catalyzed an arms race.  And with the advent of human civilization, our historical trajectory around the globe became largely dominated by our differential means of destruction; whoever had the biggest stick, the largest army, the best projectiles, and ultimately the most concentrated form of energy, would likely win the battle, forever changing the fate of the parties involved.

Indeed, war has been such a core part of human history that it’s no wonder Nietzsche stumbled upon such an idea; and even if he hadn’t, someone else likely would have, though perhaps without the same level of creativity and intellectual rigor.  If our history has been so colored by war, which is a physical instantiation of a will to power aimed at dominating another group, then we should expect many to see it as a fundamental part of who we are.  Personally, I don’t think we’re fundamentally war-driven creatures; rather, war results from our ability to engage in it combined with a perceived lack of, and desire for, resources, freedom, and stability in our lives.  But many of these goods, freedom in particular, imply a certain level of power over oneself and one’s environment, so even a fight for freedom is in one way or another a kind of will to power as well.

And what are we to make of our quest for power in terms of the big picture?  Theoretically, there is an upper limit to the amount of power any individual or group can possibly attain, and if one gets to a point where they are seeking power for power’s sake, and using their newly acquired power to harness even more power, then what will happen when we eventually hit that ceiling?  If our values become centered around power, and we lose any anchor we once had to provide meaning for our lives, then it seems we would be on a direct path toward nihilism:

“For Nietzsche, the problem of nihilism arose out of the discovery that “God is dead.” “God” here means the historical God of the Christian faith.  But in a wider philosophical sense it means also the whole realm of supersensible reality-Platonic Ideas, the Absolute, or what not-that philosophy has traditionally posited beyond the sensible realm, and in which it has located man’s highest values.  Now that this other, higher, eternal realm is gone, Nietzsche declared, man’s highest values lose their value…The only value Nietzsche can set up to take the place of these highest values that have lost their value for contemporary man is: Power.”

As Barrett explains, Nietzsche seemed to think that power was the only thing that could replace the eternal realm that we valued so much; but of course this does nothing to alleviate the problem of nihilism in the long term since the infinite void still awaits those at the end of the finite road to maximal power.

“If this moment in Western history is but the fateful outcome of the fundamental ways of thought that lie at the very basis of our civilization-and particularly of that way of thought that sunders man from nature, sees nature as a realm of objects to be mastered and conquered, and can therefore end only with the exaltation of the will to power-then we have to find out how this one-sided and ultimately nihilistic emphasis upon the power over things may be corrected.”

I think the answer to this problem lies in a combination of strategies, but with the overarching goal of maximizing life fulfillment.  We need to reprogram our general attitude toward power such that it is truly instrumental rather than perceived as intrinsically valuable; and this means that our basic goal becomes accumulating power such that we can maximize our personal satisfaction and life fulfillment, and nothing more.  Among other things, the power we acquire should be centered around a power over ourselves and our own psychology; finding ways of living and thinking which are likely to include the fostering of a more respectful relationship with the nature that created us in the first place.

And now that we’re knee-deep in the Information Age, we’ll be able to harness the potential of genetic engineering, artificial intelligence, and artificial reality; within these avenues of research we’ll be able to change our own biology so that we can quickly adapt our psychology to the ever-changing cultural environment of the modern world, and we’ll be able to free up our time to create any world we want.  We just need to keep the ultimate goal in mind, fulfillment and contentment, and direct our increasing power toward this goal in particular instead of merely chipping away at it in the background as some secondary priority.  If we don’t prioritize it, then we may simply perpetuate and amplify the meaningless sources of distraction in our lives, eventually getting lost in the chaos and forgetting about our ultimate potential and the imperatives needed to realize it.

I’ll be posting a link here to part 9 of this post-series, exploring Heidegger’s philosophy, once it has been completed.

Irrational Man: An Analysis (Part 3, Chapter 7: Kierkegaard)

Part III – The Existentialists

In the last post in this series on William Barrett’s Irrational Man, we examined how existentialism was influenced by a number of poets and novelists including several of the Romantics and the two most famous Russian authors, namely Dostoyevsky and Tolstoy.  In this post, we’ll be entering part 3 of Barrett’s book, and taking a look at Kierkegaard specifically.

Chapter 7 – Kierkegaard

Søren Kierkegaard is often considered to be the first existentialist philosopher, and so naturally Barrett begins his exploration of individual existentialists here.  Kierkegaard was a brilliant man with a broad range of interests; he was a poet as well as a philosopher, exploring theology and religion (Christianity in particular), morality and ethics, and various aspects of human psychology.  The fact that he was also a devout Christian can be seen throughout his writings, where he attempted to delineate his own philosophy of religion with a focus on the concepts of faith and doubt, as well as his disdain for any organized forms of religion.  Perhaps the most influential core of his work is the focus on individuality, subjectivity, and the lifelong search to truly know oneself.  In many ways, Kierkegaard paved the way for modern existentialism, and he did so with a kind of poetic brilliance that made clever use of both irony and metaphor.

Barrett describes Kierkegaard’s arrival in human history as the onset of an ironic form of intelligence intent on undermining itself:

“Kierkegaard does not disparage intelligence; quite the contrary, he speaks of it with respect and even reverence.  But nonetheless, at a certain moment in history this intelligence had to be opposed, and opposed with all the resources and powers of a man of brilliant intelligence.”

Kierkegaard did in fact value science as a methodology and as an enterprise, and he also saw the importance of objective knowledge; but he strongly believed that the most important kind of knowledge or truth was that which was derived from subjectivity; from the individual and their own concrete existence, through their feelings, their freedom of choice, and their understanding of who they are and who they want to become as an individual.  Since he was also a man of faith, this meant that he had to work harder than most to manage his own intellect in order to prevent it from enveloping the religious sphere of his life.  He felt that he had to suppress the intellect at least enough to maintain his own faith, for he couldn’t imagine a life without it:

“His intellectual power, he knew, was also his cross.  Without faith, which the intelligence can never supply, he would have died inside his mind, a sickly and paralyzed Hamlet.”

Kierkegaard saw faith as of the utmost importance to our particular period in history as well, since Western civilization had effectively become disconnected from Christianity, unbeknownst to the majority of those living in Kierkegaard’s time:

“The central fact for the nineteenth century, as Kierkegaard (and after him Nietzsche, from a diametrically opposite point of view) saw it, was that this civilization that had once been Christian was so no longer.  It had been a civilization that revolved around the figure of Christ, and was now, in Nietzsche’s image, like a planet detaching itself from its sun; and of this the civilization was not yet aware.”

Indeed, in Nietzsche’s The Gay Science, we hear of a madman roaming around one morning who makes such a pronouncement to a group of non-believers in a marketplace:

” ‘Where is God gone?’ he called out.  ‘I mean to tell you!  We have killed him, – you and I!  We are all his murderers…What did we do when we loosened this earth from its sun?…God is dead!  God remains dead!  And we have killed him!  How shall we console ourselves, the most murderous of all murderers?…’  Here the madman was silent and looked again at his hearers; they also were silent and looked at him in surprise.  At last he threw his lantern on the ground, so that it broke in pieces and was extinguished.  ‘I come too early,’ he then said ‘I am not yet at the right time.  This prodigious event is still on its way, and is traveling, – it has not yet reached men’s ears.’ “

Although Nietzsche will be looked at in more detail in part 8 of this post-series, it’s worth briefly mentioning what he was pointing out with this passage.  Nietzsche was highlighting the fact that at this point in our history, after the Enlightenment, we had made a number of scientific discoveries about our world and our place in it, and this had made the concept of God somewhat superfluous.  As a result of the drastic rise in secularization, the proportion of people who didn’t believe in God rose substantially; and because Christianity no longer had the theistic foundation it relied upon, all of the moral systems, traditions, and the ultimate meaning in one’s life derived from Christianity had to be re-established, if not abandoned altogether.

But many people, including a number of atheists, didn’t fully appreciate this fact and simply took for granted many of the cultural constructs that arose from Christianity.  Nietzsche had a feeling that most people weren’t intellectually fit for the task of re-grounding their values and finding meaning in a Godless world, and so he feared that nihilism would begin to dominate Western civilization.  Kierkegaard had similar fears, but as a man who refused to shake his own belief in God and in Christianity, he felt it was his imperative to try and revive the Christian faith that he saw was in severe decline.

1. The Man Himself

“The ultimate source of Kierkegaard’s power over us today lies neither in his own intelligence nor in his battle against the imperialism of intelligence-to use the formula with which we began-but in the religious and human passion of the man himself, from which the intelligence takes fire and acquires all its meaning.”

Aside from the fact that Kierkegaard’s own intelligence was primarily shaped and directed by his passion for love and for God, I tend to believe that intelligence is in some sense always subservient to the aims of one’s desires.  And if desires themselves are derivative of, or at least intimately connected to, feeling and passion, then a person’s intelligence is going to be heavily guided by subjectivity; not only in terms of the basic drives that attempt to lead us to a feeling of homeostasis but also in terms of the psychological contentment resulting from that which gives our lives meaning and purpose.

For Kierkegaard, the only way to truly discover the meaning of one’s own life, or to truly know oneself, is to endure the painful burden of choice, which eventually leads to the elimination of a number of possibilities and to the creation of a number of actualities.  One must make certain significant choices in their life such that, once those choices are made, they cannot be unmade; and every time this is done, one is increasingly committing oneself to being (or to becoming) a very specific self.  And Kierkegaard thought that renewing one’s choices daily, in the sense of freely maintaining a commitment to them for the rest of one’s life (rather than simply forgetting that those choices were made and moving on to the next one), was the only way to give those choices any meaning, let alone any prolonged meaning.  Barrett mentions the relation of choice, possibility, and reality as it was experienced by Kierkegaard:

“The man who has chosen irrevocably, whose choice has once and for all sundered him from a certain possibility for himself and his life, is thereby thrown back on the reality of that self in all its mortality and finitude.  He is no longer a spectator of himself as a mere possibility; he is that self in its reality.”

As can be seen with Kierkegaard’s own life, some of these choices (such as breaking off his engagement with Regine Olsen) can be hard to live with, but the pain and suffering experienced in our lives is still ours; it is still a part of who we are as an individual and further affirms the reality of our choices, often adding an inner depth to our lives that we may otherwise never attain.

“The cosmic rationalism of Hegel would have told him his loss was not a real loss but only the appearance of loss, but this would have been an abominable insult to his suffering.”

I have to agree with Kierkegaard here that the felt experience one has is as real as anything ever could be.  To say otherwise, that is, to negate the reality of this or that felt experience, is to deny the reality of any felt experience whatsoever; for they all precipitate from the same subjective currency of our own individual consciousness.  We can certainly distinguish between subjective reality and objective reality, for example, by evaluating which aspects of our experience can and cannot be verified through a third party or through some kind of external instrumentation and measurement.  But this distinction only helps us fine-tune our ability to make successful predictions about our world by better understanding its causal structure; it does not help us determine what is real and what is not real in the broadest sense of the term.  Reality is quite simply what we experience; nothing more and nothing less.

2. Socrates and Hegel; Existence and Reason

Barrett gives us an apt comparison between Kierkegaard and Socrates:

“As the ancient Socrates played the gadfly for his fellow Athenians stinging them into awareness of their own ignorance, so Kierkegaard would find his task, he told himself, in raising difficulties for the easy conscience of an age that was smug in the conviction of its own material progress and intellectual enlightenment.”

And both philosophers certainly made a lasting impact with their use of the Socratic method; Socrates having pioneered this method of teaching and argumentation, and Kierkegaard making heavy use of it in his first major work, Either/Or, and in other works.

“He could teach only by example, and what Kierkegaard learned from the example of Socrates became fundamental for his own thinking: namely, that existence and a theory about existence are not one and the same, any more than a printed menu is as effective a form of nourishment as an actual meal.  More than that: the possession of a theory about existence may intoxicate the possessor to such a degree that he forgets the need of existence altogether.”

And this problem, of not being able to see the forest for the trees, plagues many people who become distracted by the details uncovered while they’re simply trying to better understand the world.  But living a life of contentment and deeply understanding life in general are two very different things, and you can’t invest more in one without diverting time and effort from the other.  Having said that, there’s still some degree of overlap between these two goals; contentment isn’t likely to be maximally realized without some degree of in-depth understanding of the life you’re living and the experiences you’ve had.  As Socrates famously put it: “the unexamined life is not worth living.”  You just don’t want to sacrifice too many of the experiences merely to formulate a theory about them.

How do reason, rationality, and thought relate to any theories that address what is actually real?  Well, the belief in a rational cosmos, such as that held by Hegel, can severely restrict one’s ontology when it comes to the concept of existence itself:

“When Hegel says, “The Real is rational, and the rational is real,” we might at first think that only a German idealist with his head in the clouds, forgetful of our earthly existence, could so far forget all the discords, gaps, and imperfections in our ordinary experience.  But the belief in a completely rational cosmos lies behind the Western philosophic tradition; at the very dawn of this tradition Parmenides stated it in his famous verse, “It is the same thing that can be thought and that can be.”  What cannot be thought, Parmenides held, cannot be real.  If existence cannot be thought, but only lived, then reason has no other recourse than to leave existence out of its picture of reality.”

I think what Parmenides said would have been more accurate (if not correct) had he rephrased it just a little differently, replacing “thought” with “consciously experienced”: “It is the same thing that can be consciously experienced and that can be.”  Rephrasing it as such allows for a much more inclusive conception of what is considered real, since anything that is within the possibility of conscious experience is given an equal claim to being real in some way or another; which means that all the irrational thoughts or feelings, and the gaps and lack of coherency in some of our experiences, are taken into account as a part of reality.  What else could we mean by saying that existence must be lived, other than the fact that existence must be consciously experienced?  If we mean to include the actions we take in the world, and thus the bodily behavior we enact, this is still included in our conscious experience and in the experiences of other conscious agents.  So once the entire experience of every conscious agent is taken into account, what else could there possibly be aside from this that we can truly say is necessary in order for existence to be lived?

It may be that existence and conscious experience are not identical, even if they’re always coincident with one another; but this is in part contingent on whether all matter and energy in the universe is conscious.  If consciousness is actually universal, then perhaps existence and some kind of experientiality or other (whether primitive or highly complex) are in fact identical with one another.  And if not, then it is still the existence of those entities which are conscious that should dominate our consideration, since that is the only kind of existence that can actually be lived at all.

If we reduce our window of consideration from all conscious experience down to merely the subset of reason or rationality within that experience, then of course there’s going to be a problem with structuring theories about the world within such a limitation; for anything that doesn’t fit within its purview simply disappears:

“As the French scientist and philosopher Emile Meyerson says, reason has only one means of accounting for what does not come from itself, and that is to reduce it to nothingness…The process is still going on today, in somewhat more subtle fashion, under the names of science and Positivism, and without invoking the blessing of Hegel at all.”

Although, in order to be charitable to science, we ought to consider the upsurge of interest and development in the fields of psychology and cognitive science in the decades since Barrett’s book was written; for consciousness and the many aspects of our psyche and our experience have taken on a much more important role in terms of what we value in science and the kinds of phenomena we’re trying so hard to understand better.  If psychology and the cognitive and neurosciences encompass the entire goings-on of our brain and our overall subjective experience, and if all the information processed therein is being accounted for, then science should no longer be considered cut off from, or exclusive of, that which lies outside of reason, rationality, or logic.

We mustn’t confuse or conflate reason with science even if science includes the use of reason, for science can investigate the unreasonable; it can provide us with ways of making more and more successful predictions about the causal structure of our experience even as they relate to emotions, intuition, and altered or transcendent states of consciousness like those stemming from meditation, religious experiences or psycho-pharmacological substances.  And scientific fields like moral and positive psychology are also better informing us of what kinds of lifestyles, behaviors, character traits and virtues lead to maximal flourishing and to the most fulfilling lives.  So one could even say that science has been serving as a kind of bridge between reason and existence; between rationality and the other aspects of our psyche that make life worth living.

Going back to the relation between reason and existence, Barrett mentions the conceptual closure of the former to the latter as argued by Kant:

“Kant declared, in effect, that existence can never be conceived by reason-though the conclusions he drew from this fact were very different from Kierkegaard’s.  “Being”, says Kant, “is evidently not a real predicate, or concept of something that can be added to the concept of a thing.”  That is, if I think of a thing, and then think of that thing as existing, my second concept does not add any determinate characteristic to the first…So far as thinking is concerned, there is no definite note or characteristic by which, in a concept, I can represent existence as such.”

And yet, we somehow manage to distinguish between the world of imaginary objects and that of non-imaginary objects (at least most of the time); and we do this using the same physical means of perception in our brain.  I think it’s true to say that both imaginary and non-imaginary objects exist in some sense; for both exist physically as representations in our brains such that we can know them at all, even if they differ in terms of whether the representations are likely to be shared by others (i.e. how they map onto what we might call our external reality).

If I conceive of an apple sitting on a table in front of me, and then I conceive of an apple sitting on a table in front of me that you would also agree is in fact sitting on the table in front of me, then I’ve distinguished conceptually between an imaginary apple and one that exists in our external reality.  And since I can’t ever be certain whether or not I’m hallucinating that there’s an actual apple on the table in front of me (or any other aspect of my experienced existence), I must accept that the common thread of existence, in terms of what it really means for something to exist or not, is entirely grounded on its relation to (my own) conscious experience.  It is entirely grounded on our individual perception; on the way our own brains make predictions about the causes of our sensory input and so forth.

“If existence cannot be represented in a concept, he says (Kierkegaard), it is not because it is too general, remote, and tenuous a thing to be conceived of but rather because it is too dense, concrete, and rich.  I am; and this fact that I exist is so compelling and enveloping a reality that it cannot be reproduced thinly in any of my mental concepts, though it is clearly the life-and-death fact without which all my concepts would be void.”

I actually think Kierkegaard was closer to the mark than Kant was, for he claimed that it was not so much that reason reduces existence to nothingness, but rather that existence is so tangible, rich, and complex that reason can’t fully encompass it.  This makes sense insofar as reason operates through the principles of reduction and abstraction, dissecting a holistic experience into parts that relate to one another in a certain way in order to make sense of that experience.  If the holistic experience is needed to fully appreciate existence, then reason alone isn’t going to be up to the task.  But reason also seems to unify our experiences, and if this unification presupposes existence in order to make sense of that experience, then we can’t fully appreciate existence without reason either.

3. Aesthetic, Ethical, Religious

Kierkegaard lays out three primary stages of living in his philosophy: namely, the aesthetic, the ethical, and the religious.  While there are different ways to interpret this “stage theory”, the most common interpretation treats these stages like a set of concentric circles or spheres where the aesthetic is in the very center, the ethical contains the aesthetic, and the religious subsumes both the ethical and the aesthetic.  Thus, for Kierkegaard, the religious is the most important stage that, in effect, supersedes the others; even though the religious doesn’t eliminate the other spheres, since a religious person is still capable of being ethical or having aesthetic enjoyment, and since an ethical person is still capable of aesthetic enjoyment, etc.

In a previous post where I analyzed Kierkegaard’s Fear and Trembling, I summarized these three stages as follows:

“…The aesthetic life is that of sensuous or felt experience, infinite potentiality through imagination, hiddenness or privacy, and an overarching egotism focused on the individual.  The ethical life supersedes or transcends this aesthetic way of life by relating one to “the universal”, that is, to the common good of all people, to social contracts, and to the betterment of others over oneself.  The ethical life, according to Kierkegaard, also consists of public disclosure or transparency.  Finally, the religious life supersedes the ethical (and thus also supersedes the aesthetic) but shares some characteristics of both the aesthetic and the ethical.

The religious, like the aesthetic, operates on the level of the individual, but with the added component of the individual having a direct relation to God.  And just like the ethical, the religious appeals to a conception of good and evil behavior, but God is the arbiter in this way of life rather than human beings or their nature.  Thus the sphere of ethics that Abraham might normally commit himself to in other cases is thought to be superseded by the religious sphere, the sphere of faith.  Within this sphere of faith, Abraham assumes that anything that God commands is Abraham’s absolute duty to uphold, and he also has faith that this will lead to the best ends…”

While the aesthete generally tends to live life in search of pleasure, always trying to flee from boredom in an ever-increasing fit of desperation to keep finding moments of pleasure despite the futility of such an unsustainable goal, Kierkegaard also claims that the aesthete includes the intellectual who tries to stand outside of life: detached from it, viewing it only as a spectator rather than a participant, and categorizing each experience as either interesting or boring, and nothing more.  And it is this speculative detachment from life, long an underpinning of Western thought, that Kierkegaard objected to; an objection that would be maintained within the rest of the existential philosophy that was soon to come.

If the aesthetic is unsustainable, or if someone (such as Kierkegaard) has given up such a life of pleasure, then all that remain are the other spheres of life.  For Kierkegaard, the ethical life, at least on its own, seemed to be insufficient for making up what was lost in the aesthetic:

“For a really passionate temperament that has renounced the life of pleasure, the consolations of the ethical are a warmed-over substitute at best.  Why burden ourselves with conscience and responsibility when we are going to die, and that will be the end of it?  Kierkegaard would have approved of the feeling behind Nietzsche’s saying, ‘God is dead, everything is permitted.’ ” 

One conclusion we might arrive at, after considering Kierkegaard’s attitude toward the ethical life, is that he may have made a mistake when he renounced the life of pleasure entirely.  Another conclusion we might draw is that Kierkegaard’s conception of the ethical is incomplete or misguided.  If he honestly asks, in the hypothetical absence of God or any option for a religious life, why we ought to burden ourselves with conscience and responsibility, this seems to betray a fundamental flaw in his moral and ethical reasoning.  Likewise for Nietzsche, and Dostoyevsky for that matter, who both echoed similar sentiments: “If God does not exist, then everything is permissible.”

As I’ve argued elsewhere (here, and here), the best form any moral theory can take (such that it’s also sufficiently motivating to follow) is going to be centered around the individual, and it will be grounded on the hypothetical imperative that maximizes their personal satisfaction and life fulfillment.  If some behaviors serve toward best achieving this goal and other behaviors detract from it, as a result of our human psychology and sociology, and thus as a result of the finite range of conditions that we thrive within as human beings, then regardless of whether a God exists or not, everything is most certainly not permitted.  Nietzsche was right, however, when he claimed that the “death of God” (so to speak) would require a means of re-establishing a ground for our morals and values; but this doesn’t mean that all possible grounds for doing so have an equal claim to being true, nor will they all be equally efficacious in achieving one’s primary moral objective.

Part of the problem with Kierkegaard’s moral theorizing is his adoption of Kant’s universal maxim for ethical duty: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”  The problem is that this maxim or “categorical imperative” (the way it’s generally interpreted at least) doesn’t take individual psychological and circumstantial idiosyncrasies into account.  One can certainly insert these idiosyncrasies into Kant’s formulation, but that’s not how Kant intended it to be used nor how Kierkegaard interpreted it.  And yet, Kierkegaard seems to smuggle in such exceptions anyway, by having incorporated it into his conception of the religious way of life:

“An ethical rule, he says, expresses itself as a universal: all men under such-and-such circumstances ought to do such and such.  But the religious personality may be called upon to do something that goes against the universal norm.”

And if something goes against a universal norm, and one feels that they ought to do it anyway (above all else), then they are implicitly denying that a complete theory of ethics involves exclusive universality (a categorical imperative); rather, it must take some kinds of individual exceptions into account.  Kierkegaard seems to prioritize the individual throughout his philosophy, and yet he didn’t think that a theory of ethics could plausibly account for such prioritization.

“The validity of this break with the ethical is guaranteed, if it ever is, by only one principle, which is central to Kierkegaard’s existential philosophy as well as to his Christian faith-the principle, namely, that the individual is higher than the universal. (This means also that the individual is always of higher value than the collective).”

I completely agree with Kierkegaard that the individual is always of higher value than the collective; and this can be seen in any utilitarian moral theory that fails to take into account the psychological states of the individual actors and moral agents carrying out some plan to maximize happiness or utility for the greatest number of people.  If the imperative comes down to my having to do something such that I can no longer live with myself afterward, then the moral theory has failed miserably.  Instead, I should feel that I did the right thing in any moral dilemma (when thinking clearly and while maximally informed of the facts), even when every available option is less than optimal.  We have to be able to live with our own actions, and ultimately with the kind of person that those actions have made us become.  The concrete self that we alone have conscious access to takes on a priority over any abstract conception of other selves or any kind of universality that references only a part of our being.

“Where then as an abstract rule it commands something that goes against my deepest self (but it has to be my deepest self, and herein the fear and trembling of the choice reside), then I feel compelled out of conscience-a religious conscience superior to the ethical-to transcend that rule.  I am compelled to make an exception because I myself am an exception; that is, a concrete being whose existence can never be completely subsumed under any universal or even system of universals.”

Although I agree with Kierkegaard here for the most part, in terms of giving the utmost importance to the individual self, I don’t think that a religious conscience is something one can distinguish from their conscience generally; rather, it would just be one’s conscience, albeit one altered by a set of religious beliefs.  Those beliefs may end up changing how one’s conscience operates, but they don’t change the fact that it is still their conscience nevertheless.

For example, if two people differ with respect to their belief in souls, where one has a religious belief that a fertilized egg has a soul and the other believes that only people with a capacity for consciousness or a personality have souls (or that there are no souls at all), then that difference in belief may affect their consciences differently if both parties were to, for example, donate a fertilized egg to a group of stem-cell researchers.  The former may feel guilty afterward (knowing that the egg they believe to be inhabited by a soul may be destroyed), whereas the latter may feel really good about themselves for aiding important life-saving medical research.  This is why one can only make a proper moral assessment when they take seriously the distinction between what is truly factual about the world (what is supported by evidence) and what is merely a cherished religious or otherwise supernatural belief.  If one’s conscience is primarily operating on true, evidenced facts about the world (along with their intuition), then their conscience and what some may call their religious conscience should be referring to the exact same thing.

Despite the fact that viable moral theories should be centered around the individual rather than the collective, this doesn’t mean that we shouldn’t try and come up with sets of universal moral rules that ought to be followed most of the time.  For example, a rule like “don’t steal from others” is a good rule to follow most of the time, and if everyone in a society strives to obey such a rule, that society will be better off overall as a result; but if my child is starving and nobody is willing to help in any way, then my only option may be to steal a loaf of bread in order to prevent the greater moral crime of letting a child starve.

This example should also highlight a fundamental oversight regarding Kant’s categorical imperative: you can build any number of exceptions into a universal law to make it account for personal circumstances and even psychological idiosyncrasies, thus eliminating the kind of universality that Kant sought to maintain.  For example, if I willed it to be a universal law that “you shouldn’t take your own life,” I could improve it by adding an exception so that the law becomes “you shouldn’t take your own life unless it is to save the life of another,” or, even more individually tailored, “you shouldn’t take your own life unless not doing so will cause you to suffer immensely or inhibit your overall life satisfaction and life fulfillment.”  If someone has a unique life history or psychological predisposition whereby taking their own life is the only option available to them lest they suffer needlessly, then they ought to take their own life regardless of whatever categorical imperative they are striving to uphold.

There is, however, still an important reason for adopting Kant’s universal maxim (at least generally speaking): universal laws like the Golden Rule provide people with an easy heuristic to quickly ascertain the likely moral status of any particular action or behavior.  If we try and tailor in all sorts of idiosyncratic exceptions (with respect to ourselves as well as others), the rules become much more complicated and harder to remember; instead, one should use the universal rules most of the time, and only when they see a legitimate reason to question a rule should they consider whether an exception ought to be made.

Another important point regarding ethical behavior or moral dilemmas is the factor of uncertainty in our knowledge of a situation:

“But even the most ordinary people are required from time to time to make decisions crucial for their own lives, and in such crises they know something of the “suspension of the ethical” of which Kierkegaard writes.  For the choice in such human situations is almost never between a good and an evil, where both are plainly marked as such and the choice therefore made in all the certitude of reason; rather it is between rival goods, where one is bound to do some evil either way, and where the ultimate outcome and even-most of all-our own motives are unclear to us.  The terror of confronting oneself in such a situation is so great that most people panic and try to take cover under any universal rule that will apply, if only it will save them from the task of choosing themselves.”

And rather than making these tough decisions themselves, a lot of people would prefer for others to tell them what’s right and wrong-getting these answers from a religion, from their parents, from a community, etc.  But this negates the intimate consideration of the individual, where each of these difficult choices will lead them down a particular path in life and shape who they become as a person.  The same fear drives a lot of people away from critical thinking: many would prefer to be told what’s true and false rather than think these things through for themselves, and so they gravitate toward institutions that claim to “have all the answers” (even if many who fear critical thinking wouldn’t explicitly admit this, since it is primarily a manifestation of the unconscious).  Kierkegaard highly valued these difficult moments of choice and thought they were fundamental to being a true self living an authentic life.

But despite the fact that universal ethical rules are convenient and fairly effective in most cases, they are still far from perfect; so one will find themselves in a situation where they simply don’t know which rule to use or what to do, and will just have to make the decision that they think will be easiest to live with:

“Life seems to have intended it this way, for no moral blueprint has ever been drawn up that covers all the situations for us beforehand so that we can be absolutely certain under which rule the situation comes.  Such is the concreteness of existence that a situation may come under several rules at once, forcing us to choose outside any rule, and from inside ourselves…Most people, of course, do not want to recognize that in certain crises they are being brought face to face with the religious center of their existence.”

Now I wouldn’t call this the religious center of one’s existence, but rather the moral center of one’s existence; the point is simply that we’re trying to distinguish between universal moral prescriptions (which Kierkegaard labels “the ethical”) and those that are non-universal or dynamic (which Kierkegaard labels “the religious”).  In any case, one can call this whatever they wish, as long as they understand the distinction that’s being made here, which is still an important one worth making.  And along with acknowledging this distinction between universal and individual moral consideration, it’s also important that we engage with the world in a way where our individual emotional “palette” is faced head on rather than denied or suppressed by society or its universal conventions.

Barrett mentions how the denial of our true emotions (brought about by modernity) has inhibited our connection to the transcendent, or at least, inhibited our appreciation or respect for the causal forces that we find to be greater than ourselves:

“Modern man is farther from the truth of his own emotions than the primitive.  When we banish the shudder of fear, the rising of the hair of the flesh in dread, or the shiver of awe, we shall have lost the emotion of the holy altogether.”

But beyond acknowledging our emotions, Kierkegaard has something to say about how we choose to cope with them, most especially that of anxiety and despair:

“We are all in despair, consciously or unconsciously, according to Kierkegaard, and every means we have of coping with this despair, short of religion, is either unsuccessful or demoniacal.”

And here is another place where I have to part ways with Kierkegaard, despite his brilliance in examining the human condition and the many complicated aspects of our psychology; for relying on religion (or more specifically, relying on religious belief) to cope with despair is but another distraction from the truth of our own existence.  It is an inauthentic way of living life, since one is avoiding the way the world really is.  I think it’s far more effective and authentic for people to work on changing their attitude toward life and the circumstances they find themselves in without sacrificing a reliable epistemology in the process.  We need to provide ourselves with avenues for emotional expression, work to increase our mindfulness and positivity, and constantly strive to become better versions of ourselves by finding things we’re passionate about and by living a philosophically examined life.  This doesn’t mean that we have to rid ourselves of the rituals, fellowship, and meditative benefits that religion offers; but rather that we should merely dispense with the supernatural component and the dogma and irrationality that are typically attached to and promoted by religion.

4. Subjective and Objective Truth

Kierkegaard ties his conception of the religious life to the meaning of truth itself, and he distinguishes this mode of living with the concept of religious belief.  While one can assimilate a number of religious beliefs just as one can do with non-religious beliefs, for Kierkegaard, religion itself is something else entirely:

“If the religious level of existence is understood as a stage upon life’s way, then quite clearly the truth that religion is concerned with is not at all the same as the objective truth of a creed or belief.  Religion is not a system of intellectual propositions to which the believer assents because he knows it to be true, as a system of geometry is true; existentially, for the individual himself, religion means in the end simply to be religious.  In order to make clear what it means to be religious, Kierkegaard has to reopen the whole question of the meaning of truth.”

And his distinction between objective and subjective truth is paramount to understanding this difference.  One could say perhaps that by subjective truth he is referring to a truth that must be embodied and have an intimate relation to the individual:

“But the truth of religion is not at all like [objective truth]: it is a truth that must penetrate my own personal existence, or it is nothing; and I must struggle to renew it in my life every day…Strictly speaking, subjective truth is not a truth that I have, but a truth that I am.”

The struggle for renewal goes back to Kierkegaard’s conception of how the meaning a person finds in their life is in part dependent on some kind of personal commitment; it relies on making certain choices that one remakes day after day, keeping these personal choices in focus so as to not lose sight of the path we’re carving out for ourselves.  And so it seems that he views subjective truth as intimately connected to the meaning we give our lives, and to the kind of person that we are now.

Perhaps another way we can look at this conception, especially as it differs from objective truth, is to examine the relation between language, logic, and conscious reasoning on the one hand, and intuition, emotion, and the unconscious mind on the other.  Objective truth is generally communicable, it makes explicit predictions about the causal structure of reality, and it involves a way of unifying our experiences into some coherent ensemble; but subjective truth involves felt experience, emotional attachment, and a more automated sense of familiarity and relation between ourselves and the world.  And both of these facets are important for our ability to navigate the world effectively while also feeling that we’re psychologically whole or complete.

5. The Attack Upon Christendom

Kierkegaard points out an important shift in modern society that Barrett mentioned early on in this book; the move toward mass society, which has effectively eaten away at our individuality:

“The chief movement of modernity, Kierkegaard holds, is a drift toward mass society, which means the death of the individual as life becomes ever more collectivized and externalized.  The social thinking of the present age is determined, he says, by what might be called the Law of Large Numbers: it does not matter what quality each individual has, so long as we have enough individuals to add up to a large number-that is, to a crowd or mass.”

And false metrics of success like economic growth and population growth have detracted from the quality of life each of us is capable of achieving.  Because of our inclinations as a social species, we are (perhaps unconsciously) drawn toward the potential survival benefits of joining progressively larger groups.  In terms of industrialization, we've primarily used technology to support more people on the globe and to increase the output of each individual worker (to the benefit of the wealthiest), rather than to substantially reduce the number of hours worked per week or to eliminate poverty outright.  This has got to be the biggest failure of the industrial revolution and of capitalism (when not regulated properly), and one that's so often taken for granted.

Because of the greed that's consumed the moral compass of those at the top of our sociopolitical hierarchy, our lives have been funneled into a military-industrial complex that will only release us from servitude when the rich eventually stop asking for more and more of the fruits of our labor.  And through the push of marketing and social pressure, we're tricked into wanting to maintain society as it is; to continue buying consumable garbage that we don't really need and to be complacent with such a lifestyle.  The massive externalization of our psyche has led to a kind of, as Barrett put it earlier, spiritual poverty.  And Kierkegaard was well aware of this psychological degradation brought on by modernity's unchecked collectivization.

Both Kierkegaard and Nietzsche also saw a problem with how modernity had effectively killed God; though Kierkegaard and Nietzsche differed in their attitudes toward organized religion, Christianity in particular:

“The Grand Inquisitor, the Pope of Popes, relieves men of the burden of being Christian, but at the same time leaves them the peace of believing they are Christians…Nietzsche, the passionate and religious atheist, insisted on the necessity of a religious institution, the Church, to keep the sheep in peace, thus putting himself at the opposite extreme from Kierkegaard; Dostoevski in his story of the Grand Inquisitor may be said to embrace dialectically the two extremes of Kierkegaard and Nietzsche.  The truth lies in the eternal tension between Christ and the Grand Inquisitor.  Without Christ the institution of religion is empty and evil, but without the institution as a means of mitigating it the agony in the desert of selfhood is not viable for most men.”

Modernity had helped produce organized religion, thus diminishing the personal, individualistic dimension of spirituality which Kierkegaard saw as indispensable; but modernity also facilitated the “death of God” making even organized religion increasingly difficult to adhere to since the underlying theistic foundation was destroyed for many.  Nietzsche realized the benefits of organized religion since so many people are unable to think critically for themselves, are unable to find an effective moral framework that isn’t grounded on religion, and are unable to find meaning or stability in their lives that isn’t grounded on belief in God or in some religion or other.  In short, most people aren’t able to deal with the burdens realized within existentialism.

Because much of European culture and so many of its institutions had been built around Christianity, the religion became much more collectivized and less personal, but it also became more vulnerable to being uprooted by new ideas that were given power by that very same collective.  Thus, it was the mass externalization of religion that made religion that much more accessible to the externalization of reason, as exemplified by scientific progress and technology.  Reason and religion could no longer coexist in the way they once had, because this externalization drastically reduced our ability to compartmentalize the two.  And this made it that much harder to try to reestablish a more personal form of Christianity, as Kierkegaard had been longing for.

Reason also led us to the realization of our own finitude, and once this view was taken more seriously, it created yet another hurdle for the masses to maintain their religiosity; for once death is seen as inevitable, and immortality accepted as an impossibility, one of the most important uses for God becomes null and void:

“The question of death is thus central to the whole of religious thought, is that to which everything else in the religious striving is an accessory: ‘If there is no immortality, what use is God?’”

The fear of death is a powerful motivator for adopting any number of beliefs that might help manage the cognitive dissonance that results from it, and the desire for eternal happiness makes death that much more frightening.  But once death is truly accepted, many of the other beliefs that were meant to provide comfort or consolation are no longer necessary or meaningful.  Kierkegaard himself didn't think we could cope with the despair of an inevitable death without a religion that promised some way to overcome it and to transcend our life and existence in this world, and he thought Christianity was the best religion to accomplish this goal.

It is on this point in particular, his failure to deal properly with death, that I find Kierkegaard lacking an important strength as an existentialist; for it seems that in order to live an authentic life, a person must accept death, that is, they must accept the finitude of our individual human existence.  As Heidegger would later go on to say, buried deep within the idea of death as the possibility of impossibility is a basis for affirming one's own life, through the acceptance of our mortality and the uncertainty that surrounds it.


Irrational Man: An Analysis (Part 2, Chapter 4: “The Sources of Existentialism in the Western Tradition”)

In the previous post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 3: The Testimony of Modern Art, where Barrett illustrates how existentialist thought is best exemplified in modern art.  The feelings of alienation, discontent, and meaninglessness pervade a number of modern expressions, and so as is so often the case throughout history, we can use artistic expression as a window to peer inside our evolving psyche and witness the prevailing views of the world at any point in time.

In this post, I’m going to explore Part II, Chapter 4: The Sources of Existentialism in the Western Tradition.  This chapter has a lot of content and is quite dense, and so naturally this fact is reflected in the length of this post.

Part II: “The Sources of Existentialism in the Western Tradition”

Ch. 4 – Hebraism and Hellenism

Barrett begins this chapter by pointing out two forces governing the historical trajectory of Western civilization and Western thought: Hebraism and Hellenism.  He mentions an excerpt from Matthew Arnold's Culture and Anarchy, a book concerned with the situation of nineteenth-century England, where Arnold writes:

“We may regard this energy driving at practice, this paramount sense of the obligation of duty, self-control, and work, this earnestness in going manfully with the best light we have, as one force.  And we may regard the intelligence driving at those ideas which are, after all, the basis of right practice, the ardent sense for all the new and changing combinations of them which man’s development brings with it, the indomitable impulse to know and adjust them perfectly, as another force.  And these two forces we may regard as in some sense rivals–rivals not by the necessity of their own nature, but as exhibited in man and his history–and rivals dividing the empire of the world between them.  And to give these forces names from the two races of men who have supplied the most splendid manifestations of them, we may call them respectively the forces of Hebraism and Hellenism…and it ought to be, though it never is, evenly and happily balanced between them.”

And while we may have felt a stronger attraction to one force over the other at different points in our history, both forces have played an important role in how we’ve structured our individual lives and society at large.  What distinguishes these two forces ultimately comes down to the difference between doing and knowing; between the Hebrew’s concern for practice and right conduct, and the Greek’s concern for knowledge and right thinking.  I can’t help but notice this Hebraistic influence in one of the earliest expressions of existentialist thought when Soren Kierkegaard (in one of his earlier journals) had said: “What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act”.  Here we see that some set of moral virtues are what form the fundamental substance and meaning of life within Hebraism (and by extension, Kierkegaard’s philosophy), in contrast with the Hellenistic subordination of the moral virtues to those of the intellectual variety.

I for one am tempted to mention Aristotle here: a Greek philosopher who, despite all of his work on science, logic, and epistemology, formulated and extolled moral virtues, laying the foundation for most if not all modern systems of virtue ethics.  And sure enough, Arnold does mention Aristotle briefly, but he says that for Aristotle the moral virtues are “but the porch and access to the intellectual (virtues), and with these last is blessedness.”  So it's still fair to say that the intellectual virtues were given priority over the moral virtues within Greek thought, even if the moral virtues were an important element, serving as a moral means to a combined moral-intellectual end.  We're still left, then, with a distinction in what is prioritized, or what the ultimate teleology is, for Hebraism and Hellenism: moral man versus intellectual man.

One perception of Arnold’s that colors his overall thesis is a form of uneasiness that he sees as lurking within the Hebrew conception of man; an uneasiness stemming from a conception of man that has been infused with the idea of sin, which is simply not found in Greek philosophy.  Furthermore, this idea of sin that pervades the Hebraistic view of man is not limited to one’s moral or immoral actions; rather it penetrates into the core being of man.  As Barrett puts it:

“But the sinfulness that man experiences in the Bible…cannot be confined to a supposed compartment of the individual’s being that has to do with his moral acts.  This sinfulness pervades the whole being of man: it is indeed man’s being, insofar as in his feebleness and finiteness as a creature he stands naked in the presence of God.”

So we have a predominantly moral conception of man within Hebraism, but one that is amalgamated with an essential finitude, an acknowledgement of imperfection, and the expectation of our being morally flawed human beings.  Now when we compare this to the philosophers of Ancient Greece, who had a more idealistic conception of man, where humans were believed to have the capacity to access and understand the universe in its entirety, then we can see the groundwork that was laid for somewhat rivalrous but nevertheless important motivations and contributions to the cultural evolution of Western civilization: science and philosophy from the Greeks, and a conception of “the Law” entrenched in religion and religious practice from the Hebrews.

1. The Hebraic Man of Faith

Barrett begins here by explaining that the Law, though important for having bound the Jewish community together for centuries despite their many tribulations as a people, is not itself central to Hebraism; rather, what lies at Hebraism's center is the basis of the Law.  To see what this basis is, we are directed to reread the Book of Job in the Hebrew Bible:

“…reread it in a way that takes us beyond Arnold and into our own time, reread it with an historical sense of the primitive or primary mode of existence of the people who gave expression to this work.  For earlier man, the outcome of the Book of Job was not such a foregone conclusion as it is for us later readers, for whom centuries of familiarity and forgetfulness have dulled the violence of the confrontation between man and God that is central to the narrative.”

Rather than simply taking the commandments of one’s religion for granted and following them without pause, Job’s face-to-face confrontation with his Creator and his demand for justification was in some sense the first time the door had been opened to theological critique and reflection.  The Greeks did something similar where eventually they began to apply reason and rationality to examine religion, stepping outside the confines of simply blindly following religious traditions and rituals.  But unlike the Greek, the Hebrew does not proceed with this demand of justification through the use of reason but rather by a direct encounter of the person as a whole (Job, and his violence, passion, and all the rest) with an unknowable and awe-inspiring God.  Job doesn’t solve his problem with any rational resolution, but rather by changing his entire character.  His relation to God involves a mutual confrontation of two beings in their entirety; not just a rational facet of each being looking for a reasonable explanation from one another, but each complete being facing one another, prepared to defend their entire character and identity.

Barrett mentions the Jewish philosopher Martin Buber here to help clarify things a little, by noting that this relation between Job and God is a relation between an I and a Thou (or a You).  Since Barrett doesn’t explain Buber’s work in much detail, I’ll briefly digress here to explain a few key points.  For those unfamiliar with Buber’s work, the I-Thou relation is a mode of living that is contrasted with another mode centered on the connection between an I and an It.  Both modes of living, the I-Thou and the I-It, are, according to Buber, the two ways that human beings can address existence; the two modes of living required for a complete and fulfilled human being.  The I-It mode encompasses the world of experience and sensation; treating entities as discrete objects to know about or to serve some use.  The I-Thou mode on the other hand encompasses the world of relations itself, where the entities involved are not separated by some discrete boundary, and where a living relationship is acknowledged to exist between the two; this mode of existence requires one to be an active participant rather than merely an objective observer.

It is Buber’s contention that modern life has entirely ignored the I-Thou relation, which has led to a feeling of unfulfillment and alienation from the world around us.   The first mode of existence, that of the I-It, involves our acquiring data from the world, analyzing and categorizing it, and then theorizing about it; and this leads to a subject-object separation, or an objectification of what is being experienced.  According to Buber, modern society almost exclusively uses this mode to engage with the world.  In contrast, with the I-Thou relation the I encounters the object or entity such that both are transformed by the relation; and there is a type of holism at play here where the You or Thou is not simply encountered as a sum of its parts but in its entirety.  To put it another way, it’s as if the You encountered were the entire universe, or that somehow the universe existed through this You.

Since this concept is a little nebulous, I think the best way to summarize Buber’s main philosophical point here is to say that we human beings find meaning in our lives through our relationships, and so we need to find ways of engaging the world such as to maximize our relationships with it; and this is not limited to forming relationships with fellow human beings, but with other animals, inanimate objects, etc., even if these relationships differ from one another in any number of ways.

I actually find some relevance between Buber's “I and Thou” conception and Nietzsche's idea of eternal recurrence: the idea that, given an infinite amount of time and a finite number of ways that matter and energy can be arranged, anything that has ever happened or ever will happen will recur an infinite number of times.  In The Gay Science, Nietzsche mentions how the concept of eternal recurrence was, to him, horrifying and paralyzing.  But he also concluded that the desire for an eternal return or recurrence would show the ultimate affirmation of one's life:

“What, if some day or night a demon were to steal after you into your loneliest loneliness and say to you: ‘This life as you now live it and have lived it, you will have to live once more and innumerable times more’ … Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus?  Or have you once experienced a tremendous moment when you would have answered him: ‘You are a god and never have I heard anything more divine.’”

The reason I find this relevant to Buber’s conception is two-fold: first of all, the fact that the universe is causally connected makes the universe inseparable to some degree, where each object or entity could be seen to, in some sense, represent the rest of the whole; and secondly, if one is to wish for the eternal recurrence, then they have an attitude toward the world that is non-resistant, and that can learn to accept the things that are out of one’s control.  The universe effectively takes on the character of fate itself, and offers an opportunity to a being such as ourselves, to have a kind of “faith in fate”; to have a relation of trust with the universe as it is, as it once was, and as it will be in the future.

Now the kind of faith I’m speaking of here isn’t a brand that contradicts reason or evidence, but rather is simply a form of optimism and acceptance that colors one’s expectations and overall experience.  And since we are a part of this universe too, our attitude towards it should in many ways reflect our overall relation to it; which brings us back to Buber, where any encounter I might have with the universe (or any “part” of it) is an encounter with a You, that is to say, it is an I-Thou relation.

This “faith in fate” concept I just alluded to is a good segue to return back to the relation between Job and his God within Hebraism, as it was a relation of never-ending faith.  But importantly, as Barrett points out, this faith of Job’s takes on many shapes including that of anger, dismay, revolt, and confusion.  For Job says, “Though he slay me, yet will I trust in him…but I will maintain my own ways before him.”  So Job’s faith is his maintaining a form of trust in his Creator, even though he says that he will also retain his own identity, his dispositions, and his entire being while doing so.  And this trust ultimately forms the relation between the two.  Barrett describes the kind of faith at play here as more fundamental and primary than that which many modern-day religious proponents would lay claim to.

“Faith is trust before it is belief-belief in the articles, creeds, and tenets of a Church with which later religious history obscures this primary meaning of the word.  As trust, in the sense of the opening up of one being toward another, faith does not involve any philosophical problem about its position relative to faith and reason.  That problem comes up only later when faith has become, so to speak, propositional, when it has expressed itself in statements, creeds, systems.  Faith as a concrete mode of being of the human person precedes faith as the intellectual assent to a proposition, just as truth as a concrete mode of human being precedes the truth of any proposition.”

Although I see faith as belief as fundamentally flawed and dangerous, I can certainly respect the idea of faith as trust, and consequently the idea of faith as a concrete mode of being, where this mode of being is effectively an attitude of openness taken toward another.  And whereas I agree with Barrett's claim that truth as a concrete mode of human being precedes the truth of any proposition, in the sense that a basis and desire for truth are needed prior to evaluating the truth value of any proposition, I don't believe one can ever justifiably move from faith as trust to faith as belief, for the simple reason that if one has good reason to trust another, then one doesn't need faith to mediate any beliefs stemming from that trust.  And of course, what matters most here is describing and evaluating what the “Hebrew man of faith” consists of, rather than criticizing the concept of faith itself.

Another interesting historical development stemming from Hebraism pertains to the concept of faith as it evolved within Protestantism.  As Barrett tells us:

“Protestantism later sought to revive this face-to-face confrontation of man with his God, but could produce only a pallid replica of the simplicity, vigor, and wholeness of this original Biblical faith…Protestant man would never have dared confront God and demand an accounting of His ways.  That era in history had long since passed by the time we come to the Reformation.”

As an aside, it’s worth mentioning here that Christianity actually developed as a syncretism between Hebraism and Hellenism; the two very cultures under analysis in this chapter.  By combining Jewish elements (e.g. monotheism, the substitutionary atonement of sins through blood-magic, apocalyptic-messianic resurrection, interpretation of Jewish scriptures, etc.) with Hellenistic religious elements (e.g. dying-and-rising savior gods, virgin birth of a deity, fictive kinship, etc.), the cultural diffusion that occurred resulted in a religion that was basically a cosmopolitan personal salvation cult.

But eventually Christianity became a state religion (after the age of Constantine), resulting in a theocracy that heavily enforced a particular conception of God onto the people.  Once this occurred, the religion was now fully contained within a culture that made it very difficult to question or confront anyone about these conceptions, in order to seek justification for them.  And it may be that the intimidation propagated by the prevailing religious authorities became conflated with an attribute of God; where a fear of questioning any conception of God became integrated in a theological belief about God.

Perhaps this was because the primary conceptions of God, once Christianity entered the Medieval Period, were externally imposed on everybody rather than discovered in a more personal and introspective way (even if unavoidably initiated and reinforced by the surrounding culture); the attributes of God were thereby externalized, becoming more heavily influenced by the seemingly unquestionable claims of those in power.  Either way, the historical contingencies surrounding the advent of Christianity involved sectarian battles, with the winning sect using its dogmatic authority to suppress the views (and the Christian Gospels/scriptures) of the other sects.  And this may have inhibited people from ever questioning their God or demanding justification for what they interpreted to be God's actions.

One final point in this section that I’d like to highlight is in regard to the concept of knowledge as it relates to Hebraism.  Barrett distinguishes this knowledge from that of the Greeks:

“We have to insist on a noetic content in Hebraism: Biblical man too had his knowledge, though it is not the intellectual knowledge of the Greek.  It is not the kind of knowledge that man can have through reason alone, or perhaps not through reason at all; he has it rather through body and blood, bones and bowels, through trust and anger and confusion and love and fear; through his passionate adhesion in faith to the Being whom he can never intellectually know.  This kind of knowledge a man has only through living, not reasoning, and perhaps in the end he cannot even say what it is he knows; yet it is knowledge all the same, and Hebraism at its source had this knowledge.”

I may not entirely agree with Barrett here, but any disagreement is only likely to be a quibble over semantics, relating to how he and I define noetic and how we each define knowledge.  The word noetic actually derives from the Greek adjective noētikos which means “intellectual”, and thus we are presumably considering the intellectual knowledge found in Hebraism.  Though this knowledge may not be derived from reason alone, I don’t think (as Barrett implies above) that it’s even possible to have any noetic content without at least some minimal use of reason.  It may be that the kind of intellectual content he’s alluding to is that which results from a kind of synthesis between reason and emotion or intuition, but it would seem that reason would still have to be involved, even if it isn’t as primary or dominant as in the intellectual knowledge of the Greek.

With regard to how Barrett and I are each defining knowledge, I must say that, like most other philosophers going all the way back to Plato, I distinguish between knowledge and all other beliefs, because knowledge is merely a subset of one's beliefs (otherwise any belief whatsoever would count as knowledge).  To distinguish knowledge as a subset of one's beliefs, my definition of knowledge can be roughly stated as:

“Recognized patterns of causality that are stored into memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by having made successful predictions and/or successfully accomplishing goals through the use of said recalled patterns.”

My conception of knowledge also takes what one might call unconscious knowledge into account (which may operate more automatically and less explicitly than conscious knowledge); as long as it is acquired information that allows you to accomplish goals and make successful predictions about the causal structure of your experience (whether internal feelings or other mental states, or perceptions of the external world), it counts as knowledge nevertheless.  Now there may be different degrees of reliability and usefulness of different kinds of knowledge (such as intuitive knowledge versus rational or scientific knowledge), but those distinctions don’t need to be parsed out here.

What Barrett seems to be describing here as a Hebraistic form of knowledge is something that is deeply embodied in the human being; in the ways that one lives their life that don’t involve, or aren’t dominated by, reason or conscious abstractions.  Instead, there seems to be a more organic process involved of viewing the world and interacting with it in a manner relying more heavily on intuition and emotionally-driven forms of expression.  But, in order to give rise to beliefs that cohere with one another to some degree, to form some kind of overarching narrative, reason (I would argue) is still employed.  There’s still some logical structure to the web of beliefs found therein, even if reason and logic could be used more rigorously and explicitly to turn many of those beliefs on their head.

And when it comes to the Hebraistic conception of God, including the passionate adhesion in faith to a Being whom one can never intellectually know (as Barrett puts it), I think this can be better explained by the fact that we do have reason and rationality, unlike most other animals, as well as cognitive biases such as hyperactive agency detection.  It seems to me that the Hebraistic God concept is more or less derived from an agglomeration of the unknown sources of one’s emotional experiences (especially those involving an experience of the transcendent) and the unpredictable attributes of life itself, then ascribing agency to that ensemble of properties, and (to use Buber’s terms) establishing a relationship with that perceived agency; and in this case, the agent is simply referred to as God (Yahweh).

But in order to infer the existence of such an ensemble, it would seem to require a process of abstracting from particular emotional and unpredictable elements and instances of one’s experience to conclude some universal source for all of them.  Perhaps if this process is entirely unconscious we can say that reason wasn’t at all involved in it, but I suspect that the ascription of agency to something that is on par with the universe itself and its large conjunction of properties which vary over space and time, including its unknowable character, is more likely mediated by a combination of cognitive biases, intuition, emotion, and some degree of rational conscious inference.  But Barrett’s point still stands that the noetic content in Hebraism isn’t dominated by reason as in the case of the Greeks.

2. Greek Reason

Even though existential philosophy is largely a rebellion against the Platonic ideas that have permeated Western thought, Barrett reminds us that there is still some existential aspect to Plato’s works.  Perhaps this isn’t surprising once one realizes that he actually began his adult life aspiring to be a dramatic poet.  Eventually he abandoned this dream, undergoing a sort of conversion, and decided to dedicate the rest of his life to the search for wisdom as per the inspiration of Socrates.  Even though he engaged in a lifelong war against the poets, he still maintained a piece of his poetic character in his writings, including up to the end when he wrote his great creation myth, the Timaeus.

By far, Plato’s biggest contribution to Western thought was his differentiating rational consciousness from the rest of our human psyche.  Prior to this move, rationality was in some sense subsumed under the same umbrella as emotion and intuition.  And this may be one of the biggest changes to how we think, and one that is so often taken for granted.  Barrett describes the position we’re in quite well:

“We are so used today to taking our rational consciousness for granted, in the ways of our daily life we are so immersed in its operations, that it is hard at first for us to imagine how momentous was this historical happening among the Greeks.  Steeped as our age is in the ideas of evolution, we have not yet become accustomed to the idea that consciousness itself is something that has evolved through long centuries and that even today, with us, is still evolving.  Only in this century, through modern psychology, have we learned how precarious a hold consciousness may exert upon life, and we are more acutely aware therefore what a precious deal of history, and of effort, was required for its elaboration, and what creative leaps were necessary at certain times to extend it beyond its habitual territory.”

Barrett’s mention of evolution is important here, because I think we ought to distinguish between the two forms of evolution that have affected how we think and experience the world.  On the one hand, we have our biological evolution to consider, where our brains have undergone dramatic changes in terms of the level of consciousness we actually experience and the degree of causal complexity that the brain can model; and on the other hand, we have our cultural evolution to consider, where our brains have undergone a number of changes in terms of the kinds of “programs” we run on them, how those programs are run and prioritized, and how we process and store information in various forms of external hardware.

In regard to biological evolution, long before our ancestors evolved into humans, the brain had nothing more than a protoself representation of our body and self (to use the neuroscientist Antonio Damasio’s terminology); it had nothing more than an unconscious state that served as a sort of basic map in the brain tracking the overall state of the body as a single entity in order to accomplish homeostasis.  Then, beyond this protoself there evolved a more sophisticated level of core consciousness where the organism became conscious of the feelings and emotions associated with the body’s internal states, and also became conscious that its feelings and thoughts were its own, further enhancing the sense of self, although still limited to the here-and-now or the present moment.  Finally, beyond this layer of self there evolved an extended consciousness: a form of consciousness that required far more memory, but which brought the organism into an awareness of the past and future, forming an autobiographical self with a perceptual simulator (imagination) that transcended space and time in some sense.

Once humans developed language, then we began to undergo our second form of evolution, namely culture.  After cultural evolution took off, human civilization led to a plethora of new words, concepts, skills, interests, and ways of looking at and structuring the world.  And this evolutionary step was vastly accelerated by written language, the scientific method, and eventually the invention of computers.  But in the time leading up to the scientific revolution, the industrial revolution, and finally the advent of computers and the information age, it was arguably Plato’s conceptualization of rational consciousness that paved the way forward to eventually reach those technological and epistemological feats.

It was Plato’s analysis of the human psyche and his effectively distilling the process of reason and rationality from the rest of our thought processes that allowed us to manipulate it, to enhance it, and to explore the true possibilities of this markedly human cognitive capacity.  Aristotle and many others since have merely built upon Plato’s monumental work, developing formal logic, computation, and other means of abstract analysis and information processing.  With the explicit use of reason, we’ve been able to overcome many of our cognitive biases (serving as a kind of “software patch” to our cognitive bugs) in order to discover many true facts about the world, allowing us to unlock a wealth of knowledge that had previously been inaccessible to us.  And it’s important to recognize that Plato’s impact on the history of philosophy has highlighted, more than anything else, our overall psychic evolution as a species.

Despite all the benefits that culture has brought us, there has been one inherent problem with the cultural evolution we’ve undergone: a large desynchronization between our cultural and biological evolution.  That is to say, our culture has evolved far, far faster than our biology ever could, and thus our biology hasn’t kept up with, or adapted us to, the cultural environment we’ve been creating.  And I believe this is a large source of our existential woes; for we have become a “fish out of water” (so to speak) where modern civilization and the way we’ve structured our day-to-day lives is incredibly artificial, filled with a plethora of supernormal stimuli and other factors that aren’t as compatible with our natural psychology.  It makes perfect sense then that many people living in the modern world have had feelings of alienation, loneliness, meaninglessness, anxiety, and disorientation.  And in my opinion, there’s no good or pragmatic way to fix this aside from engineering our genes such that our biology is able to catch up to our ever-changing cultural environment.

It’s also important to recognize that Plato’s idea of the universal, explicated in his theory of Forms, was one of the main impetuses for contemporary existential philosophy; not for its endorsement but rather because the idea that a universal such as “humanity” was somehow more real than any actual individual person fueled a revolt against such notions.  And it wasn’t until Kierkegaard and Nietzsche appeared on the scene in the nineteenth century where we saw an explicit attempt to reverse this Platonic idea; where the individual was finally given precedence and priority over the universal, and where a philosophy of existence was given priority over one of essence.  But one thing the existentialists maintained, as derived from Plato, was his conception that philosophizing was a personal means of salvation, transformation, and a concrete way of living (though this was, according to Plato, best exemplified by his teacher Socrates).

As for the modern existentialists, it was Kierkegaard who eventually brought the figure of Socrates back to life, long after Plato’s later portrayal of Socrates, which had effectively dehumanized him:

“The figure of Socrates as a living human presence dominates all the earlier dialogues because, for the young Plato, Socrates the man was the very incarnation of philosophy as a concrete way of life, a personal calling and search.  It is in this sense too that Kierkegaard, more than two thousand years later, was to revive the figure of Socrates-the thinker who lived his thought and was not merely a professor in an academy-as his precursor in existential thinking…In the earlier, so-called “Socratic”, dialogues the personality of Socrates is rendered in vivid and dramatic strokes; gradually, however, he becomes merely a name, a mouthpiece for Plato’s increasingly systematic views…”

By the time we get to Plato’s student, Aristotle, philosophy had become a purely theoretical undertaking, effectively replacing the subjective qualities of a self-examined life with a far less visceral objective analysis.  Indeed, by this point in time, as Barrett puts it: “…the ghost of the existential Socrates had at last been put to rest.”

As in all cases throughout history, we must take the good with the bad.  And we very likely wouldn’t have the sciences today had it not been for Plato detaching reason from the poetic, religious, and mythological elements of culture and thought, thus giving reason its own identity for the first time in history.  In any case, we need to try to understand Greek rationalism as best we can, so that we can understand the motivations of those who later objected to it, especially within modern existentialism.

When we look back to Aristotle’s Nicomachean Ethics, for example, we find a fairly balanced perspective of human nature and the many motivations that drive our behavior.  But, in evaluating all the possible goods that humans can aim for, in order to derive an answer to the ethical question of what one ought to do above all else, Aristotle claimed that the highest life one could attain was the life of pure reason, the life of the philosopher and the theoretical scientist.  Aristotle thought that reason was the highest part of our personality, where one’s capacity for reason was treated as the center of one’s real self and the core of one’s identity.  It is this stark description of rationalism that diffused through Western philosophy until butting heads with modern existentialist thought.

Aristotle’s rationalism even permeated the Christianity of the Medieval period, where it maintained an albeit uneasy relationship between faith and reason as the center of the human personality.  And the quest for a complete view of the cosmos further propagated a valuing of reason as the highest human function.  The inherent problems with this view simply didn’t surface until some later thinkers began to see human existence and the potential of our reason as having a finite character with limitations.  Only then was the grandiose dream of having a complete knowledge of the universe and of our existence finally shattered.  Once this goal was seen as unattainable, we were left alone on an endless road of knowledge with no prospects for any kind of satisfying conclusion.

We can certainly appreciate the value of theoretical knowledge, and even develop a passion for discovering it, but we mustn’t lose sight of the embodied, subjective qualities of our human nature; nor can we successfully argue any longer that the latter can be dismissed due to a goal of reaching some absolute state of knowledge or being.  That goal is not within our reach, and so trying to make arguments that rely on its possibility is nothing more than an exercise of futility.

So now we must ask ourselves a very important question:

“If man can no longer hold before his mind’s eye the prospect of the Great Chain of Being, a cosmos rationally ordered and accessible from top to bottom to reason, what goal can philosophers set themselves that can measure up to the greatness of that old Greek ideal of the bios theoretikos, the theoretical life, which has fashioned the destiny of Western man for millennia?”

I would argue that the most important goal for philosophy has been and always will be the quest for discovering as many moral facts as possible, such that we can attain eudaimonia as Aristotle advocated.  But rather than holding a life of reason as the greatest good to aim for, we should simply aim for maximal life fulfillment and overall satisfaction with one’s life, and not presuppose what will accomplish that.  We need to use science and subjective experience to inform us of what makes us happiest and most fulfilled (taking advantage of the psychological sciences), rather than making assumptions about what accomplishes this best and simply arguing from the armchair.

And because our happiness and life fulfillment are necessarily evaluated through subjectivity, we mustn’t make the same mistakes of positivism and simply discard our subjective experience.  Rather we should approach morality with the recognition that we are each an embodied subject with emotions, intuitions, and feelings that motivate our desires and our behavior.  But we should also ensure that we employ reason and rationality in our moral analysis, so that our irrational predispositions don’t lead us farther away from any objective moral truths waiting to be discovered.  I’m sympathetic to a quote of Kierkegaard’s that I mentioned at the beginning of this post:

“What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act.”

I agree with Kierkegaard here, in that moral imperatives are the most important topic in philosophy, and should be the most important driving force in one’s life, rather than simply a drive for knowledge for its own sake.  But of course, in order to get clear about what one must do, one first has to know a number of facts pertaining to how any moral imperatives are best accomplished, and what those moral imperatives ought to be (as well as which are most fundamental and basic).  I think the most basic axiomatic moral imperative within any moral system that is sufficiently motivating to follow is going to be that which maximizes one’s life fulfillment; that which maximizes one’s chance of achieving eudaimonia.  I can’t imagine any greater goal for humanity than that.


The Experientiality of Matter

If there’s one thing that Nietzsche advocated for, it was to eliminate dogmatism and absolutism from one’s thinking.  And although I generally agree with him here, I do think that there is one exception to this rule.  One thing that we can be absolutely certain of is our own conscious experience.  Nietzsche actually had something to say about this (in Beyond Good and Evil, which I explored in a previous post), where he responds to Descartes’ famous adage “cogito ergo sum” (the very adage associated with my blog!), and he basically says that Descartes’ conclusion (“I think therefore I am”) shows a lack of reflection concerning the meaning of “I think”.  He wonders how he (and by extension, Descartes) could possibly know for sure that he was doing the thinking rather than the thought doing the thinking, and he even considers the possibility that what is generally described as thinking may actually be something more like willing or feeling or something else entirely.

Despite Nietzsche’s criticism against Descartes in terms of what thought is exactly or who or what actually does the thinking, we still can’t deny that there is thought.  Perhaps if we replace “I think therefore I am” with something more like “I am conscious, therefore (my) conscious experience exists”, then we can retain some core of Descartes’ insight while throwing out the ambiguities or uncertainties associated with how exactly one interprets that conscious experience.

So I’m in agreement with Nietzsche in the sense that we can’t be certain of any particular interpretation of our conscious experience, including whether or not there is an ego or a self (which Nietzsche actually describes as a childish superstition similar to the idea of a soul), nor can we make any certain inferences about the world’s causal relations stemming from that conscious experience.  Regardless of these limitations, we can still be sure that conscious experience exists, even if it can’t be ascribed to an “I” or a “self” or any particular identity (let alone a persistent identity).

Once we’re cognizant of this certainty, and if we’re able to crawl out of the well of solipsism and eventually build up a theory about reality (for pragmatic reasons at the very least), then we must remain aware of the priority of consciousness in any resultant theory we construct about reality, with regard to its structure or any of its other properties.  Personally, I believe that some form of naturalistic physicalism (a realistic physicalism) is the best candidate for an ontological theory that is the most parsimonious and explanatory for all that we experience in our reality.  However, most people that make the move to adopt some brand of physicalism seem to throw the baby out with the bathwater (so to speak), whereby consciousness gets eliminated by assuming it’s an illusion or that it’s not physical (therefore having no room for it in a physicalist theory, aside from its neurophysiological attributes).

Although I used to feel differently about consciousness (and its relationship to physicalism), where I thought it was plausible for it to be some kind of an illusion, upon further reflection I’ve come to realize that this was a rather ridiculous position to hold.  Consciousness can’t be an illusion in the proper sense of the word, because the experience of consciousness is real.  Even if I’m hallucinating, such that my perceptions don’t directly correspond with the actual incoming sensory information transduced through my body’s sensory receptors, we can only say that the perceptions are illusory insofar as they don’t directly map onto that incoming sensory information.  But I still can’t say that having these experiences is itself an illusion.  And this is because consciousness is self-evident and experiential in that it constitutes whatever is experienced, no matter what that experience consists of.

As for my thoughts on physicalism, I came to realize that positing consciousness as an intrinsic property of at least some kinds of physical material (analogous to a property like mass) allows us to avoid having to call consciousness non-physical.  If it is simply an experiential property of matter, that doesn’t negate its being a physical property of that matter.  It may be that we can’t access this property in such a way as to evaluate it with external instrumentation, like we can for all the other properties of matter that we know of such as mass, charge, spin, or what-have-you, but that doesn’t mean an experiential property should be off limits for any physicalist theory.  It’s just that most physicalists assume that everything can or has to be reducible to the externally accessible properties that our instrumentation can measure.  And this suggests that they’ve simply assumed that the physical can only include externally accessible properties of matter, rather than both internally and externally accessible properties of matter.

Now it’s easy to see why science might push philosophy in this direction, because its methodology is largely grounded on third-party verification and a form of objectivity involving the ability to accurately quantify everything about a phenomenon with little or no regard for introspection or subjectivity.  And I think that this has caused many a philosopher to paint themselves into a corner by assuming that any ontological theory underlying the totality of our reality must be constrained in the same way that the physical sciences are.  To see why this is an unwarranted assumption, let’s consider a “black box” that can only be evaluated externally by a certain method.  It would be fallacious to conclude that, just because we are unable to access the inside of the box, the box must therefore be empty inside, or that there can’t be anything substantially different inside the box compared to what is outside it.

We can analogize this limitation of studying consciousness with our ability to study black holes within the field of astrophysics, where we’ve come to realize that accessing any information about their interior (aside from how much mass there is) is impossible to do from the outside.  And if we managed to access this information (if there is any) from the inside by leaping past its outer event horizon, it would be impossible for us to escape and share any of that information.  The best we can do is to learn what we can from the outside behavior of the black hole in terms of its interaction with surrounding matter and light and infer something about the inside, like how much matter it contains (e.g. we can infer the mass of a black hole from its outer surface area).  And we can learn a little bit more by considering what is needed to create or destroy a black hole, thus creating or destroying any interior qualities that may or may not exist.
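To make that parenthetical inference concrete, here is a minimal sketch for the simplest case, a non-rotating (Schwarzschild) black hole, whose horizon area is fixed entirely by its mass; the constants are standard SI values, and the point is only that the single externally accessible quantity (the area) encodes the single internal quantity we can recover (the mass):

```python
import math

G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
C = 2.998e8        # speed of light (m/s)
M_SUN = 1.989e30   # solar mass (kg)

def horizon_area(mass_kg):
    """Event-horizon surface area of a non-rotating black hole."""
    r_s = 2 * G * mass_kg / C**2          # Schwarzschild radius
    return 4 * math.pi * r_s**2

def mass_from_area(area_m2):
    """Invert the relation: infer the mass from the horizon area alone."""
    r_s = math.sqrt(area_m2 / (4 * math.pi))
    return r_s * C**2 / (2 * G)

# Observing only the "outside" (the horizon area) recovers the mass.
area = horizon_area(10 * M_SUN)
print(mass_from_area(area) / M_SUN)   # ≈ 10.0 solar masses
```

By analogy, the only internal facts the outside hands us are the ones encoded in external behavior; any other interior qualities remain inaccessible to this kind of measurement.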

A black hole can only form from certain configurations of matter, particularly aggregates that are above a certain mass and density.  And it can only be destroyed by starving it to death: deprived of any new matter, it will slowly evaporate entirely into Hawking radiation, destroying anything that was on the inside in the process.  So we can infer that any internal qualities it does have, however inaccessible they may be, can be brought into and out of existence through certain physical processes.

Similarly, we can infer some things about consciousness by observing one’s external behavior, including inferring some conditions that can create, modify, or destroy that type of consciousness, but we are unable to know what it’s like to be on the inside of that system once it exists.  We’re only able to know about the inside of our own conscious system, where we are in some sense inside our own black hole, with nobody else able to access this perspective.  And I think it is easy enough to imagine that certain configurations of matter simply have an intrinsic, externally inaccessible experiential property, just as certain configurations of matter lead to the creation of a black hole with its own externally inaccessible and qualitatively unknown internal properties.  Despite the fact that we can’t access the black hole’s interior with a strictly external method to determine its internal properties, this doesn’t mean we should assume that whatever properties may exist inside it are therefore fundamentally non-physical, just as we wouldn’t consider alternate dimensions (such as those predicted in M-theory/string theory) that we can’t physically access to be non-physical.  Perhaps one or more of these inaccessible dimensions (if they exist) is what accounts for an intrinsic experiential property within matter; this is entirely speculative and needn’t be true for the previous points to hold, but it’s an interesting thought nevertheless.

Here’s a relevant quote from the philosopher Galen Strawson, where he outlines what physicalism actually entails:

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an ‘inference to the best explanation’… Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. ‘And were the inmost essence of things laid open to us’ I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism… So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of ‘substance dualism’… Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?
While I don’t believe that all matter has a mind per se, because a mind is generally conceived as being a complex information processing structure, I think it is likely that all matter has an experiential quality of some kind, even if most instantiations of it are entirely unrecognizable as what we’d generally consider to be “consciousness” or “mentality”.  I believe that the intuitive gap here is born from the fact that the only minds we are confident exist are those instantiated by a brain, which has the ability to make an incredibly large number of experiential discriminations between various causal relations, thus giving it a capacity that is incredibly complex and not observed anywhere else in nature.  On the other hand, a particle of matter on its own would be hypothesized to have the capacity to make only one or two of these kinds of discriminations, making it unintelligent and thus incapable of brain-like activity.  Once a person accepts that an experiential quality can come in varying degrees, from, say, one experiential “bit” to billions or trillions of “bits” (or more), then we can plausibly see how matter could give rise to systems that have a vast range in their causal power, from rocks that don’t appear to do much at all, to living organisms that have the ability to store countless representations of causal relations (memory) allowing them to behave in increasingly complex ways.  And perhaps best of all, this approach solves the mind-body problem by eliminating the mystery of how fundamentally non-experiential stuff could possibly give rise to experientiality and consciousness.

It’s Time For Some Philosophical Investigations

Ludwig Wittgenstein’s Philosophical Investigations is a nice piece of work where he attempts to explain his views on language and the consequences of this view on various subjects like logic, semantics, cognition, and psychology.  I’ve mentioned some of his views very briefly in a couple of earlier posts, but I wanted to delve into his work in a little more depth here and comment on what strikes me as most interesting.  Lately, I’ve been looking back at some of the books I’ve read from various philosophers and have been wanting to revisit them so I can explore them in more detail and share how they connect to some of my own thoughts.  Alright…let’s begin.

Language, Meaning, & Their Probabilistic Attributes

He opens his Philosophical Investigations with a quote from St. Augustine’s Confessions that describes how a person learns a language.  St. Augustine believed that this process involved simply learning the names of objects (for example, by someone else pointing to the objects that are so named) and then stringing them together into sentences, and Wittgenstein points out that this is true to some trivial degree but it overlooks a much more fundamental relationship between language and the world.  For Wittgenstein, the meaning of words cannot simply be attached to an object like a name can.  The meaning of a word or concept has much more of a fuzzy boundary as it depends on a breadth of context or associations with other concepts.  He analogizes this plurality in the meanings of words with the relationship between members of a family.  While there may be some resemblance between different uses of a word or concept, we aren’t able to formulate a strict definition to fully describe this resemblance.

One problem then, especially within philosophy, is that many people assume that the meaning of a word or concept is fixed with sharp boundaries (just like the fixed structure of words themselves).  Wittgenstein wants to disabuse people of this false notion (much as Nietzsche tried to do before him) so that they can stop misusing language, as he believed that this misuse was the cause of many (if not all) of the major problems that had cropped up in philosophy over the centuries, particularly in metaphysics.  Since meaning is actually somewhat fluid and can’t be accounted for by any fixed structure, Wittgenstein thinks that any meaning that we can attach to these words is ultimately going to be determined by how those words are used.  Since he ties meaning with use, and since this use is something occurring in our social forms of life, it has an inextricably external character.  Thus, the only way to determine if someone else has a particular understanding of a word or concept is through their behavior, in response to or in association with the use of the word(s) in question.  This is especially important in the case of ambiguous sentences, which Wittgenstein explores to some degree.

Probabilistic Shared Understanding

Some of what Wittgenstein is trying to point out here are what I like to refer to as the inherently probabilistic attributes of language.  And it seems to me to be probabilistic for a few different reasons, beyond what Wittgenstein seems to address.  First, there is no guarantee that what one person means by a word or concept exactly matches the meaning from another person’s point of view, but at the very least there is almost always going to be some amount of semantic overlap (and possibly 100% in some cases) between the two individuals’ intended meanings, and so there is going to be some probability that the speaker and the listener do in fact share a complete understanding.  It seems reasonable to argue that simpler concepts will have a higher probability of complete semantic overlap whereas more complex concepts are more likely to leave a gap in that shared understanding.  And I think this is true even if we can’t actually calculate what any of these probabilities are.

Now my use of the word meaning here differs from Wittgenstein’s because I am referring to something that is not exclusively shared by all parties involved and I am pointing to something that is internal (a subjective understanding of a word) rather than external (the shared use of a word).  But I think this move is necessary if we are to capture all of the attributes that people explicitly or implicitly refer to with a concept like meaning.  It seems better to compromise with Wittgenstein’s thinking and refer to the meaning of a word as a form of understanding that is intimately connected with its use, but which involves elements that are not exclusively external.

We can justify this version of meaning through an example.  If I help teach you how to ride a bike and explain that this activity is called biking or to bike, then we can use Wittgenstein’s conception of meaning and it will likely account for our shared understanding, and so I have no qualms about that, and I’d agree with Wittgenstein that this is perhaps the most important function of language.  But it may be the case that you have an understanding that biking is an activity that can only happen on Tuesdays, because that happened to be the day that I helped teach you how to ride a bike.  Though I never intended for you to understand biking in this way, there was no immediate way for me to infer that you had this misunderstanding on the day I was teaching you this.  I could only learn of this fact if you explicitly explained to me your understanding of the term with enough detail, or if I had asked you additional questions like whether or not you’d like to bike on a Wednesday (for example), with you answering “I don’t know how to do that as that doesn’t make any sense to me.”  Wittgenstein doesn’t account for this gap in understanding in his conception of meaning and I think this makes for a far less useful conception.

Now I think that Wittgenstein is still right in the sense that the only way to determine someone else’s understanding (or lack thereof) of a word is through their behavior, but due to chance as well as where our attention is directed at any given moment, we may never see the right kinds of behavior to rule out any number of possible misunderstandings, and so we’re apt to just assume that these misunderstandings don’t exist because language hides them to varying degrees.  But they can and in some cases do exist, and this is why I prefer a conception of meaning that takes these misunderstandings into account.  So I think it’s useful to see language as probabilistic in the sense that there is some probability of a complete shared understanding underlying the use of a word, and thus there is a converse probability of some degree of misunderstanding.

Language & Meaning as Probabilistic Associations Between Causal Relations

A second way of language being probabilistic is due to the fact that the unique meanings associated with any particular use of a word or concept, as understood by an individual, are derived from probabilistic associations between various inferred causal relations.  And I believe that this is the underlying cause of most of the problems that Wittgenstein was trying to address in this book.  He may not have been thinking about the problem in this way, but it can account for the fuzzy boundary problem associated with the task of trying to define the meaning of words: this probabilistic structure underlies our experiences, our understanding of the world, and our use of language, and so it can't be represented by static, sharp definitions.  When a person is learning a concept, like redness, they undergo a number of experiences, infer what is common to all of them, and then extract a particular subset of what is common and associate it with a word like red or redness (as opposed to another commonality like objectness, or roundness, or what-have-you).  But in most cases, separate experiences of redness are going to be different instantiations of redness with different hues, textures, shapes, etc., which means that redness gets associated with a large range of different qualia.

If you come across a new quale that seems to more closely match previous experiences associated with redness rather than orangeness (for example), I would argue that this is because the brain has assigned a higher probability to that quale being an instance of redness as opposed to, say, orangeness.  And the brain may very well test a hypothesis of the quale matching the concept of redness versus the concept of orangeness, and depending on your previous experiences of both, and the present context of the experience, your brain may assign a higher probability to orangeness instead.  Perhaps if a red-orange colored object is mixed in with a bunch of unambiguously orange-colored objects, it will be perceived as a shade of orange (to match the rest of the set), but if the case were reversed and it were mixed in with a bunch of unambiguously red-colored objects, it will be perceived as a shade of red instead.
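We can sketch this context-dependent hypothesis testing as a toy Bayesian classifier.  To be clear, the hue values, Gaussian likelihoods, and context-dependent priors below are all hypothetical choices of mine, meant only to illustrate the idea of context tipping the balance between two competing perceptual hypotheses:

```python
import math

# Toy Bayesian sketch: classify an ambiguous hue as "red" or "orange",
# where the prior probability of redness shifts with the surrounding scene.
def classify_hue(hue, context_prior_red):
    # Likelihoods modeled as Gaussians centered on hypothetical prototype
    # hues: 0.0 for red, 30.0 for orange (arbitrary units).
    def gaussian(x, mu, sigma=12.0):
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

    # Posterior (unnormalized) = likelihood * prior, for each hypothesis
    p_red = gaussian(hue, 0.0) * context_prior_red
    p_orange = gaussian(hue, 30.0) * (1 - context_prior_red)
    return "red" if p_red > p_orange else "orange"

ambiguous = 15.0  # a red-orange hue, equally far from both prototypes

# Surrounded by red objects, the prior favors red; surrounded by orange
# objects, it favors orange -- and the same sensory input flips category.
print(classify_hue(ambiguous, context_prior_red=0.8))  # red
print(classify_hue(ambiguous, context_prior_red=0.2))  # orange
```

The key point is that the incoming "sensory" value never changes between the two calls; only the prior does, which is exactly the kind of unintentional, expectation-driven shift in perception described above.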

Since our perception of the world depends on context, the meanings we assign to words or concepts also depend on context, and not only in the sense of choosing a different use of a word (as Wittgenstein argues) in some language game or other, but also in the sense of perceiving the same incoming sensory information as conceptually different qualia (as in the aforementioned case of a red-orange colored object).  In that case, we weren't intentionally using red or orange in a different way, but rather were assigning one word or the other to the exact same sensory information (with respect to the red-orange object), depending on what else was happening in the scene surrounding that subset of sensory information.  To me, this highlights how meaning can be fluid in multiple ways, some of that fluidity stemming from our conscious intentions and some of it from unintentional forces at play, involving our prior expectations within some context, which directly modify our perceived experience.

This can also be seen through Wittgenstein's example of what he calls a duckrabbit, an ambiguous image that can be perceived as either a duck or a rabbit.  I've taken the liberty of inserting this image here along with a copy of it which has been rotated in order to more strongly evoke the perception of a rabbit.  The first image no doubt looks more like a duck and the second image, more like a rabbit.

Now Wittgenstein says that when one is looking at the duckrabbit and sees a rabbit, they aren't interpreting the picture as a rabbit but are simply reporting what they see.  But in the case where a person sees a duck first and then later sees a rabbit, Wittgenstein isn't sure what to make of this.  However, he claims to be sure that whatever it is, it can't be the case that the external world stays the same while an internal cognitive change takes place.  Wittgenstein was incorrect on this point because the external world doesn't change (in any relevant sense) despite our seeing the duck or seeing the rabbit.  Furthermore, he never demonstrates why two different perceptions would require a change in the external world.  The fact of the matter is, you can stare at this static picture and ask yourself to see a duck or to see a rabbit and it will affect your perception accordingly.  This is partially accomplished by you mentally rotating the image in your imagination and seeing if that changes how well it matches one conception or the other, and since it matches both conceptions to a high degree, you can easily perceive it one way or the other.  Your brain is simply processing competing hypotheses to account for the incoming sensory information, and the top-down predictions of rabbitness or duckness (which you've acquired over past experiences) actually change the way you perceive it with no change required in the external world (despite Wittgenstein's assertion to the contrary).

To give yet another illustration of the probabilistic nature of language, just imagine the head of a bald man and ask yourself, if you were to add one hair at a time to this bald man's head, at what point does he lose the property of baldness?  If hairs were slowly added at random, and you could simply say "Stop!  Now he's no longer bald!" at some particular time, there's no doubt in my mind that if this procedure were repeated (even if the hairs were added non-randomly), you would say "Stop!  Now he's no longer bald!" at a different point in the transition.  Similarly, if you were looking at a light that was changing color from red to orange, and were asked to say when the color had changed to orange, you would pick a point in the transition that falls within some margin of error, but it wouldn't be an exact, repeatable point in the transition.  We could do this thought experiment with all sorts of concepts that are attached to words, like cat and dog: for example, use a computer graphics program to seamlessly morph a picture of a cat into a picture of a dog and ask at what point the cat "turned into" a dog.  It's going to be based on a probability of coincident features that you detect, which can vary over time.  Here's a series of pictures showing a chimpanzee morphing into Bill Clinton to better illustrate this point:

At what point do we stop seeing a picture of a chimpanzee and start seeing a picture of something else?  When do we first see Bill Clinton?  What if I expanded this series of 15 images into a series of 1000 images so that the transition happened even more gradually?  You would be highly unlikely to pick the exact same point in the transition two times in a row if the images weren't numbered or arranged in a grid.  We can analogize this phenomenon with an ongoing problem in science, known as the species problem.  This problem can be described as the inherent difficulty of defining exactly what a species is, which is necessary if one wants to determine if and when one species evolves into another.  This problem occurs because the evolutionary processes giving rise to new species are relatively slow and continuous, whereas sorting those organisms into sharply defined categories involves eliminating that generational continuity and replacing it with discrete steps.
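The trial-to-trial variability of such threshold judgments can be sketched in a few lines.  This is my own toy model (nothing from the text): each frame of a gradual morph carries proportionally more evidence for the new category, the observer's judgment adds a bit of noise, and the reported "switch point" is wherever the noisy evidence first crosses a half-way criterion:

```python
import random

# Toy model of a noisy category-switch judgment over a gradual morph.
# Frame i of n_frames carries evidence i/n_frames for the new category;
# the observer reports a switch once noisy evidence exceeds 0.5.
def switch_point(n_frames=1000, noise=0.08, seed=None):
    rng = random.Random(seed)
    for i in range(1, n_frames + 1):
        evidence = i / n_frames + rng.gauss(0, noise)
        if evidence > 0.5:
            return i
    return n_frames

# Repeat the "experiment" ten times with different noise samples:
points = [switch_point(seed=s) for s in range(10)]
print(points)  # the reported switch frame differs from trial to trial
```

Even though the underlying morph is identical on every trial, the reported boundary wanders, which is the same non-repeatability described for the bald man and the red-to-orange light.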

And we can see this effect in the series of images above, where each picture could represent some large number of generations in an evolutionary timeline, where each picture/organism looks roughly like the "parent" or "child" of the picture/organism that is adjacent to it.  Despite this continuity, if we look at the first picture and the last one, they look like pictures of distinct species.  So if we want to categorize the first and last picture as distinct species, then we create a problem when trying to account for every picture/organism that lies in between that transition.  Similarly, words take on the appearance of strict categorization (of meaning) when, in actuality, the underlying meaning attached to them is probabilistic and dynamic.  And as Wittgenstein pointed out, this makes it more appropriate to consider meaning as use so that the probabilistic and dynamic attributes of meaning aren't lost.

Now you may think you can get around this problem of fluidity or fuzzy boundaries with concepts that are simpler and more abstract, like the concept of a particular quantity (say, a quantity of four objects) or other concepts in mathematics.  But in order to learn these concepts in the first place, like quantity, and then associate particular instances of it with a word, like four, one had to be presented with a number of experiences and infer what was common to all of those experiences (as was the case with redness mentioned earlier).  And this inference (I would argue) involves a probabilistic process as well; it's just that the resulting probability of our inferring particular experiences as instances of four objects is incredibly high, making the inference repeatable and relatively unambiguous.  Therefore that kind of inference is likely to be sustained no matter what the context, and it is likely to be shared by two individuals with 100% semantic overlap (i.e. it's almost certain that what I mean by four is exactly what you mean by four, even though this is almost certainly not the case for a concept like love or consciousness).  This makes mathematical concepts qualitatively different from other concepts (especially those that are more complex or that more closely map on to reality), but it doesn't negate their having a probabilistic attribute or foundation.

Looking at the Big Picture

Though this discussion of language and meaning is not an exhaustive analysis of Wittgenstein's Philosophical Investigations, it represents an analysis of the main theme present throughout.  His main point was to shed light on the disparity between how we often think of language and how we actually use it.  When we stray from the way it is actually used in our everyday lives, in one form of social life or another, and instead misuse it (as often happens in philosophy), we create all sorts of problems and arrive at unwarranted conclusions.  He also wants his readers to realize that the ultimate goal of philosophy should not be to construct metaphysical theories and deep explanations underlying everyday phenomena, since these are often born out of unwarranted generalizations and other assumptions stemming from how our grammar is structured.  Instead we ought to subdue our temptations to generalize and to be dogmatic, and instead use philosophy as a kind of therapeutic tool to keep our abstract thinking in check and to better understand ourselves and the world we live in.

Although I disagree with some of Wittgenstein’s claims about cognition (in terms of how intimately it is connected to the external world) and take some issue with his arguably less useful conception of meaning, he makes a lot of sense overall.  Wittgenstein was clearly hitting upon a real difference between the way actual causal relations in our experience are structured and how those relations are represented in language.  Personally, I think that work within philosophy is moving in the right direction if the contributions made therein lead us to make more successful predictions about the causal structure of the world.  And I believe this to be so even if this progress includes generalizations that may not be exactly right.  As long as we can structure our thinking to make more successful predictions, then we’re moving forward as far as I’m concerned.  In any case, I liked the book overall and thought that the interlocutory style gave the reader a nice break from the typical form seen in philosophical argumentation.  I highly recommend it!

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part VI)

This is the last post I'm going to write for this particular post-series on Predictive Processing (PP).  Here are the links to parts 1, 2, 3, 4, and 5.  I've already explored a bit on how a PP framework can account for folk psychological concepts like beliefs, desires, and emotions, how it accounts for action, language and ontology, knowledge, and also perception, imagination, and reasoning.  In this final post for this series, I'm going to explore consciousness itself and how some theories of consciousness fit very nicely within a PP framework.

Consciousness as Prediction (Predicting The Self)

Earlier in this post-series I explored how PP treats perception (whether online or an offline form like imagination) as simply predictions pertaining to incoming sensory information, with varying degrees of precision weighting assigned to the resulting prediction error.  In a sense then, consciousness just is prediction.  At the very least, it is a subset of the predictions, likely those that are higher up in the predictive hierarchy.  This is all going to depend on which aspect or level of consciousness we are trying to explain, and as philosophers and cognitive scientists well know, consciousness is difficult to pin down and define in any precise way.

By consciousness, we could mean any kind of awareness at all, or we could limit this term to only apply to a system that is aware of itself.  Either way we have to be careful here if we’re looking to distinguish between consciousness generally speaking (consciousness in any form, which may be unrecognizable to us) and the unique kind of consciousness that we as human beings experience.  If an ant is conscious, it doesn’t likely have any of the richness that we have in our experience nor is it likely to have self-awareness like we do (even though a dolphin, which has a large neocortex and prefrontal cortex, is far more likely to).  So we have to keep these different levels of consciousness in mind in order to properly assess their being explained by any cognitive framework.

Looking through a PP lens, we can see that what we come to know about the world and our own bodily states is a matter of predictive models pertaining to various inferred causal relations.  These inferred causal relations ultimately stem from bottom-up sensory input.  But when this information is abstracted at higher and higher levels, eventually one can (in principle) get to a point where those higher level models begin to predict the existence of a unified prediction engine.  In other words, a subset of the highest-level predictive models may eventually predict itself as a self.  We might describe this process as the emergence of some level of self-awareness, even if higher levels of self-awareness aren’t possible unless particular kinds of higher level models have been generated.

What kinds of predictions might be involved with this kind of emergence?  Well, we might expect that predictions pertaining to our own autobiographical history, which is largely composed of episodic memories of our past experience, would contribute to this process (e.g. “I remember when I went to that amusement park with my friend Mary, and we both vomited!”).  If we begin to infer what is common or continuous between those memories of past experiences (even if only 1 second in the past), we may discover that there is a form of psychological continuity or identity present.  And if this psychological continuity (this type of causal relation) is coincident with an incredibly stable set of predictions pertaining to (especially internal) bodily states, then an embodied subject or self can plausibly emerge from it.

This emergence of an embodied self is also likely fueled by predictions pertaining to other objects that we infer existing as subjects.  For instance, in order to develop a theory of mind about other people, that is, in order to predict how other people will behave, we can’t simply model their external behavior as we can for something like a rock falling down a hill.  This can work up to a point, but eventually it’s just not good enough as behaviors become more complex.  Animal behavior, most especially that of humans, is far more complex than that of inanimate objects and as such it is going to be far more effective to infer some internal hidden causes for that behavior.  Our predictions would work well if they involved some kind of internal intentionality and goal-directedness operating within any animate object, thereby transforming that object into a subject.  This should be no less true for how we model our own behavior.

If this object that we’ve now inferred to be a subject seems to behave in ways that we see ourselves as behaving (especially if it’s another human being), then we can begin to infer some kind of equivalence despite being separate subjects.  We can begin to infer that they too have beliefs, desires, and emotions, and thus that they have an internal perspective that we can’t directly access just as they aren’t able to access ours.  And we can also see ourselves from a different perspective based on how we see those other subjects from an external perspective.  Since I can’t easily see myself from an external perspective, when I look at others and infer that we are similar kinds of beings, then I can begin to see myself as having both an internal and external side.  I can begin to infer that others see me similar to the way that I see them, thus further adding to my concept of self and the boundaries that define that self.

Multiple meta-cognitive predictions can be inferred based on all of these interactions with others, and from the introspective interactions with our brain’s own models.  Once this happens, a cognitive agent like ourselves may begin to think about thinking and think about being a thinking being, and so on and so forth.  All of these cognitive moves would seem to provide varying degrees of self-hood or self-awareness.  And these can all be thought of as the brain’s best guesses that account for its own behavior.  Either way, it seems that the level of consciousness that is intrinsic to an agent’s experience is going to be dependent on what kinds of higher level models and meta-models are operating within the agent.

Consciousness as Integrated Information

One prominent theory of consciousness is the Integrated Information Theory of Consciousness, otherwise known as IIT.  This theory, initially formulated by Giulio Tononi back in 2004 and which has undergone development ever since, posits that consciousness is ultimately dependent on the degree of information integration that is inherent in the causal properties of some system.  Another way of saying this is that the causal system specified is unified such that every part of the system must be able to affect and be affected by the rest of the system.  If you were to physically isolate one part of a system from the rest of it (and if this part was the least significant to the rest of the system), then the resulting change in the cause-effect structure of the system would quantify the degree of integration.  A large change in the cause-effect structure based on this part’s isolation from the system (that is, by having introduced what is called a minimum partition to the system) would imply a high degree of information integration and vice versa.  And again, a high degree of integration implies a high degree of consciousness.
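To get an intuition for the minimum-partition idea, here's a very crude toy of my own devising.  This is emphatically not Tononi's actual Φ measure (which is defined over probability distributions of cause-effect repertoires); it just counts how many global state transitions of a tiny two-element boolean network change when we sever one of its connections, as a stand-in for "change in the cause-effect structure":

```python
from itertools import product

# Toy two-node network: B always copies A, and A copies B unless the
# B->A feedback connection has been cut (the "partition").
def step(state, cut_feedback=False):
    a, b = state
    new_a = a if cut_feedback else b  # A copies B, unless B->A is severed
    new_b = a                         # B always copies A
    return (new_a, new_b)

# Map every possible global state to its successor, with and without the cut.
def behavior(cut):
    return {s: step(s, cut) for s in product([0, 1], repeat=2)}

intact = behavior(cut=False)
partitioned = behavior(cut=True)

# Count the states whose successor changes when the connection is severed:
divergence = sum(intact[s] != partitioned[s] for s in intact)
print(divergence)  # 2 of the 4 global states transition differently
```

On this toy measure, a larger divergence means the severed part mattered more to the whole, i.e. the system was more integrated; a purely feed-forward system would score zero for cutting its (nonexistent) feedback links.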

Notice how this information integration axiom in IIT posits that a cognitive system that is entirely feed-forward will not be conscious.  So if our brain processed incoming sensory information from the bottom up and there was no top-down generative model feeding downward through the system, then IIT would predict that our brain wouldn’t be able to produce consciousness.  PP on the other hand, posits a feedback system (as opposed to feed-forward) where the bottom-up sensory information that flows upward is met with a downward flow of top-down predictions trying to explain away that sensory information.  The brain’s predictions cause a change in the resulting prediction error, and this prediction error serves as feedback to modify the brain’s predictions.  Thus, a cognitive architecture like that suggested by PP is predicted to produce consciousness according to the most fundamental axiom of IIT.
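The feedback loop just described can be sketched in a few lines.  The numbers and the update rule here are illustrative only (a simplified gradient-style settling process, not a full predictive-processing implementation): a top-down prediction is repeatedly corrected by precision-weighted prediction error until it explains away the sensory input:

```python
# Minimal sketch of the PP feedback loop: bottom-up prediction error
# feeds back to revise the top-down prediction, weighted by precision.
def settle(sensory_input, prediction=0.0, precision=0.5, steps=20):
    for _ in range(steps):
        error = sensory_input - prediction   # bottom-up prediction error
        prediction += precision * error      # top-down model revision
    return prediction

print(round(settle(sensory_input=10.0), 3))  # converges toward 10.0
```

The point of the sketch is the circularity: the prediction shapes the error and the error reshapes the prediction, which is exactly the recurrent (non-feed-forward) architecture that IIT's axiom requires for consciousness.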

Additionally, PP posits cross-modal sensory features and functionality where the brain integrates (especially lower level) predictions, spanning various spatio-temporal scales and different sensory modalities, into a unified whole.  For example, if I am looking at and petting a black cat lying on my lap and hearing it purr, PP posits that my perceptual experience and contextual understanding of that experience are based on having integrated the visual, tactile, and auditory expectations that I've associated with such an experience of a "black cat".  It is going to be contingent on a conjunction of predictions occurring simultaneously in order to produce a unified experience, rather than a barrage of millions or billions of separate causal relations (let alone ones stemming from different sensory modalities), or millions or billions of separate conscious experiences (which would seem to necessitate separate consciousnesses if they were happening at the same time).

Evolution of Consciousness

Since IIT identifies consciousness with integrated information, it can plausibly account for why it evolved in the first place.  The basic idea here is that a brain that is capable of integrating information is more likely to exploit and understand an environment that has a complex causal structure on multiple time scales than a brain that has informationally isolated modules.  This idea has been tested and confirmed to some degree by artificial life simulations (animats) where adaptation and integration are both simulated.  The organism in these simulations was a Braitenberg-like vehicle that had to move through a maze.  After 60,000 generations of simulated brains evolving through natural selection, it was found that there was a monotonic relationship between their ability to get through the maze and the amount of simulated information integration in their brains.

This increase in adaptation was the result of an effective increase in the number of concepts that the organism could make use of given the limited number of elements and connections possible in its cognitive architecture.  In other words, given a limited number of connections in a causal system (such as a group of neurons), you can pack more functions per element if the level of integration with respect to those connections is high, thus giving an evolutionary advantage to those with higher integration.  Therefore, when all else is equal in terms of neural economy and resources, higher integration gives an organism the ability to take advantage of more regularities in their environment.

From a PP perspective, this makes perfect sense because the complex causal structure of the environment is described as being modeled at many different levels of abstraction and at many different spatio-temporal scales.  All of these modeled causal relations are also described as having a hierarchical structure with models contained within models, and with many associations existing between various models.  These associations between models can be accounted for by a cognitive architecture that re-uses certain sets of neurons in multiple models, so the association is effectively instantiated by some literal degree of neuronal overlap.  And of course, these associations between multiply-leveled predictions allows the brain to exploit (and create!) as many regularities in the environment as possible.  In short, both PP and IIT make a lot of practical sense from an evolutionary perspective.

That’s All Folks!

And this concludes my post-series on the Predictive Processing (PP) framework and how I see it as being applicable to a far broader account of mentality and brain function than is generally assumed.  If there are any takeaways from this post-series, I hope you can at least appreciate the parsimony and explanatory scope of predictive processing, and come to view the brain as a creative and highly capable prediction engine.