Irrational Man: An Analysis (Part 4, Chapter 11: The Place of the Furies)

In my last post in this series on William Barrett’s Irrational Man, I examined some of the work of the existential philosopher Jean-Paul Sartre, which concluded Part 3 of Barrett’s book.  The final chapter, Ch. 11: The Place of the Furies, will be briefly examined here, and this will conclude my eleven-part post series.  I’ve enjoyed this very much, but it’s time to move on to new areas of interest, so let’s begin.

1. The Crystal Palace Unmanned

“The fact is that a good dose of intellectualism-genuine intellectualism-would be a very helpful thing in American life.  But the essence of the existential protest is that rationalism can pervade a whole civilization, to the point where the individuals in that civilization do less and less thinking, and perhaps wind up doing none at all.  It can bring this about by dictating the fundamental ways and routines by which life itself moves.  Technology is one material incarnation of rationalism, since it derives from science; bureaucracy is another, since it aims at the rational control and ordering of social life; and the two-technology and bureaucracy-have come more and more to rule our lives.”

Regarding the importance and need for more intellectualism in our society, I think this can be better described as the need for more critical thinking skills and the need for people to be able to discern fact from fiction, to recognize their own cognitive biases, and to respect the adage that extraordinary claims require extraordinary evidence.  At the same time, in order to appreciate the existentialist’s concerns, we ought to recognize that there are aspects of human psychology including certain psychological needs that are inherently irrational, including with respect to how we structure our lives, how we express our emotions and creativity, how we maintain a balance in our psyche, etc.  But, since technology is not only a material incarnation of rationalism, but also an outlet for our creativity, there has to be a compromise here where we shouldn’t want to abandon technology, but simply to keep it in check such that it doesn’t impede our psychological health and our ability to live a fulfilling life.

“But it is not so much rationalism as abstractness that is the existentialists’ target; and the abstractness of life in this technological and bureaucratic age is now indeed something to reckon with.  The last gigantic step forward in the spread of technologism has been the development of mass art and mass media of communication: the machine no longer fabricates only material products; it also makes minds. (stereotypes, etc.).”

Sure enough, we’re living in a world where many of our occupations are but one of many layers of abstraction constituting our modern “machine of civilization”.  And the military-industrial complex that has taken over the modern world has certainly gone beyond the mass production of physical stuff to be consumed by the populace, and now includes the mass production and viral dissemination of memes as well.  Ideas can spread like viruses, and in our current globally interconnected world (which Barrett hadn’t yet seen to the same degree when writing this book), the spread of these ideas is much faster and more influential on culture than ever before.  The degree of indoctrination, and the perpetuated cycles of co-dependence between citizens and the corporatocratic, sociopolitical forces ruling our lives from above, have made our way of life and our thinking much more collective and less personal than at any other time in human history.

“Kierkegaard condemned the abstractness of his time, calling it an Age of Reflection, but what he seems chiefly to have had in mind was the abstractness of the professorial intellectual, seeing not real life but the reflection of it in his own mind.”

Aside from the increasingly abstract nature of modern living then, there’s also the abstractness that pervades our thinking about life, which detracts from our ability to actually experience life.  Kierkegaard had a legitimate point here, by pointing out the fact that theory cannot be a complete substitute for practice; that thought cannot be a complete substitute for action.  We certainly don’t want the reverse either, since action without sufficient forethought leads to foolishness and bad consequences.

I think the main point here is that we don’t want to miss out on living life by thinking about it too much.  While living a philosophically examined life is beneficial, it remains an open question exactly what balance is best for any particular individual to live the most fulfilling life.  In the meantime, we ought to simply recognize that there is the potential for an imbalance, and try our best to avoid it.

“To be rational is not the same as to be reasonable.  In my time I have heard the most hair-raising and crazy things from very rational men, advanced in a perfectly rational way; no insight or feelings had been used to check the reasoning at any point.”

If you ignore our biologically-grounded psychological traits, or ignore the fact that there’s a finite range of sociological conditions for achieving human psychological health and well-being, then you can’t develop any theory that’s supposed to apply to humans and expect it to be reasonable or tenable.  I would argue that ignoring this subjective part of ourselves when making theories that are supposed to guide our behavior in any way is irrational, at least within the greater context of aiming to have not only logically valid arguments but logically sound arguments as well.  But, if we’re going to exclude the necessity for logical soundness in our conception of rationality, then the point is well taken.  Rationality is a key asset in responsible decision making but it should be used in a way that relies on or seeks to rely on true premises, taking our psychology and sociology into account.

“The incident (making hydrogen bombs without questioning why) makes us suspect that, despite the increase in the rational ordering of life in modern times, men have not become the least bit more reasonable in the human sense of the word.  A perfect rationality might not even be incompatible with psychosis; it might, in fact, even lead to the latter.”

Again, I disagree with Barrett’s use or conception of rationality here, but semantics aside, his main point still stands.  I think that we’ve been primarily using our intelligence as a species to continue to amplify our own power and maximize our ability to manipulate the environment in any way we see fit.  But as our technological capacity advances, our cultural evolution is getting increasingly out of sync with our biological evolution, and we haven’t been anywhere close to sufficient in taking our psychological needs and limitations into account as we continue to engineer the world of tomorrow.

What we need is rationality that relies on true or at least probable premises, as this combination should actually lead a person to what is reasonable or likely to be reasonable.  I have no doubt that without making use of a virtue such as reasonableness, rationality can become dangerous and destructive, but this is the case with every tool we use whether it’s rationality or other capacities both mental and material; tools can always be harmful when misused.

“If, as the Existentialists hold, an authentic life is not handed to us on a platter but involves our own act of self-determination (self-finitization) within our time and place, then we have got to know and face up to that time, both in its (unique) threats and its promises.”

And rationality, at the individual level, should be used to help reach this goal of self-finitization, so that we can balance the benefits of the collective with the freedom of each individual that makes up that collective.

“I for one am personally convinced that man will not take his next great step forward until he has drained to the lees the bitter cup of his own powerlessness.”

And when a certain kind of change is perceived as something to be avoided, the only way to go through with it is by perceiving the status quo as something that needs to be avoided even more, so that a foray out of our comfort zone is perceived as an actual improvement to our way of life.  But as long as we see ourselves as already all-powerful and masters over our domain, we won’t take any major leap in a new direction of self and collective improvement.  We need to come to terms with our current position in the modern world so that we can truly see what needs repair.  Whether or not we succumb to one or both of the two major existential threats facing our species, climate change and nuclear war, is contingent on whether or not we set our eyes on a new prize.

“Sartre recounts a conversation he had with an American while visiting in this country.  The American insisted that all international problems could be solved if men would just get together and be rational; Sartre disagreed and after a while discussion between them became impossible.  “I believe in the existence of evil,” says Sartre, “and he does not.” What the American has not yet become aware of is the shadow that surrounds all human Enlightenment.”

Once again, if rationality is accompanied by true premises that take our psychology into account, then international problems could be solved (or many of them at least), but Sartre is also right insofar as there are bad ideas that exist, and people that have cultivated their lives around them.  It’s not enough to have people thinking logically, nor is some kind of rational idealism up to the task of dealing with human emotion, cognitive biases, psychopathy, and other complications in human behavior that exist.

The crux of the matter is that some ideas hurt us and other ideas help us, with our perception of these effects being the main arbiter driving our conception of what is good and evil; but there’s also a disparity between what people think is good or bad for them and what is actually good or bad for them.  I think that one could very plausibly argue that if people really knew what was good or bad for them, then applying rationality (with true or plausible premises) would likely work to solve a number of issues plaguing the world at large.

“…echoing the Enlightenment’s optimistic assumption that, since man is a rational animal, the only obstacles to his fulfillment must be objective and social ones.”

And here’s an assumption that’s certainly difficult to ground, since it’s based on a false premise, namely that humans are inherently rational.  We are unique in the animal kingdom in the sense that we are the only animal (or one of only a few animals) that has the capacity for rational thought, foresight, and the complex level of organization made possible by its use.  I also think that the obstacles to fulfillment are objective, since they can be described as facts pertaining to our psychology, sociology, and biology, even if our psychology (for example) is instantiated in a subjective way.  In other words, our subjectivity and conscious experiences are grounded on, or describable in, objective terms relating to how our particular brains function, how humans as a social species interact with one another, etc.  But fulfillment can never be reached, let alone maximized, without taking our psychological traits and idiosyncrasies into account, for these are the ultimate constraints on what can make us happy, satisfied, fulfilled, and so on.

“Behind the problem of politics, in the present age, lies the problem of man, and this is what makes all thinking about contemporary problems so thorny and difficult…anyone who wishes to meddle in politics today had better come to some prior conclusions as to what man is and what, in the end, human life is all about…The speeches of our politicians show no recognition of this; and yet in the hands of these men, on both sides of the Atlantic, lies the catastrophic power of atomic energy.”

And as of 2018, we’ve seen the Doomsday Clock reach two minutes to midnight, having inched one minute closer to our own destruction since 2017.  The dominance hierarchy, led and reinforced by the corporatocratic plutocracy, is locked into a narrow form of tunnel vision, hell-bent on maintaining if not exacerbating the wealth and power disparities that plague our country and the world as a whole, despite the fact that this is not sustainable in the long run, nor best for the fulfillment of those promoting it.

We the people do share a common goal of trying to live a good, fulfilling life; to have our basic needs met, and to have no fewer rights than anybody else in society.  You’d hardly know that this common ground exists between us when looking at the state of our political sphere, which is likely as polarized now in the U.S. (if not more so) as it was even during the Civil War era.  Clearly, we have a lot of work to do to reevaluate what our goals ought to be and what our priorities ought to be, and we need a realistic and informed view of what it means to be human before any of these goals can be realized.

“Existentialism is the counter-Enlightenment come at last to philosophic expression; and it demonstrates beyond anything else that the ideology of the Enlightenment is thin, abstract, and therefore dangerous.”

Yes, but the Enlightenment has also been one of the main driving forces leading us out of theocracy, out of scientific illiteracy, and toward an appreciation of reason and evidence (something the U.S., at least, is in short supply of these days), and thus it has been crucial in giving us the means for increasing our standard of living and solving many of our problems.  While the technological advancements derived from the Enlightenment have also contributed to many of our problems, the current existential threats we face, including climate change and nuclear war, are more likely to be solved by new technologies than by an abolition of technology or of the Enlightenment brand of thinking that led to technological progress.  We simply need to better inform our technological goals with the actual needs and constraints of human beings, our psychology, and so on.

“The finitude of man, as established by Heidegger, is perhaps the death blow to the ideology of the Enlightenment, for to recognize this finitude is to acknowledge that man will always exist in untruth as well as truth.  Utopians who still look forward to a future when all shadows will be dispersed and mankind will dwell in a resplendent Crystal Palace will find this recognition disheartening.  But on second thought, it may not be such a bad thing to free ourselves once and for all from the worship of the idol of progress; for utopianism-whether the brand of Marx or Nietzsche-by locating the meaning of man in the future leaves human beings here and now, as well as all mankind up to this point, without their own meaning.  If man is to be given meaning, the Existentialists have shown us, it must be here and now; and to think this insight through is to recast the whole tradition of Western thought.”

And we ought to take our cue from Heidegger, at the very least, to admit that we are finite, and that our knowledge is limited and always will be.  We will not be able to solve every problem, and we would serve ourselves and the world far better if we admitted our own limitations.  But to avoid being overly cynical and simply damning progress altogether, we need to look for new ways of solving our existential problems.  Part of the solution that I see for humanity moving forward is going to be a combination of advancements in a few different fields.

By making better use of genetic engineering, we’ll one day have the ability to change ourselves in remarkable ways in order to become better adapted to our current world.  We will be able to re-sync our biological evolution with our cultural evolution so we no longer feel uprooted, like a fish out of water.  Continuing research in neuroscience will allow us to learn more about how our brains function and how to optimize that functioning.  Finally, the strides we make in computing and artificial intelligence should allow us to vastly improve our simulation power and arm us with greater intelligence for solving all the problems that we face.

Overall, I don’t see progress as the enemy, but rather that we have an alignment problem between our current measures of progress, and what will actually lead to maximally fulfilling lives.

“The realization that all human truth must not only shine against an enveloping darkness, but that such truth is even shot through with its own darkness may be depressing, and not only to utopians…But it has the virtue of restoring to man his sense of the primal mystery surrounding all things, a sense of mystery from which the glittering world of his technology estranges him, but without which he is not truly human.”

And if we actually use technology to change who we are as human beings, by altering the course of natural selection and our ongoing evolution (which is bound to happen with or without our influence, for better or worse), then it’s difficult to say what the future really holds for us.  There are cultural and technological forces that are leading to transhumanism, and this may mean that one day “human beings” (or whatever name is chosen for the new species that replaces us) will be inherently rational, or whatever we’d like our future species to be.  We’ve stumbled upon the power to change our very nature, and so it’s far too naive, simplistic, unimaginative, and short-sighted to say that humans will “always” or “never” be one way or another.  Even if this were true, it wouldn’t negate the fact that one day modern humans will be replaced by a superseding species which has different qualities than what we have now.

2. The Furies

“…Existentialism, as we have seen, seeks to bring the whole man-the concrete individual in the whole context of his everyday life, and in his total mystery and questionableness-into philosophy.”

This is certainly an admirable goal to combat simplistic abstractions of what it means to be human.  We shouldn’t expect to be able to abstract a certain capacity of human beings (such as rationality), consider it in isolation (no consideration of context), formulate a theory around that capacity, and then expect to get a result that is applicable to human beings as they actually exist in the world.  Furthermore, all of the uncertainties and complexities in our human experiences, no matter how difficult they may be to define or describe, should be given their due consideration in any comprehensive philosophical system.

“In modern philosophy particularly (philosophy since Descartes), man has figured almost exclusively as an epistemological subject-as an intellect that registers sense-data, makes propositions, reasons, and seeks the certainty of intellectual knowledge, but not as the man underneath all this, who is born, suffers, and dies…But the whole man is not whole without such unpleasant things as death, anxiety, guilt, fear and trembling, and despair, even though journalists and the populace have shown what they think of these things by labeling any philosophy that looks at such aspects of human life as “gloomy” or “merely a mood of despair.”  We are still so rooted in the Enlightenment-or uprooted in it-that these unpleasant aspects of life are like Furies for us: hostile forces from which we would escape (the easiest way is to deny that the Furies exist).”

My take on all of this is simply that multiple descriptions of human existence are needed to account for all of our experiences, thoughts, values, and behavior.  And it is what we value given our particular subjectivity that needs to be primary in these descriptions, and primary with respect to how we choose to engineer the world we live in.  Who we are as a species is a complex mixture of good and bad, lightness and darkness, and stability and chaos; and we shouldn’t deny any of these attributes nor repress them simply because they make us uncomfortable.  Instead, we would do much better to face who we are head on, and then try to make our lives better while taking our limitations into account.

“We are the children of an enlightenment, one which we would like to preserve; but we can do so only by making a pact with the old goddesses.  The centuries-long evolution of human reason is one of man’s greatest triumphs, but it is still in process, still incomplete, still to be.  Contrary to the rationalist tradition, we now know that it is not his reason that makes man man, but rather that reason is a consequence of that which really makes him man.  For it is man’s existence as a self-transcending self that has forged and formed reason as one of its projects.”

This is well said, although I’d prefer to couch this in different terms: it is ultimately our capacity to imagine that makes us human and able to transcend our present selves by pointing toward a future self and a future world.  We do this in part by updating our models of the world, simulating new worlds, and trying to better understand the world we live in by engaging with it, re-shaping it, and finding ways of better predicting its causal structure.  Reason and rationality have been integral in updating our models of the world, but they’ve also been hijacked to some degree by a kind of supernormal stimuli, reinforced by technology and by our living in a world that is entirely alien to our evolutionary niche; as a result, they have largely dominated our lives in a way that overlooks our individualism, emotions, and feelings.

Abandoning reason and rationality is certainly not the answer to this existential problem; rather, we just need to ensure that they’re being applied in a way that aligns with our moral imperatives and that they’re ultimately grounded on our subjective human psychology.

Irrational Man: An Analysis (Part 3, Chapter 9: Heidegger)

In my previous post of this series on William Barrett’s Irrational Man, I explored some of Nietzsche’s philosophy in more detail.  Now we’ll be taking a look at the work of Martin Heidegger.

Heidegger was perhaps the most complex of the existentialist philosophers: his work is often interpreted in a number of different ways, and he develops a lot of terminology and concepts that can sometimes be difficult to grasp.  There’s also a fundamental limitation to fully engaging with the concepts he uses unless one is fluent in German, due to what is otherwise lost in translation to English.

Phenomenology, or the study of the structures of our conscious experience, was a central part of Heidegger’s work and strongly influenced his distinction between what he called “calculating thought” and “meditating thought”; Barrett alludes to this distinction when he first mentions Heidegger’s feelings on thought and reason.  Heidegger says:

“Thinking only begins at the point where we have come to know that Reason, glorified for centuries, is the most obstinate adversary of thinking.”

Now at first one might be confused by the assertion that reason is somehow against thinking, but if one understands that Heidegger is only making reference to one of the two types of thinking outlined above, then we can begin to make sense of what he’s saying.  Calculating thought is what he has in mind here: the kind of thinking that we use all the time for planning and investigating in order to accomplish some specific purpose and achieve our goals.  Meditative thought, on the other hand, involves opening up one’s mind so that one isn’t consumed by a single perspective of an idea; it requires us not to limit our thinking to only one category of ideas, and to at least temporarily free our thinking from the kind of unity that makes everything seemingly fit together.

Meaning is more or less hidden behind calculating thought, and thus meditative thought is needed to help uncover that meaning.  Calculative thinking also involves a lot of external input from society and the cultural or technological constructs that our lives are embedded in, whereas meditative thinking is internal, prioritizing the self over the collective and creating a self-derived form of truth and meaning rather than one that is externally imposed on us.  By thinking about reason’s relation to what Heidegger is calling meditative thought in particular, we can understand why Heidegger would treat reason as an adversary to what he sees as a far more valuable form of “thinking”.

It would be mistaken however to interpret this as Heidegger being some kind of an irrationalist, because Heidegger still values thinking even though he redefines what kinds of thinking exist:

“Heidegger is not a rationalist, because reason operates by means of concepts, mental representations, and our existence eludes these.  But he is not an irrationalist either.  Irrationalism holds that feeling, or will, or instinct are more valuable and indeed more truthful than reason-as in fact, from the point of view of life itself, they are.  But irrationalism surrenders the field of thinking to rationalism and thereby secretly comes to share the assumptions of its enemy.  What is needed is a more fundamental kind of thinking that will cut under both opposites.”

Heidegger’s intention seems to involve carving out a space for thinking that connects to the most basic elements of our experience, and which connects to the grounding for all of our experience.  He thinks that we’ve almost completely submerged our lives in reason or calculative thinking and that this has caused us to lose our connection with what he calls Being, and this is analogous to Kierkegaard’s claim of reason having estranged us from faith:

“Both Kierkegaard and Nietzsche point up a profound dissociation, or split, that has taken place in the being of Western man, which is basically the conflict of reason with the whole man.  According to Kierkegaard, reason threatens to swallow up faith; Western man now stands at a crossroads forced to choose either to be religious or to fall into despair…Now, the estrangement from Being itself is Heidegger’s central theme.”

Where or when did this estrangement from our roots first come to be?  Many may be tempted to say it was coincident with the onset of modernity, but perhaps it happened long before that:

“Granted that modern man has torn himself up by his roots, might not the cause of this lie farther back in his past than he thinks?  Might it not, in fact, lie in the way in which he thinks about the most fundamental of all things, Being itself?”

At this point, it’s worth looking at Heidegger’s overall view of man, and see how it might be colored by his own upbringing in southern Germany:

“The picture of man that emerges from Heidegger’s pages is of an earth-bound, time-bound, radically finite creature-precisely the image of man we should expect from a peasant, in this case a peasant who has the whole history of Western philosophy at his fingertips.”

So Heidegger was in an interesting position by wanting to go back to one of the most basic questions ever asked in philosophy, but to do so with the knowledge of the philosophy that had been developed ever since, hopefully giving him a useful outline of what went wrong in that development.

1. Being

“He has never ceased from that single task, the “repetition” of the problem of Being: the standing face to face with Being as did the earliest Greeks.  And on the very first pages of Being and Time he tells us that this task involves nothing less than the destruction of the whole history of Western ontology-that is, of the way the West has thought about Being.”

Sometimes the only way to make progress in philosophy is through a paradigm shift of some kind, where the most basic assumptions underlying our theories of the world and our existence are called into question; and we may end up having to completely discard what we thought we knew and start over, in order to follow a new path and discover a new way of looking at the problem we first began with.

For Heidegger, the problem all comes down to how we look at beings versus being (or Being), and in particular how little attention we’ve given the latter:

“Now, it is Heidegger’s contention that the whole history of Western thought has shown an exclusive preoccupation with the first member of these pairs, with the thing-which-is, and has let the second, the to-be of what is, fall into oblivion.  Thus that part of philosophy which is supposed to deal with Being is traditionally called ontology-the science of the thing-which-is-and not einai-logy, which would be the study of the to-be of Being as opposed to beings…What it means is nothing less than this: that from the beginning the thought of Western man has been bound to things, to objects.”

And it’s the priority we’ve given to positing and categorizing various types of beings that has distracted us from really contemplating the grounding for any and all beings, Being itself.  It has more or less been taken for granted, or seen as too tenuous or insubstantial to give any attention to.  We can attribute our lack of attention toward Being, at least in part, to how we’ve understood it as contingent on the existence of particular objects:

“Once Being has been understood solely in terms of beings, things, it becomes the most general and empty of concepts: ‘The first object of the understanding,’ says St. Thomas Aquinas, ‘that which the intellect conceives when it conceives of anything.’ “

Aquinas seems to have pointed out the obvious here: we can’t easily think of the concept of Being without thinking of beings, and once we’ve thought of beings, we tend to think of their fact of being as not adding anything useful to our conception of the beings themselves (similar to Kant’s claim that existence is not a real predicate that adds anything to the concept of a thing).  Heidegger wants to turn this common assumption on its head:

“Being is not an empty abstraction but something in which all of us are immersed up to our necks, and indeed over our heads.  We all understand the meaning in ordinary life of the word ‘is,’ though we are not called upon to give a conceptual explanation of it.  Our ordinary human life moves within a preconceptual understanding of Being, and it is this everyday understanding of Being in which we live, move, and have our Being that Heidegger wants to get at as a philosopher.”

Since we live our lives with an implicit understanding of Being, and since it permeates everything in our experience, it seems reasonable that we can turn our attention toward it and build up a more explicit conceptualization or understanding of it.  To do this carefully, without building in too many abstract assumptions or uncertain inferences, we need to make special use of phenomenology.  Heidegger turns to Edmund Husserl’s work to get this project off the ground.

2. Phenomenology and Human Existence

“Instead of making intellectual speculations about the whole of reality, philosophy must turn, Husserl declared, to a pure description of what is…Heidegger accepts Husserl’s definition of phenomenology: he will attempt to describe, he says, and without any obscuring preconceptions, what human existence is.”

And this means that we need to try to analyze our experience without importing the plethora of assumptions and ideas that we’ve been taught about our existence.  We have to attempt a kind of phenomenological reduction, where we suspend our judgment about the way the world works, which will include leaving aside (at least temporarily) most if not all of science and its various theories pertaining to how reality is structured.

Heidegger also makes many references to language in his work and, perhaps sharing a common thread with Wittgenstein’s views, dissects language to get at what we really mean by certain words, revealing important nuances that are taken for granted and how our words and concepts constrain our thinking to some degree:

“Heidegger’s perpetual digging at words to get at their hidden nuggets of meaning is one of his most exciting facets…(The thing for itself) will reveal itself to us, he says, only if we do not attempt to coerce it into one of our ready-made conceptual strait jackets.”

Truth is another concept that needs to be reformulated for Heidegger, and he first points out how the Greek word for truth, aletheia, means “un-hiddenness” or “revelation”; thus he believes that truth lies in finding out what has been hidden from us.  This goes against our usual view of what is meant by “truth”, where we most often think of it as a status ascribed to propositions that correspond to actual facts about the world.  Since propositions can’t exist without minds, modern conceptions of truth are limited in a number of ways, which Heidegger wants to find a way around:

“…truth is therefore, in modern usage, to be found in the mind when it has a correct judgment about what is the case.  The trouble with this view is that it cannot take account of other manifestations of truth.  For example, we speak of the “truth” of a work of art.  A work of art in which we find truth may actually have in it no propositions that are true in this literal sense.  The truth of a work of art is in its being a revelation, but that revelation does not consist in a statement or group of statements that are intellectually correct.  The momentous assertion that Heidegger makes is that truth does not reside primarily in the intellect, but that, on the contrary, intellectual truth is in fact a derivative of a more basic sense of truth.”

This more basic sense of truth will be explored more later on, but the important takeaway is that truth for Heidegger seems to involve the structure of our experience and of the beings within that experience; and it lends itself toward a kind of epistemological relativism, depending on how a human being interprets the world they’re interacting with.

On the other hand, if we think about truth as a matter of deciding what we should trust or doubt in terms of what we’re experiencing, we may wind up in a position of solipsism similar to Descartes’, where we come to doubt the “external world” altogether:

“…Descartes’ cogito ergo sum moment, is the point at which modern philosophy, and with it the modern epoch, begins: man is locked up in his own ego.  Outside him is the doubtful world of things, which his science has taught him are really not in the least like their familiar appearances.  Descartes got the external world back through a belief in God, who in his goodness would not deceive us into believing that this external world existed if it really did not.  But the ghost of subjectivism (and solipsism too) is there and haunts the whole of modern philosophy.”

While Descartes pulled himself out of solipsism through the added assumption of another (benevolent) mind existing that was responsible for any and all experience, I think there’s a far easier and less ad hoc solution to this problem.  First, we need to realize that we wouldn’t be able to tell the difference between an experience of an artificial or illusory external reality and one that was truly real independent of our conscious experience; this means that solipsism is entirely indeterminable, and so it’s almost pointless to worry about.  Second, we don’t feel that we are the sole authors of every aspect of our experience anyway; otherwise there would be nothing unpredictable or uncertain in any part of our experience, and we’d know everything that will ever happen before it happens, without so much as an inkling of surprise or confusion.

This is most certainly not the case, which means we can be confident that we are not the sole authors of our experience, and that there really is a reality or a causal structure that is independent of our own will and expectations.  So it’s conceptually simpler to define reality in the broadest sense of the term, as nothing more or less than what we experience at any point in time.  And because we experience what appear to be other beings that are just like ourselves, each appearing to have their own experiences of reality, it’s more natural and intuitive to treat them as such regardless of whether they’re really figments of our imagination or products of some unknown higher level reality like The Matrix.

Heidegger does something similar here, where he views our being in the “external” world as essential to who we are, and thus he doesn’t think we can separate ourselves from the world, like Descartes himself did as a result of his methodological skepticism:

“Heidegger destroys the Cartesian picture at one blow: what characterizes man essentially, he says, is that he is Being-in-the-world.  Leibniz had said that the monad has no windows; and Heidegger’s reply is that man does not look out upon an external world through windows, from the isolation of his ego: he is already out-of-doors.  He is in the world because, existing, he is involved in it totally.”

This is fundamentally different from the traditional conception of an ego or self that may or may not exist in a world; within this conception of human beings, one can’t treat the ego as separable from the world that it’s embedded in.  One way to look at this is to imagine that we were separated from the world forever, and then ask ourselves if the mode of living that remains (if any) in the absence of any world is still fundamentally human.  It’s an interesting thought experiment to imagine us somehow as a disembodied mind “residing” within an infinite void of nothingness, but if this were really done for eternity and not simply for a short while where we knew we could later return to the world we left behind, then we’d have no more interactions or relations with any objects or other beings and no mode of living at all.  I think it’s perfectly reasonable to conclude in this case that we would have lost something that we need in order to truly be a human being.  If one fails to realize this, then they’ve simply failed to truly grasp the conditions called for in the thought experiment.

Heidegger drives this point further by describing existence or more appropriately our Being as analogous to a non-localized field as in theoretical physics:

“Existence itself, according to Heidegger, means to stand outside oneself, to be beyond oneself.  My Being is not something that takes place inside my skin (or inside an immaterial substance inside that skin); my Being, rather, is spread over a field or region which is the world of its care and concern.  Heidegger’s theory of man (and of Being) might be called the Field Theory of Man (or the Field Theory of Being) in analogy with Einstein’s Field Theory of Matter, provided we take this purely as an analogy; for Heidegger would hold it a spurious and inauthentic way to philosophize to derive one’s philosophic conclusions from the highly abstract theories of physics.  But in the way that Einstein took matter to be a field (a magnetic field, say)- in opposition to the Newtonian conception of a body as existing inside its surface boundaries-so Heidegger takes man to be a field or region of Being…Heidegger calls this field of Being Dasein.  Dasein (which in German means Being-there) is his name for man.”

So we can see our state of Being (which Heidegger calls Dasein) as one that’s spread out over a number of different interactions and relations we have with other beings in the world.  In some sense we can think of the sum total of all the actions, goals, and beings that we concern ourselves with as constituting our entire field of Being.  And each being, especially each human being, shares this general property of Being even though each of those beings will be embedded within their own particular region of Being with their own set of concerns (even if these sets overlap in some ways).

What I find really fascinating is Heidegger’s dissolution of the subject-object distinction:

“That Heidegger can say everything he wants to say about human existence without using either “man” or “consciousness” means that the gulf between subject and object, or between mind and body, that has been dug by modern philosophy need not exist if we do not make it.”

I mentioned Wittgenstein earlier because I think Heidegger’s philosophy has a few parallels with his.  For instance, Wittgenstein treats language and meaning as inherently connected to how we use words and concepts, and thus as dependent on the contextual relations between how or what we communicate and what our goals are, as opposed to treating words like categorical objects with strict definitions and well-defined semantic boundaries.  For Wittgenstein, language can’t be separated from our use of it in the world, even though we often treat it as if it can be (and assuming we can is what often creates problems in philosophy); this is similar to how Heidegger describes us human beings as inherently inseparable from our extension in the world, as manifested in our many relations and concerns within that world.

Heidegger’s reference to Being as some kind of field or region of concern can be illustrated well by considering how a child comes to learn their own name (or to respond to it anyway):

“He comes promptly enough at being called by name; but if asked to point out the person to whom the name belongs, he is just as likely to point to Mommy or Daddy as to himself-to the frustration of both eager parents.  Some months later, asked the same question, the child will point to himself.  But before he has reached that stage, he has heard his name as naming a field or region of Being with which he is concerned, and to which he responds, whether the call is to come to food, to mother, or whatever.  And the child is right.  His name is not the name of an existence that takes place within the envelope of his skin: that is merely the awfully abstract social convention that has imposed itself not only on his parents but on the history of philosophy.  The basic meaning the child’s name has for him does not disappear as he grows older; it only becomes covered over by the more abstract social convention.  He secretly hears his own name called whenever he hears any region of Being named with which he is vitally involved.”

This is certainly an interesting interpretation of this facet of child development.  We might presume that the current social conventions which eventually attach our name to our body arise out of a number of pragmatic considerations, such as wanting to provide a label for each conscious agent that makes decisions and behaves in ways that affect other conscious agents.  And because our regions of Being overlap in some ways but not in others (i.e. some of my concerns, though not all of them, are your concerns too), it also makes sense to differentiate between all the different “sets of concerns” that comprise each human being we interact with.  But the way we tend to do this is by abstracting those sets of concerns, localizing them in time and space as “mine” and “yours” rather than the way they truly exist in the world.

Nevertheless, we could (in principle anyway) use an entirely different convention where a person’s name is treated as a representation of a non-localized set of causal structures and concerns, almost as if their body extended in some sense to include all of this, where each concern or facet of their life is like a dynamic appendage emanating out from their most concentrated region of Being (the body itself).  Doing so would markedly change the way we see one another, the way we see objects, etc.

Perhaps our state of consciousness can mislead us into forgetting about the non-locality of our field of Being, because our consciousness is spatially constrained to one locus of experience; and perhaps this is the reason why philosophers going all the way back to the Greeks have linked existence directly with our propensity to reflect in solitude.  But this solitary reflection also misleads us into forgetting the average “everydayness” of our existence, where we are intertwined with our community like a sheep directed by the rest of the herd:

“…none of us is a private Self confronting a world of external objects.  None of us is yet even a Self.  We are each simply one among many; a name among the names of our schoolfellows, our fellow citizens, our community.  This everyday public quality of our existence Heidegger calls “the One.”  The One is the impersonal and public creature whom each of us is even before he is an I, a real I.  One has such-and-such a position in life, one is expected to behave in such-and-such a manner, one does this, one does not do that, etc.  We exist thus in a state of “fallen-ness”, according to Heidegger, in the sense that we are as yet below the level of existence to which it is possible for us to rise.  So long as we remain in the womb of this externalized and public existence, we are spared the terror and the dignity of becoming a Self.”

Heidegger’s concept of fallen-ness illustrates our primary mode of living: sometimes we think of ourselves as individuals striving toward our personal, self-authored goals, but most of the time we’re succumbing to society’s expectations of who we are, what we should become, and how we ought to judge the value of our own lives.  We shouldn’t be surprised by this fact, as we’ve evolved as a social species, which means our inclination toward a herd mentality is in some sense instinctual and therefore becomes the path of least resistance.  But because of our complex cognitive and behavioral versatility, we can also consciously rise above this collective limitation; and even when we don’t strive to, our position within the safety of the collective can be torn away from us by the chaos that rains down on us from life’s many contingencies:

“But…death and anxiety (and such) intrude upon this fallen state, destroy our sheltered position of simply being one among many, and reveal to us our own existence as fearfully and irremediably our own.  Because it is less fearful to be “the One” than to be a Self, the modern world has wonderfully multiplied all the devices of self-evasion.”

It is only by rising out of this state of fallenness, that is, by recovering some measure of true individuality, that Heidegger thinks we can live authentically.  But in any case:

“Whether it be fallen or risen, inauthentic or authentic, counterfeit copy or genuine original, human existence is marked by three general traits: 1) mood or feeling; 2) understanding; 3) speech.  Heidegger calls these existentialia and intends them as basic categories of existence (as opposed to more common ones like quantity, quality, space, time, etc.).”

Heidegger’s use of these traits in describing our existence shouldn’t be confused with some set of internal mental states but rather we need to include them within a conception of Being or Dasein as a field that permeates the totality of our existence.  So when we consider the first trait, mood or feeling, we should think of each human being as being a mood rather than simply having a mood.  According to Heidegger, it is through moods that we feel a sense of belonging to the world, and thus, if we didn’t have any mood at all, then we wouldn’t find ourselves in a world at all.

It’s also important to understand that he doesn’t see moods as some kind of state of mind as we would typically take them to be, but rather that all states of mind presuppose this sense of belonging to a world in the first place; they presuppose a mood in order to have the possibilities for any of those states of mind.  And as Barrett mentions, not all moods are equally important for Heidegger:

“The fundamental mood, according to Heidegger, is anxiety (Angst); he does not choose this as primary out of any morbidity of temperament, however, but simply because in anxiety this here-and-now of our existence arises before us in all its precarious and porous contingency.”

It seems that he finds anxiety to be fundamental in part because it is a mood that opens us up to see Being for what it really is: a field of existence centered around the individual rather than society and its norms and expectations; and this perspective largely results from the fact that when we’re in anxiety all practical significance melts away and the world as it was no longer seems to be relevant.  Things that we took for granted now become salient and there’s a kind of breakdown in our sense of everyday familiarity.  If what used to be significant and insignificant reverse roles in this mood, then we can no longer misinterpret ourselves as some kind of entity within the world (the world of “Them”; the world of externally imposed identity and essence); instead, we finally see ourselves as within, or identical with, our own world.

Heidegger also seems to see anxiety as always there even though it may be covered up (so to speak).  Perhaps it’s hiding most of the time because the fear of being a true Self, and facing our individuality with the finitude, contingency, and responsibility that goes along with it, is often too much for one to bear; so we blind ourselves by shifting into a number of other, less fundamental moods which enhance the significance of the usual day-to-day world, even if the possibility for Being to “reveal itself” to us is just below the surface.

If we think of moods as a way of our feeling a sense of belonging within a world, where they effectively constitute the total range of ways that things can have significance for us, then we can see how our moods ultimately affect and constrain our view of the possibilities that our world affords us.  And this is a good segue to consider Heidegger’s second trait of Dasein or human Being, namely that of understanding.

“The “understanding” Heidegger refers to here is not abstract or theoretical; it is the understanding of Being in which our existence is rooted, and without which we could not make propositions or theories that can claim to be “true”.  Whence comes this understanding?  It is the understanding that I have by virtue of being rooted in existence.”

I think Heidegger’s concept of understanding (verstehen) is more or less an intuitive sense of knowing how to manipulate the world in order to make use of it for some goal or other.  By having certain goals, we see the world and the entities in that world colored by the context of those goals.  What we desire then will no doubt change the way the world appears to us in terms of what functionality we can make the most use of, what tools are available to us, and whether we see something as a tool or not.  And even though our moods may structure the space of possibilities that we see the world present to us, it is our realizing what these possibilities are that seems to underlie this form of understanding that Heidegger’s referring to.

We also have no choice but to use our past experience to interpret the world, determine what we find significant and meaningful, and to further drive the evolution of our understanding; and of course this process feeds back in on itself by changing our interpretation of the world which changes our dynamic understanding yet again.  And here we might make the distinction between an understanding of the possible uses for various objects and processes, and the specific interpretation of which possibilities are relevant to the task at hand thereby making use of that understanding.

If I’m sitting at my desk with a cup of coffee and a stack of papers in front of me that need to be sorted, my understanding of these entities affords me a number of possibilities depending on my specific mode of Being or what my current concerns are at the present moment.  If I’m tired or thirsty or what-have-you I may want to make use of that cup of coffee to drink from it, but if the window is open and the wind is starting to blow the stack of papers off my desk, I may decide to use my cup of coffee as a paper weight instead of a vessel to drink from.  And if a fly begins buzzing about while I’m sorting my papers and starts to get on my nerves, distracting me from the task at hand, I may decide to temporarily roll up my stack of papers and use it as a fly swatter.  My understanding of all of these objects in terms of their possibilities allows me to interpret an object as a particular possibility that fits within the context of my immediate wants and needs.

I like to think of understanding and interpretation, whether in the traditional sense or in Heidegger’s terms, as ultimately based on our propensity to make predictions about the world and its causal structure.  As a proponent of the predictive processing framework of brain function, I see the ultimate purpose of brains as being prediction generators, where the brain serves to organize information in order to predict the causal structure of the world we interact with; and this is done at many different levels of abstraction, so we can think of ourselves as making use of many smaller-scale modules of understanding as well as various larger-scale assemblies of those individual modules.  In some cases the larger-scale assemblies constrain which smaller-scale modules can be used, and in other cases the smaller-scale modules are prioritized, which constrains or directs the shape of the larger-scale assemblies.

Another way to say this is that our brains make use of a number of lower level and higher level predictions (any of which may change over time based on new experiences or desires), and depending on the context we find ourselves in, one level may have more or less influence on another.  My lowest level predictions may include things like what each “pixel” in my visual field is going to be from moment to moment; a slightly higher level prediction may include things like what a specific coffee cup looks like, its shape, weight, etc.; a still higher level prediction may include what coffee cups in general look like, their range of shapes, weights, etc.; and eventually we get to higher levels of prediction that may include what we expect an object can be used for.
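Since this hierarchical picture is really a claim about an architecture, it may help to see it as a toy computation.  The following sketch is entirely my own illustration (nothing from Barrett or Heidegger, and the class name, learning rates, and update rule are all hypothetical simplifications): a higher level holds a slowly changing expectation (coffee cups in general), a lower level tracks concrete observations (this cup, right now), and prediction errors at each level revise that level’s guess.

```python
# Toy sketch of hierarchical prediction -- a hypothetical illustration only.
class Level:
    def __init__(self, prediction, learning_rate):
        self.prediction = prediction        # current best guess at this level
        self.learning_rate = learning_rate  # how readily this level revises

    def update(self, observed):
        # Prediction error drives revision, scaled by the learning rate.
        error = observed - self.prediction
        self.prediction += self.learning_rate * error
        return error

# The higher level changes slowly; the lower level changes quickly.
high = Level(prediction=0.0, learning_rate=0.1)
low = Level(prediction=0.0, learning_rate=0.6)

for observation in [1.0, 1.0, 1.0, 1.0]:
    low.update(observation)      # the lower level tracks the incoming input
    high.update(low.prediction)  # the higher level tracks the lower level
    # The revised higher-level expectation constrains the next lower-level
    # prediction (context shaping interpretation).
    low.prediction = 0.5 * (low.prediction + high.prediction)

# After a few observations the low level has moved most of the way toward
# the input, while the high level has shifted only slightly.
print(low.prediction, high.prediction)
```

The averaging step is a crude stand-in for the idea that higher-level expectations constrain lower-level interpretation while lower-level errors slowly reshape the higher-level picture; a real predictive processing model would be far more elaborate, but the two-way constraint is the point.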

All of these predictions taken as a whole constitute our understanding of the world and the specific sets of predictions that come into play at any point in time, depending on our immediate concerns and where our attention is directed, constitute our interpretation of what we’re presently experiencing.  Barrett points to the fact that what we consider to be truth in the most primitive sense is this intuitive sense of understanding:

“Truth and Being are thus inseparable, given always together, in the simple sense that a world with things in it opens up around man the moment he exists.  Most of the time, however, man does not let himself see what really happens in seeing.”

This process of the world opening itself up to us, such that we have an immediate sense of familiarity with it, is a very natural and automated process and so unless we consciously pull ourselves out of it to try and externally reflect on this situation, we’re just going to take our understanding and interpretation for granted.  But regardless of whether we reflect on it or not, this process of understanding and interpretation is fundamental to how we exist in the world, and thus it is fundamental to our Being in the world.

Speech is the last trait of Dasein that Heidegger mentions, and it is intimately connected with the other two traits, mood and understanding.  So what exactly is he referring to by speech?

“Speech:  Language, for Heidegger, is not primarily a system of sounds or of marks on paper symbolizing those sounds.  Sounds and marks upon paper can become language only because man, insofar as he exists, stands within language.”

This is extremely reminiscent of Wittgenstein once again, since Wittgenstein views language in terms of how it permeates our day-to-day lives within a community or social group of some kind.  Language isn’t some discrete set of symbols with fixed meanings, but rather a much more dynamic, context-dependent communication tool.  We primarily use language to accomplish some specific goal or other in our day-to-day lives, and this is a fundamental aspect of how we exist as human beings.  Because language is attached to its use, the meaning of any word or utterance (if any) is derived from its use as well.  Sounds and marks on paper are meaningless unless we first ascribe a meaning to them based on how they’ve been used in the past and how they’re being used in the present.

And as soon as we pick up language during our childhood development, it becomes automatically associated with our thought processes: we begin to think in our native language, and both real perceptions and figments of our imagination are instantly attached to some linguistic label in our heads.  If I showed you a picture of an elephant, the word elephant would no doubt enter your stream of consciousness, even without my saying the word or asking you what animal you saw in the picture.  Whether our thinking linguistically comes about as a result of having learned a specific language, or is an innate feature of how our brains function, language permeates our thought as well as our interactions with others in the world.

While we tend to think of language as involving words, sounds, and so forth, Heidegger has an interesting take on what else falls under the umbrella of language:

“Two people are talking together.  They understand each other, and they fall silent-a long silence.  This silence is language; it may speak more eloquently than any words.  In their mood they are attuned to each other; they may even reach down into that understanding which, as we have seen above, lies below the level of articulation.  The three-mood, understanding, and speech (a speech here that is silence)-thus interweave and are one.”

I’m not sure if Heidegger is only referring to silence itself here or if he’s also including body language, which often continues even after we’ve stopped talking and which sometimes speaks better than verbal language ever could.  In silence, we often get a chance to simply read another person in a much more basic and instinctual way, ascertaining their mood, and allowing any language that may have recently transpired to be interpreted differently than if a long silence had never occurred.  Heidegger refers to what happens in silence as an attunement between fellow human beings:

“…Nor is this silence merely a gap in our chatter; it is, rather, the primordial attunement of one existent to another, out of which all language-as sounds, marks, and counters-comes.  It is only because man is capable of such silence that he is capable of authentic speech.  If he ceases to be rooted in that silence all his talk becomes chatter.”

We may, however, also want to consider silence when it is accompanied by action, since our actions speak louder than words and we gain information primarily in this way anyway: by simply watching another do something in the world.  And language can only work within the context of mutual understanding, which necessarily involves action (at some point), so perhaps we could say that action at least partially constitutes the “silence” that Heidegger refers to.

Heidegger’s entire “Field Theory” of Being relies on context, and Barrett mentions this as well:

“…we might just as well call it a contextual theory of Being.  Being is the context in which all beings come to light-and this means those beings as well that are sounds or marks on paper…Men exist “within language” prior to their uttering sounds because they exist within a mutual context of understanding, which in the end is nothing but Being itself.”

I think that we could interpret this as saying that in order for us to be able to interact with one another and with the world, there’s a prerequisite context of meaning and of what matters to us, already in play in order for those interactions to be possible in the first place.  In other words, whether we’re talking to somebody or simply trying to cook our breakfast, all of our actions presuppose some background of understanding and a sense of what’s significant to us.

3. Death, Anxiety, Finitude

Heidegger views the concept of death as an important one especially as it relates to a proper understanding of the concept of Being, and he also sees the most common understanding of death as fundamentally misguided:

“The authentic meaning of death-“I am to die”-is not as an external and public fact within the world, but as an internal possibility of my own Being.  Nor is it a possibility like a point at the end of a road, which I will in time reach.  So long as I think in this way, I still hold death at a distance outside myself.  The point is that I may die at any moment, and therefore death is my possibility now…Hence, death is the most personal and intimate of possibilities, since it is what I must suffer for myself: nobody else can die for me.”

I definitely see the merit in viewing death as an imminent possibility in order to fully grasp the implications it has on how we should be living our lives.  But most people tend to consider death in very abstract terms, where it’s only thought about in the periphery as something that will happen “some day” in the future; and I think we owe this abstraction and partial repression of death to our own fear of it.  In many other cases, this fear of death manifests itself in supernatural beliefs that posit life after death, the resurrection of the dead, immortality, and other forms of magic; and importantly, all of these psychologically motivated tactics prevent one from actually accepting the truth about death as an integral part of our existence as human beings.  By denying the truth about death, one is denying the true value of the life they actually have.  On the other hand, by accepting it as an extremely personal and intimate attribute of ourselves, we can live more authentically.

We can also see that by viewing death in more personal terms, we are pulled out of the world of “Them”, the collective world of social norms and externalized identities, and are better able to connect with our sense of being a Self:

“Only by taking my death into myself, according to Heidegger, does an authentic existence become possible for me.  Touched by this interior angel of death, I cease to be the impersonal and social One among many…and I am free to become myself.”

Thinking about death may make us uncomfortable, but if I know I’m going to die and could die at any moment, then shouldn’t I prioritize my personal endeavors to reflect this radical contingency?  It’s hard to argue with that conclusion, but even if we should be doing this, it’s understandably easy to get lost in our mindless day-to-day routines and superficial concerns, and then we simply lose sight of what we ought to be doing with our limited time.  It’s important to break out of this pattern, or at least to strive to, and in doing so we can begin to center our lives around truly personal goals and projects:

“Though terrifying, the taking of death into ourselves is also liberating: It frees us from servitude to the petty cares that threaten to engulf our daily life and thereby opens us to the essential projects by which we can make our lives personally and significantly our own.  Heidegger calls this the condition of "freedom-toward-death" or "resoluteness."”

The fear of death is also unique and telling in the sense that it isn’t really directed at any object but rather it is directed at Nothingness itself.  We fear that our lives will come to an end and that we’ll one day cease to be.  Heidegger seems to refer to this disposition as anxiety rather than fear, since it isn’t really the fear of anything but rather the fear of nothing at all; we just treat this nothing as if it were an object:

“Anxiety is not fear, being afraid of this or that definite object, but the uncanny feeling of being afraid of nothing at all.  It is precisely Nothingness that makes itself present and felt as the object of our dread.”

Accordingly, the concept of Nothingness is integral to the concept of Being, since it’s interwoven in our existence:

“In Heidegger Nothingness is a presence within our own Being, always there, in the inner quaking that goes on beneath the calm surface of our preoccupation with things.  Anxiety before Nothingness has many modalities and guises: now trembling and creative, now panicky and destructive; but always it is as inseparable from ourselves as our own breathing because anxiety is our existence itself in its radical insecurity.  In anxiety we both are and are not, at one and the same time, and this is our dread.”

What’s interesting about this perspective is the apparent asymmetry between existence and non-existence, at least insofar as it relates to human beings.  Non-existence becomes a principal concern for us as a result of our kind of existence, but non-existence itself carries with it no concerns at all.  In other words, non-existence doesn’t refer to anything at all, whereas (human) existence refers to everything as well as to nothing (non-existence).  And it seems to me that anxiety and our relation to Nothingness depend entirely on our inherent capacity for imagination, which we use to simulate possibilities; if this imagination is capable of rendering the possibility of our own existence coming to an end, then that possibility may forcefully reorganize the relative significance of all other products of our imagination.

As confusing as it may sound, death is the possibility of ending all possibilities for any extant being.  If imagining such a possibility were not enough to fundamentally change one’s perspective of their own life, then I think that combining it with the knowledge of its inevitability, that this possibility is also a certainty, ought to.  Another interesting feature of our existence is the fact that death isn’t the only possibility that relates to Nothingness, for anytime we imagine a possibility that has not yet been realized or even if we conceive of an actuality that has been realized, we are making reference to that which is not; we are referring to what we are not, to what some thing or other is not and ultimately to that which does not exist (at a particular time and place).  I only know how to identify myself, or any object in the world for that matter, by distinguishing it from what it is not.

“Man is finite because the “not”-negation-penetrates the very core of his existence.  And whence is this “not” derived?  From Being itself.  Man is finite because he lives and moves within a finite understanding of Being.”

As we can see, Barrett describes how, within Heidegger’s philosophy, negation is treated as something that we live and move within, and this makes sense when we consider what identity itself depends on.  And the mention of finitude is interesting because it’s not simply a limitation in the sense of what we aren’t capable of (e.g. omnipotence, immortality, perfection, etc.), but rather the fact that our mode of existence entails making discriminations between what is and what isn’t, between what is possible and what is not, between what is actual and what is not.

4. Time and Temporality; History

The last section on Heidegger concerns the nature of time itself, and it begins by pointing out the relation between negation and our experience of time:

“Our finitude discloses itself essentially in time.  In existing…we stand outside ourselves at once open to Being and in the open clearing of Being; and this happens temporally as well as spatially.  Man, Heidegger says, is a creature of distance: he is perpetually beyond himself, his existence at every moment opening out toward the future.  The future is the not-yet, and the past is the no-longer; and these two negatives-the not-yet and the no-longer-penetrate his existence.  They are his finitude in its temporal manifestation.”

I suppose we could call this projection towards the future the teleological aspect of our Being.  In order to do anything within our existence, we must have a goal or a number of them, and these goals refer to possible future states of existence that can only be confirmed or disconfirmed once our growing past subsumes them as actualities or lost opportunities.  We might even say that the present is somewhat of an illusion (though as far as I know Heidegger doesn’t make this claim) because in a sense we’re always living in the future and in the past, since the person we see ourselves as and the person we want to be are a manifestation of both; whereas the present itself seems to be nothing more than a fleeting moment that only references what’s in front or behind itself.

In addition to this, if we look at how our brain functions from a predictive coding perspective, we can see that it generates predictive models based on our past experiences, and those models are pure attempts to predict the future; nowhere does the present come into play.  The present, as we experience it, is nothing but a prediction of what is yet to come.  I think this is important to take into account when considering how we experience time and how that experience should be incorporated into a concept of Being.

Heidegger’s concept of temporality is also tied to the concept of death that we explored earlier.  He sees death as necessary to put our temporal existence into a true human perspective:

“We really know time, says Heidegger, because we know we are going to die.  Without this passionate realization of our mortality, time would be simply a movement of the clock that we watch passively, calculating its advance-a movement devoid of human meaning…Everything that makes up human existence has to be understood in the light of man’s temporality: of the not-yet, the no-longer, the here-and-now.”

It’s also interesting to consider what I’ve previously referred to as the illusion of persistent identity, an idea that’s been explored by a number of philosophers for centuries: we think of our past self as being the same person as our present or future self.  Now whether the present is actually an illusion or not is irrelevant to this point; the point is that we think of ourselves as always being there, in our memories and at any moment of time.  The fact that we are all born into a society that gives us a discrete and fixed name adds to this illusion of our having an identity that is fixed as well.  But we are not the same person that we were ten years ago, let alone the same person we were as a five-year-old child.  We may share some memories with those previous selves, but even those change over time and are often altered and reconstructed upon recall.  Our values, our ontology, our language, and our primary goals in life are all changing to varying degrees over time, and we mustn’t lose sight of this fact either.  I think this “dynamic identity” is a fundamental aspect of our Being.

We might explain how we’re taken in by the illusion of a persistent identity by noting once again that our identity is fundamentally projected toward the future.  Since that projection changes over time (as our goals change), and since it is a process that relies on a kind of psychological continuity, we might expect to simply focus on the projection itself rather than on what that projection used to be.  Heidegger stresses the importance of the future in our concept of Being:

“Heidegger’s theory of time is novel, in that, unlike earlier philosophers with their “nows,” he gives priority to the future tense.  The future, according to him, is primary because it is the region toward which man projects and in which he defines his own being.  “Man never is, but always is to be,” to alter slightly the famous line of Pope.”

But, since we’re always relying on past experiences in order to determine our future projection, to give it a frame of reference with which we can evaluate that projected future, we end up viewing our trajectory in terms of human history:

“All these things derive their significance from a more basic fact: namely, that man is the being who, however dimly and half-consciously, always understands, and must understand, his own being historically.”

And human history, in a collective sense, also plays a role, since our view of ourselves and of the world is unavoidably influenced by the cultural transmission of ideas stemming from our recent past all the way back to thousands of years ago.  A lot of that influence has emanated from the Greeks especially (as Barrett has mentioned throughout Irrational Man), and it has had a substantial impact on our conceptualization of Being as well:

“By detaching the figure from the ground the object could be made to emerge into the daylight of human consciousness; but the sense of the ground, the environing background, could also be lost.  The figure comes into sharper focus, that is, but the ground recedes, becomes invisible, is forgotten.  The Greeks detached beings from the vast environing ground of Being.  This act of detachment was accompanied by a momentous shift in the meaning of truth for the Greeks, a shift which Heidegger pinpoints as taking place in a single passage in Plato’s Republic, the celebrated allegory of the cave.”

Through the advent of reason and forced abstraction, the Greeks effectively separated the object from the context it’s normally embedded within.  But if a true understanding of our Being in the world can only be known contextually, then we can see why Heidegger found a crucial problem with this school of thought, which has propagated itself in Western philosophy ever since Plato.  Even the concept of truth itself underwent a dramatic change as a result of the Greeks discovering and developing the process of reason:

“The quality of unhiddenness had been considered the mark of truth; but with Plato in that passage truth came to be defined, rather, as the correctness of an intellectual judgment.”

And this concept of unhiddenness seems to be a variation of “subjective truth” or “subjective understanding”, although it shouldn’t be confused with modern subjectivism; in this case the unhiddenness would naturally precipitate from whatever coherency and meaning is immediately revealed to us through our conscious experience.  I think that the redefining of truth had less of an impact on philosophy than Heidegger may have thought; the problem wasn’t the redefining of truth per se but rather the fact that contextual understanding and subjective experience itself lost the level of importance they once had.  And I’m sure Heidegger would agree that this perceived loss of importance for both subjectivity and for a holistic understanding of human existence has led to a major shift in our view of the world, one that has likely resulted in some of the psychological pathologies sprouting up in our age of modernity.

Heidegger thought that allowing Being to reveal itself to us was analogous to an artist letting the truth reveal itself naturally:

“…the artist, as well as the spectator, must submit patiently and passively to the artistic process, that he must lie in wait for the image to produce itself; that he produces false notes as soon as he tries to force anything; that, in short, he must let the truth of his art happen to him?  All of these points are part of what Heidegger means by our letting Being be.  Letting it be, the artist lets it speak to him and through him; and so too the thinker must let it be thought.”

He seems to be saying that modern ways of thinking are too forceful in the sense of our always prioritizing the subject-object distinction in our way of life.  We’ve let the obsessive organization of our lives and the various beings within those lives drown out our appreciation of, and ability to connect to, Being itself.  I think that another way we could put this is to say that we’ve split ourselves off from the sense of self that has a strong connection to nature and this has disrupted our ability to exist in a mode that’s more in tune with our evolutionary psychology, a kind of harmonious path of least resistance.  We’ve become masters over manipulating our environment while simultaneously losing our innate connection to that environment.  There’s certainly a lesson to be learned here even if the problem is difficult to articulate.

•      •      •      •      •

This concludes William Barrett’s chapter on Martin Heidegger.  I’ve gotta say that I find Heidegger’s views fascinating, as they serve to illustrate a radically different way of viewing the world and human existence; and for those who take his philosophy seriously, it forces a re-evaluation of much of the Western philosophical thought that we’ve been taking for granted, and which has constrained a lot of our thinking.  I think it’s both refreshing and worthwhile to explore some of these novel ideas, as they help us see life through an entirely new lens.  We need a shift in our frame of mind, a willingness to think outside the box, so that we have a better chance of finding new solutions to many of the problems we face in today’s world.  In the next post for this series on William Barrett’s Irrational Man, I’ll be looking at the philosophy of Jean-Paul Sartre.

Irrational Man: An Analysis (Part 3, Chapter 7: Kierkegaard)

Part III – The Existentialists

In the last post in this series on William Barrett’s Irrational Man, we examined how existentialism was influenced by a number of poets and novelists including several of the Romantics and the two most famous Russian authors, namely Dostoyevsky and Tolstoy.  In this post, we’ll be entering part 3 of Barrett’s book, and taking a look at Kierkegaard specifically.

Chapter 7 – Kierkegaard

Søren Kierkegaard is often considered to be the first existentialist philosopher and so naturally Barrett begins his exploration of individual existentialists here.  Kierkegaard was a brilliant man with a broad range of interests; he was a poet as well as a philosopher, exploring theology and religion (Christianity in particular), morality and ethics, and various aspects of human psychology.  The fact that he was also a devout Christian can be seen throughout his writings, where he attempted to delineate his own philosophy of religion with a focus on the concepts of faith, doubt, and also his disdain for any organized forms of religion.  Perhaps the most influential core of his work is the focus on individuality, subjectivity, and the lifelong search to truly know oneself.  In many ways, Kierkegaard paved the way for modern existentialism, and he did so with a kind of poetic brilliance that made clever use of both irony and metaphor.

Barrett describes Kierkegaard’s arrival in human history as the onset of an ironic form of intelligence intent on undermining itself:

“Kierkegaard does not disparage intelligence; quite the contrary, he speaks of it with respect and even reverence.  But nonetheless, at a certain moment in history this intelligence had to be opposed, and opposed with all the resources and powers of a man of brilliant intelligence.”

Kierkegaard did in fact value science as a methodology and as an enterprise, and he also saw the importance of objective knowledge; but he strongly believed that the most important kind of knowledge or truth was that which was derived from subjectivity; from the individual and their own concrete existence, through their feelings, their freedom of choice, and their understanding of who they are and who they want to become as an individual.  Since he was also a man of faith, this meant that he had to work harder than most to manage his own intellect in order to prevent it from enveloping the religious sphere of his life.  He felt that he had to suppress the intellect at least enough to maintain his own faith, for he couldn’t imagine a life without it:

“His intellectual power, he knew, was also his cross.  Without faith, which the intelligence can never supply, he would have died inside his mind, a sickly and paralyzed Hamlet.”

Kierkegaard saw faith as of the utmost importance to our particular period in history as well, since Western civilization had effectively become disconnected from Christianity, unbeknownst to the majority of those living in Kierkegaard’s time:

“The central fact for the nineteenth century, as Kierkegaard (and after him Nietzsche, from a diametrically opposite point of view) saw it, was that this civilization that had once been Christian was so no longer.  It had been a civilization that revolved around the figure of Christ, and was now, in Nietzsche’s image, like a planet detaching itself from its sun; and of this the civilization was not yet aware.”

Indeed, in Nietzsche’s The Gay Science, we hear of a madman roaming around one morning who makes such a pronouncement to a group of non-believers in a marketplace:

” ‘Where is God gone?’ he called out.  ‘I mean to tell you!  We have killed him, – you and I!  We are all his murderers…What did we do when we loosened this earth from its sun?…God is dead!  God remains dead!  And we have killed him!  How shall we console ourselves, the most murderous of all murderers?…’  Here the madman was silent and looked again at his hearers; they also were silent and looked at him in surprise.  At last he threw his lantern on the ground, so that it broke in pieces and was extinguished.  ‘I come too early,’ he then said ‘I am not yet at the right time.  This prodigious event is still on its way, and is traveling, – it has not yet reached men’s ears.’ “

Although Nietzsche will be looked at in more detail in part 8 of this post-series, it’s worth briefly mentioning what he was pointing out with this passage.  Nietzsche was highlighting the fact that at this point in our history, after the Enlightenment, we had made a number of scientific discoveries about our world and our place in it, and this had made the concept of God somewhat superfluous.  As a result of the drastic rise in secularization, the proportion of people who didn’t believe in God rose substantially; and with Christianity losing the theistic foundation it relied upon, all of the moral systems, traditions, and sources of ultimate meaning in one’s life derived from Christianity had to be re-established, if not abandoned altogether.

But many people, including a number of atheists, didn’t fully appreciate this fact and simply took for granted much of the cultural constructs that arose from Christianity.  Nietzsche had a feeling that most people weren’t intellectually fit for the task of re-grounding their values and finding meaning in a Godless world, and so he feared that nihilism would begin to dominate Western civilization.  Kierkegaard had similar fears, but as a man who refused to shake his own belief in God and in Christianity, he felt that it was his imperative to try and revive the Christian faith that he saw was in severe decline.

1. The Man Himself

“The ultimate source of Kierkegaard’s power over us today lies neither in his own intelligence nor in his battle against the imperialism of intelligence-to use the formula with which we began-but in the religious and human passion of the man himself, from which the intelligence takes fire and acquires all its meaning.”

Aside from the fact that Kierkegaard’s own intelligence was primarily shaped and directed by his passion for love and for God, I tend to believe that intelligence is in some sense always subservient to the aims of one’s desires.  And if desires themselves are derivative of, or at least intimately connected to, feeling and passion, then a person’s intelligence is going to be heavily guided by subjectivity; not only in terms of the basic drives that attempt to lead us to a feeling of homeostasis but also in terms of the psychological contentment resulting from that which gives our lives meaning and purpose.

For Kierkegaard, the only way to truly discover the meaning of one’s own life, or to truly know oneself, is to endure the painful burden of choice, a burden that eventually leads to the elimination of a number of possibilities and to the creation of a number of actualities.  One must make certain significant choices in one’s life such that, once those choices are made, they cannot be unmade; and every time this is done, one is increasingly committing oneself to being (or to becoming) a very specific self.  And Kierkegaard thought that renewing one’s choices daily, in the sense of freely maintaining a commitment to them for the rest of one’s life (rather than simply forgetting that those choices were made and moving on to the next one), was the only way to give those choices any meaning, let alone any prolonged meaning.  Barrett mentions the relation of choice, possibility, and reality as it was experienced by Kierkegaard:

“The man who has chosen irrevocably, whose choice has once and for all sundered him from a certain possibility for himself and his life, is thereby thrown back on the reality of that self in all its mortality and finitude.  He is no longer a spectator of himself as a mere possibility; he is that self in its reality.”

As can be seen with Kierkegaard’s own life, some of these choices (such as breaking off his engagement with Regine Olsen) can be hard to live with, but the pain and suffering experienced in our lives is still ours; it is still a part of who we are as an individual and further affirms the reality of our choices, often adding an inner depth to our lives that we may otherwise never attain.

“The cosmic rationalism of Hegel would have told him his loss was not a real loss but only the appearance of loss, but this would have been an abominable insult to his suffering.”

I have to agree with Kierkegaard here that the felt experience one has is as real as anything ever could be.  To say otherwise, that is, to negate the reality of this or that felt experience is to deny the reality of any felt experience whatsoever; for they all precipitate from the same subjective currency of our own individual consciousness.  We can certainly distinguish between subjective reality and objective reality, for example, by evaluating which aspects of our experience can and cannot be verified through a third-party or through some kind of external instrumentation and measurement.  But this distinction only helps us in terms of fine-tuning our ability to make successful predictions about our world by better understanding its causal structure; it does not help us determine what is real and what is not real in the broadest sense of the term.  Reality is quite simply what we experience; nothing more and nothing less.

2. Socrates and Hegel; Existence and Reason

Barrett gives us an apt comparison between Kierkegaard and Socrates:

“As the ancient Socrates played the gadfly for his fellow Athenians stinging them into awareness of their own ignorance, so Kierkegaard would find his task, he told himself, in raising difficulties for the easy conscience of an age that was smug in the conviction of its own material progress and intellectual enlightenment.”

And both philosophers certainly made a lasting impact with their use of the Socratic method; Socrates having first promoted this method of teaching and argumentation, and Kierkegaard making heavy use of it in his first published work Either/Or, and in other works.

“He could teach only by example, and what Kierkegaard learned from the example of Socrates became fundamental for his own thinking: namely, that existence and a theory about existence are not one and the same, any more than a printed menu is as effective a form of nourishment as an actual meal.  More than that: the possession of a theory about existence may intoxicate the possessor to such a degree that he forgets the need of existence altogether.”

And this problem, of not being able to see the forest for the trees, plagues many people who get distracted by the details uncovered while they’re simply trying to better understand the world.  But living a life of contentment and deeply understanding life in general are two very different things, and you can’t invest more in one without retracting time and effort from the other.  Having said that, there’s still some degree of overlap between these two goals; contentment isn’t likely to be maximally realized without some degree of in-depth understanding of the life you’re living and the experiences you’ve had.  As Socrates famously put it: “the unexamined life is not worth living.”  You just don’t want to sacrifice too many of the experiences just to formulate a theory about them.

How does reason, rationality, and thought relate to any theories that address what is actually real?  Well, the belief in a rational cosmos, such as that held by Hegel, can severely restrict one’s ontology when it comes to the concept of existence itself:

“When Hegel says, “The Real is rational, and the rational is real,” we might at first think that only a German idealist with his head in the clouds, forgetful of our earthly existence, could so far forget all the discords, gaps, and imperfections in our ordinary experience.  But the belief in a completely rational cosmos lies behind the Western philosophic tradition; at the very dawn of this tradition Parmenides stated it in his famous verse, “It is the same thing that can be thought and that can be.”  What cannot be thought, Parmenides held, cannot be real.  If existence cannot be thought, but only lived, then reason has no other recourse than to leave existence out of its picture of reality.”

I think what Parmenides said would have been more accurate (if not correct) had he phrased it just a little differently, replacing “thought” with “consciously experienced”: “It is the same thing that can be consciously experienced and that can be.”  Rephrasing it as such allows for a much more inclusive conception of what is considered real, since anything within the possibility of conscious experience is given an equal claim to being real in some way or another; which means that all the irrational thoughts or feelings, and the gaps and lack of coherency in some of our experiences, are taken into account as a part of reality.  What else could we mean by saying that existence must be lived, other than that existence must be consciously experienced?  If we mean to include the actions we take in the world, and thus the bodily behavior we enact, this is still included in our conscious experience and in the experiences of other conscious agents.  So once the entire experience of every conscious agent is taken into account, what else could there possibly be, aside from this, that we can truly say is necessary in order for existence to be lived?

It may be that existence and conscious experience are not identical, even if they’re always coincident with one another; but this is in part contingent on whether or not all matter and energy in the universe is conscious or not.  If consciousness is actually universal, then perhaps existence and some kind of experientiality or other (whether primitive or highly complex) are in fact identical with one another.  And if not, then it is still the existence of those entities which are conscious that should dominate our consideration since that is the only kind of existence that can actually be lived at all.

If we reduce our window of consideration from all conscious experience down to merely the subset of reason or rationality within that experience, then of course there’s going to be a problem with structuring theories about the world within such a limitation; for anything that doesn’t fit within its purview simply disappears:

“As the French scientist and philosopher Emile Meyerson says, reason has only one means of accounting for what does not come from itself, and that is to reduce it to nothingness…The process is still going on today, in somewhat more subtle fashion, under the names of science and Positivism, and without invoking the blessing of Hegel at all.”

Although, to be charitable to science, we ought to consider the upsurge of interest and development in psychology and cognitive science in the decades since Barrett’s book was written; for consciousness and the many aspects of our psyche and our experience have taken on a much more important role in terms of what we value in science and the kinds of phenomena we’re trying so hard to understand better.  If psychology and the cognitive and neurosciences take the entire goings-on of our brain and overall subjective experience as their subject matter, and if they attempt to account for all the information processed therein, then science should no longer be considered cut off from, or exclusive to, that which lies outside of reason, rationality, or logic.

We mustn’t confuse or conflate reason with science even if science includes the use of reason, for science can investigate the unreasonable; it can provide us with ways of making more and more successful predictions about the causal structure of our experience even as they relate to emotions, intuition, and altered or transcendent states of consciousness like those stemming from meditation, religious experiences or psycho-pharmacological substances.  And scientific fields like moral and positive psychology are also better informing us of what kinds of lifestyles, behaviors, character traits and virtues lead to maximal flourishing and to the most fulfilling lives.  So one could even say that science has been serving as a kind of bridge between reason and existence; between rationality and the other aspects of our psyche that make life worth living.

Going back to the relation between reason and existence, Barrett mentions the conceptual closure of the former to the latter as argued by Kant:

“Kant declared, in effect, that existence can never be conceived by reason-though the conclusions he drew from this fact were very different from Kierkegaard’s.  “Being”, says Kant, “is evidently not a real predicate, or concept of something that can be added to the concept of a thing.”  That is, if I think of a thing, and then think of that thing as existing, my second concept does not add any determinate characteristic to the first…So far as thinking is concerned, there is no definite note or characteristic by which, in a concept, I can represent existence as such.”

And yet, we somehow manage to be able to distinguish between the world of imaginary objects and that of non-imaginary objects (at least most of the time); and we do this using the same physical means of perception in our brain.  I think it’s true to say that both imaginary and non-imaginary objects exist in some sense; for both exist physically as representations in our brains such that we can know them at all, even if they differ in terms of whether the representations are likely to be shared by others (i.e. how they map onto what we might call our external reality).

If I conceive of an apple sitting on a table in front of me, and then I conceive of an apple sitting on a table in front of me that you would also agree is in fact sitting on the table in front of me, then I’ve distinguished conceptually between an imaginary apple and one that exists in our external reality.  And since I can’t ever be certain whether or not I’m hallucinating that there’s an actual apple on the table in front of me (or any other aspect of my experienced existence), I must accept that the common thread of existence, in terms of what it really means for something to exist or not, is entirely grounded on its relation to (my own) conscious experience.  It is entirely grounded on our individual perception; on the way our own brains make predictions about the causes of our sensory input and so forth.

“If existence cannot be represented in a concept, he says (Kierkegaard), it is not because it is too general, remote, and tenuous a thing to be conceived of but rather because it is too dense, concrete, and rich.  I am; and this fact that I exist is so compelling and enveloping a reality that it cannot be reproduced thinly in any of my mental concepts, though it is clearly the life-and-death fact without which all my concepts would be void.”

I actually think Kierkegaard was closer to the mark than Kant, for he claimed that it was not so much that reason reduces existence to nothingness, but rather that existence is so tangible, rich, and complex that reason can’t fully encompass it.  This makes sense insofar as reason operates through reduction, abstraction, and the dissection of a holistic experience into parts that relate to one another in certain ways in order to make sense of that experience.  If the holistic experience is needed to fully appreciate existence, then reason alone isn’t going to be up to the task.  But reason also seems to unify our experiences, and if this unification presupposes existence in order to make sense of that experience, then we can’t fully appreciate existence without reason either.

3. Aesthetic, Ethical, Religious

Kierkegaard lays out three primary stages of living in his philosophy: namely, the aesthetic, the ethical, and the religious.  While there are different ways to interpret this “stage theory”, the most common interpretation treats these stages like a set of concentric circles or spheres where the aesthetic is in the very center, the ethical contains the aesthetic, and the religious subsumes both the ethical and the aesthetic.  Thus, for Kierkegaard, the religious is the most important stage that, in effect, supersedes the others; even though the religious doesn’t eliminate the other spheres, since a religious person is still capable of being ethical or having aesthetic enjoyment, and since an ethical person is still capable of aesthetic enjoyment, etc.

In a previous post where I analyzed Kierkegaard’s Fear and Trembling, I summarized these three stages as follows:

“…The aesthetic life is that of sensuous or felt experience, infinite potentiality through imagination, hiddenness or privacy, and an overarching egotism focused on the individual.  The ethical life supersedes or transcends this aesthetic way of life by relating one to “the universal”, that is, to the common good of all people, to social contracts, and to the betterment of others over oneself.  The ethical life, according to Kierkegaard, also consists of public disclosure or transparency.  Finally, the religious life supersedes the ethical (and thus also supersedes the aesthetic) but shares some characteristics of both the aesthetic and the ethical.

The religious, like the aesthetic, operates on the level of the individual, but with the added component of the individual having a direct relation to God.  And just like the ethical, the religious appeals to a conception of good and evil behavior, but God is the arbiter in this way of life rather than human beings or their nature.  Thus the sphere of ethics that Abraham might normally commit himself to in other cases is thought to be superseded by the religious sphere, the sphere of faith.  Within this sphere of faith, Abraham assumes that anything that God commands is Abraham’s absolute duty to uphold, and he also has faith that this will lead to the best ends…”

While the aesthete generally tends to live life in search of pleasure, always fleeing from boredom in an ever-increasing fit of desperation to keep finding moments of pleasure despite the futility of such an unsustainable goal, Kierkegaard also counts among the aesthetes the intellectual who tries to stand outside of life: detached from it, viewing it as a spectator rather than a participant, and categorizing each experience as either interesting or boring, and nothing more.  And it is this speculative detachment from life, long an underpinning of Western thought, that Kierkegaard objected to; an objection that would be maintained within the rest of the existential philosophy that was soon to come.

If the aesthetic is unsustainable, or if someone (such as Kierkegaard) has given up such a life of pleasure, then all that remain are the other spheres of life.  For Kierkegaard, the ethical life, at least on its own, seemed insufficient for making up what was lost in the aesthetic:

“For a really passionate temperament that has renounced the life of pleasure, the consolations of the ethical are a warmed-over substitute at best.  Why burden ourselves with conscience and responsibility when we are going to die, and that will be the end of it?  Kierkegaard would have approved of the feeling behind Nietzsche’s saying, ‘God is dead, everything is permitted.’ ” 

One conclusion we might arrive at, after considering Kierkegaard’s attitude toward the ethical life, is that he may have made a mistake when he renounced the life of pleasure entirely.  Another conclusion we might draw is that Kierkegaard’s conception of the ethical is incomplete or misguided.  If he honestly asks, in the hypothetical absence of God or of any option for a religious life, why we ought to burden ourselves with conscience and responsibility, this seems to betray a fundamental flaw in his moral and ethical reasoning.  Likewise for Nietzsche, and Dostoyevsky for that matter, who both echoed similar sentiments: “If God does not exist, then everything is permissible.”

As I’ve argued elsewhere (here, and here), the best form any moral theory can take (such that it’s also sufficiently motivating to follow) is going to be centered around the individual, and it will be grounded on the hypothetical imperative that maximizes their personal satisfaction and life fulfillment.  If some behaviors serve toward best achieving this goal and other behaviors detract from it, as a result of our human psychology and sociology, and thus as a result of the finite range of conditions that we thrive within as human beings, then regardless of whether a God exists or not, everything is most certainly not permitted.  Nietzsche was right, however, when he claimed that the “death of God” (so to speak) would require a means of re-establishing a ground for our morals and values; but this doesn’t mean that all possible grounds for doing so have an equal claim to being true, nor that they will all be equally efficacious in achieving one’s primary moral objective.

Part of the problem with Kierkegaard’s moral theorizing is his adoption of Kant’s universal maxim for ethical duty: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”  The problem is that this maxim or “categorical imperative” (the way it’s generally interpreted at least) doesn’t take individual psychological and circumstantial idiosyncrasies into account.  One can certainly insert these idiosyncrasies into Kant’s formulation, but that’s not how Kant intended it to be used nor how Kierkegaard interpreted it.  And yet, Kierkegaard seems to smuggle in such exceptions anyway, by having incorporated it into his conception of the religious way of life:

“An ethical rule, he says, expresses itself as a universal: all men under such-and-such circumstances ought to do such and such.  But the religious personality may be called upon to do something that goes against the universal norm.”

And if something goes against a universal norm, and one feels that they ought to do it anyway (above all else), then they are implicitly denying that a complete theory of ethics involves exclusive universality (a categorical imperative); rather, it must require taking some kind of individual exceptions into account.  Kierkegaard seems to be prioritizing the individual in all of his philosophy, and yet he didn’t think that a theory of ethics could plausibly account for such prioritization.

“The validity of this break with the ethical is guaranteed, if it ever is, by only one principle, which is central to Kierkegaard’s existential philosophy as well as to his Christian faith-the principle, namely, that the individual is higher than the universal. (This means also that the individual is always of higher value than the collective).”

I completely agree with Kierkegaard that the individual is always of higher value than the collective; and this can be shown by considering any utilitarian moral theory that fails to take into account the psychological states of the individual actors and moral agents carrying out some plan to maximize happiness or utility for the greatest number of people.  If the imperative comes down to my having to do something such that I can no longer live with myself afterward, then the moral theory has failed miserably.  Instead, I should feel that I did the right thing in any moral dilemma (when thinking clearly and while maximally informed of the facts), even when every available option is less than optimal.  We have to be able to live with our own actions, and ultimately with the kind of person that those actions have made us become.  The concrete self that we alone have conscious access to takes on a priority over any abstract conception of other selves or any kind of universality that references only a part of our being.

“Where then as an abstract rule it commands something that goes against my deepest self (but it has to be my deepest self, and herein the fear and trembling of the choice reside), then I feel compelled out of conscience-a religious conscience superior to the ethical-to transcend that rule.  I am compelled to make an exception because I myself am an exception; that is, a concrete being whose existence can never be completely subsumed under any universal or even system of universals.”

Although I agree with Kierkegaard here for the most part, in terms of our giving the utmost importance to the individual self, I don’t think that a religious conscience is something one can distinguish from their conscience generally; rather, it would just be one’s conscience, albeit one altered by a set of religious beliefs, which may end up changing how one’s conscience operates but doesn’t change the fact that it is still their conscience nevertheless.

For example, if two people differ with respect to their belief in souls, where one has a religious belief that a fertilized egg has a soul and the other only believes that people with a capacity for consciousness or a personality have souls (or have no soul at all), then that difference in belief may affect their consciences differently if both parties were to, for example, donate a fertilized egg to a group of stem-cell researchers.  The former may feel guilty afterward (knowing that the egg they believe to be inhabited by a soul may be destroyed), whereas the latter may feel really good about themselves for aiding important life-saving medical research.  This is why one can only make a proper moral assessment when taking seriously the distinction between what is truly factual about the world (what is supported by evidence) and what is a cherished religious or otherwise supernatural belief.  If one’s conscience is primarily operating on true, evidenced facts about the world (along with their intuition), then their conscience and what some may call their religious conscience should be referring to the exact same thing.

Despite the fact that viable moral theories should be centered around the individual rather than the collective, this doesn’t mean that we shouldn’t try to come up with sets of universal moral rules that ought to be followed most of the time.  For example, a rule like “don’t steal from others” is a good rule to follow most of the time, and if everyone in a society strives to obey such a rule, that society will be better off overall as a result; but if my child is starving and nobody is willing to help in any way, then my only option may be to steal a loaf of bread in order to prevent the greater moral crime of letting a child starve.

This example should highlight a fundamental oversight regarding Kant’s categorical imperative as well: you can build in any number of exceptions within a universal law to make it account for personal circumstances and even psychological idiosyncrasies, thus eliminating the kind of universality that Kant sought to maintain.  For example, if I willed it to be a universal law that “you shouldn’t take your own life,” I could make this better by adding an exception to it so that the law becomes “you shouldn’t take your own life unless it is to save the life of another,” or even more individually tailored to “you shouldn’t take your own life unless not doing so will cause you to suffer immensely or inhibit your overall life satisfaction and life fulfillment.”  If someone has a unique life history or psychological predisposition whereby taking their own life is the only option available to them lest they suffer needlessly, then they ought to take their own life regardless of whatever categorical imperative they are striving to uphold.

There is, however, still an important reason for adopting Kant’s universal maxim (at least generally speaking): the fact that universal laws like the Golden Rule provide people with an easy heuristic to quickly ascertain the likely moral status of any particular action or behavior.  If we try to tailor in all sorts of idiosyncratic exceptions (with respect to ourselves as well as others), it makes the rules much more complicated and harder to remember; instead, one should use the universal rules most of the time, and only upon seeing a legitimate reason to question a rule should one consider whether an exception ought to be made.

Another important point regarding ethical behavior or moral dilemmas is the factor of uncertainty in our knowledge of a situation:

“But even the most ordinary people are required from time to time to make decisions crucial for their own lives, and in such crises they know something of the “suspension of the ethical” of which Kierkegaard writes.  For the choice in such human situations is almost never between a good and an evil, where both are plainly as such and the choice therefore made in all the certitude of reason; rather it is between rival goods, where one is bound to do some evil either way, and where the ultimate outcome and even-most of all-our own motives are unclear to us.  The terror of confronting oneself in such a situation is so great that most people panic and try to take cover under any universal rule that will apply, if only it will save them from the task of choosing themselves.”

And rather than making these tough decisions themselves, a lot of people would prefer that others tell them what counts as right and wrong behavior, getting these answers from a religion, from one’s parents, from a community, etc.  But this negates the intimate consideration of the individual, where each of these difficult choices will lead them down a particular path in life and shape who they become as a person.  The same fear drives a lot of people away from critical thinking, where many would prefer to be told what’s true and false rather than think about these things for themselves, and so they gravitate toward institutions that claim to “have all the answers” (even if many who fear critical thinking wouldn’t explicitly say so, since this fear is primarily a manifestation of the unconscious).  Kierkegaard highly valued these difficult moments of choice and thought they were fundamental to being a true self living an authentic life.

But despite the fact that universal ethical rules are convenient and fairly effective in most cases, they are still far from perfect, and so one will sometimes find oneself in a situation where one simply doesn’t know which rule to use or what to do; one will just have to make the decision that seems easiest to live with:

“Life seems to have intended it this way, for no moral blueprint has ever been drawn up that covers all the situations for us beforehand so that we can be absolutely certain under which rule the situation comes.  Such is the concreteness of existence that a situation may come under several rules at once, forcing us to choose outside any rule, and from inside ourselves…Most people, of course, do not want to recognize that in certain crises they are being brought face to face with the religious center of their existence.”

Now I wouldn’t call this the religious center of one’s existence but rather the moral center of one’s existence; it is simply the fact that we’re trying to distinguish between universal moral prescriptions (which Kierkegaard labels “the ethical”) and those that are non-universal or dynamic (which Kierkegaard labels “the religious”).  In any case, one can call this whatever one wishes, as long as one understands the distinction being made here, which is still an important one worth making.  And along with acknowledging this distinction between universal and individual moral consideration, it’s also important that we engage with the world in a way where our individual emotional “palette” is faced head on rather than denied or suppressed by society or its universal conventions.

Barrett mentions how the denial of our true emotions (brought about by modernity) has inhibited our connection to the transcendent, or at least, inhibited our appreciation or respect for the causal forces that we find to be greater than ourselves:

“Modern man is farther from the truth of his own emotions than the primitive.  When we banish the shudder of fear, the rising of the hair of the flesh in dread, or the shiver of awe, we shall have lost the emotion of the holy altogether.”

But beyond acknowledging our emotions, Kierkegaard has something to say about how we choose to cope with them, most especially that of anxiety and despair:

“We are all in despair, consciously or unconsciously, according to Kierkegaard, and every means we have of coping with this despair, short of religion, is either unsuccessful or demoniacal.”

And here is another place where I have to part ways with Kierkegaard despite his brilliance in examining the human condition and the many complicated aspects of our psychology; for relying on religion (or more specifically, relying on religious belief) to cope with despair is but another distraction from the truth of our own existence.  It is an inauthentic way of living life, since one is avoiding the way the world really is.  I think it’s far more effective and authentic for people to work on changing their attitude toward life and the circumstances they find themselves in without sacrificing a reliable epistemology in the process.  We need to provide ourselves with avenues for emotional expression, work to increase our mindfulness and positivity, and constantly strive to become better versions of ourselves by finding things we’re passionate about and by living a philosophically examined life.  This doesn’t mean that we have to rid ourselves of the rituals, fellowship, and meditative benefits that religion offers; but rather that we should merely dispense with the supernatural component and the dogma and irrationality that’s typically attached to and promoted by religion.

4. Subjective and Objective Truth

Kierkegaard ties his conception of the religious life to the meaning of truth itself, and he distinguishes this mode of living from the concept of religious belief.  While one can assimilate a number of religious beliefs just as one can do with non-religious beliefs, for Kierkegaard, religion itself is something else entirely:

“If the religious level of existence is understood as a stage upon life’s way, then quite clearly the truth that religion is concerned with is not at all the same as the objective truth of a creed or belief.  Religion is not a system of intellectual propositions to which the believer assents because he knows it to be true, as a system of geometry is true; existentially, for the individual himself, religion means in the end simply to be religious.  In order to make clear what it means to be religious, Kierkegaard has to reopen the whole question of the meaning of truth.”

And his distinction between objective and subjective truth is paramount to understanding this difference.  One could say perhaps that by subjective truth he is referring to a truth that must be embodied and have an intimate relation to the individual:

“But the truth of religion is not at all like (objective truth): it is a truth that must penetrate my own personal existence, or it is nothing; and I must struggle to renew it in my life every day…Strictly speaking, subjective truth is not a truth that I have, but a truth that I am.”

The struggle for renewal goes back to Kierkegaard’s conception of how the meaning a person finds in their life is in part dependent on some kind of personal commitment; it relies on making certain choices that one remakes day after day, keeping these personal choices in focus so as to not lose sight of the path we’re carving out for ourselves.  And so it seems that he views subjective truth as intimately connected to the meaning we give our lives, and to the kind of person that we are now.

Perhaps another way we can look at this conception, especially as it differs from objective truth, is to examine the relation between language, logic, and conscious reasoning on the one hand, and intuition, emotion, and the unconscious mind on the other.  Objective truth is generally communicable, it makes explicit predictions about the causal structure of reality, and it involves a way of unifying our experiences into some coherent ensemble; but subjective truth involves felt experience, emotional attachment, and a more automated sense of familiarity and relation between ourselves and the world.  And both of these facets are important for our ability to navigate the world effectively while also feeling that we’re psychologically whole or complete.

5. The Attack Upon Christendom

Kierkegaard points out an important shift in modern society that Barrett mentioned early on in this book; the move toward mass society, which has effectively eaten away at our individuality:

“The chief movement of modernity, Kierkegaard holds, is a drift toward mass society, which means the death of the individual as life becomes ever more collectivized and externalized.  The social thinking of the present age is determined, he says, by what might be called the Law of Large Numbers: it does not matter what quality each individual has, so long as we have enough individuals to add up to a large number-that is, to a crowd or mass.”

And false metrics of success like economic growth and population growth have definitely detracted from the quality of life each of us is capable of achieving.  And because of our inclinations as a social species, we are (perhaps unconsciously) drawn toward the potential survival benefits brought about by joining progressively larger and larger groups.  In terms of industrialization, we’ve primarily been using technology to support more people on the globe and to increase the output of each individual worker (to the benefit of the wealthiest), rather than to substantially reduce the number of hours worked per week or to eliminate poverty outright.  This has got to be the biggest failure of the industrial revolution and of capitalism (when not regulated properly), and one that’s so often taken for granted.

Because of the greed that’s consumed the moral compass of those at the top of our sociopolitical hierarchy, our lives have been funneled into a military-industrial complex that will only release us from servitude when the rich eventually stop asking for more and more of the fruits of our labor.  And by the push of marketing and social pressure, we’re tricked into wanting to maintain society the way it is; to continue to buy more consumable garbage that we don’t really need and to be complacent with such a lifestyle.  The massive externalization of our psyche has led to a kind of, as Barrett put it earlier, spiritual poverty.  And Kierkegaard was well aware of this psychological degradation brought on by modernity’s unchecked collectivization.

Both Kierkegaard and Nietzsche also saw a problem with how modernity had effectively killed God; though Kierkegaard and Nietzsche differed in their attitudes toward organized religion, Christianity in particular:

“The Grand Inquisitor, the Pope of Popes, relieves men of the burden of being Christian, but at the same time leaves them the peace of believing they are Christians…Nietzsche, the passionate and religious atheist, insisted on the necessity of a religious institution, the Church, to keep the sheep in peace, thus putting himself at the opposite extreme from Kierkegaard; Dostoevski in his story of the Grand Inquisitor may be said to embrace dialectically the two extremes of Kierkegaard and Nietzsche.  The truth lies in the eternal tension between Christ and the Grand Inquisitor.  Without Christ the institution of religion is empty and evil, but without the institution as a means of mitigating it the agony in the desert of selfhood is not viable for most men.”

Modernity had helped produce organized religion, thus diminishing the personal, individualistic dimension of spirituality which Kierkegaard saw as indispensable; but modernity also facilitated the “death of God” making even organized religion increasingly difficult to adhere to since the underlying theistic foundation was destroyed for many.  Nietzsche realized the benefits of organized religion since so many people are unable to think critically for themselves, are unable to find an effective moral framework that isn’t grounded on religion, and are unable to find meaning or stability in their lives that isn’t grounded on belief in God or in some religion or other.  In short, most people aren’t able to deal with the burdens realized within existentialism.

Because much of European culture and so many of its institutions had been built around Christianity, the religion became much more collectivized and less personal, but it also became more vulnerable to being uprooted by new ideas that drew their power from the very same collective.  Thus, it was the mass externalization of religion that made religion that much more accessible to the externalization of reason, as exemplified by scientific progress and technology.  Reason and religion could no longer co-exist in the same way they once had, because this externalization drastically reduced our ability to compartmentalize the two.  And it made it that much harder to try to reestablish a more personal form of Christianity, as Kierkegaard had been longing to do.

Reason also led us to the realization of our own finitude, and once this view was taken more seriously, it created yet another hurdle for the masses to maintain their religiosity; for once death is seen as inevitable, and immortality accepted as an impossibility, one of the most important uses for God becomes null and void:

“The question of death is thus central to the whole of religious thought, is that to which everything else in the religious striving is an accessory: ‘If there is no immortality, what use is God?’ “

The fear of death is a powerful motivator for adopting any number of beliefs that might help to manage the cognitive dissonance that results from it, and the desire for eternal happiness makes death that much more frightening.  But once death is truly accepted, many of the other beliefs that were meant to provide comfort or consolation are no longer necessary or meaningful.  Kierkegaard didn’t think we could cope with the despair of an inevitable death without a religion that promised some way to overcome it and to transcend our life and existence in this world, and he thought Christianity was the best religion to accomplish this goal.

It is on this point in particular, how he fails to properly deal with death, that I find Kierkegaard to lack an important strength as an existentialist; as it seems that in order to live an authentic life, a person must accept death, that is, they must accept the finitude of our individual human existence.  As Heidegger would later go on to say, buried deep within the idea of death as the possibility of impossibility is a basis for affirming one’s own life, through the acceptance of our mortality and the uncertainty that surrounds it.

Click here for the next post in this series, part 8 on Nietzsche’s philosophy.

Mind, Body, and the Soul: The Quest for an Immaterial Identity

There’s little if any doubt that the brain (the human brain in particular) is the most complex entity or system that we’ve ever encountered in the known universe, and thus it is not surprising that it has allowed humans to reach the top of the food chain and to manipulate our environment more than any other creature on Earth.  Not only has it provided humans with the necessary means for surviving countless environmental pressures, effectively evolving as a sort of anchor and catalyst for our continued natural selection over time (through learning, language, adaptive technology, etc.), but it has also allowed humans to become aware of themselves, aware of their own consciousness, and aware of their own brains in numerous other ways.  The brain appears to be the first evolved feature of an organism capable of mapping the entire organism (including its interaction with the external environment), and it may even be the case that consciousness later evolved as a result of the brain making maps of itself.  Even beyond these capabilities, the human brain has also been able to map itself in terms of perceptually acquired patterns related to its own activity (i.e. when we study and learn about how our brains work).

It isn’t at all surprising when people marvel over the complexity, beauty, and even seemingly surreal qualities of the brain as it produces the qualia of our subjective experience, including all of our sensations, emotions, and the resulting feelings that ensue.  Some of our human attributes are so seemingly remarkable that many people have gone so far as to say that at least some of these attributes are either supernatural, supernaturally endowed, and/or forever exclusive to humans.  For example, some religious people claim that humans alone have some kind of immaterial soul that exists outside of our experiential reality.  Some also believe that humans alone possess free will, or are conscious in some way forever exclusive to humans (some have even argued that consciousness in general is an exclusively human trait), along with a host of other (perhaps anthropocentric) “human only” attributes.  In the interest of philosophical exploration, I’d like to consider and evaluate some of these claims about “exclusively human” attributes.  In particular, I’d like to focus on the non-falsifiable claim of having a soul, with the aid of reason and a couple of thought experiments, although these thought experiments may also shed some light on other purported “exclusively human” attributes (e.g. free will, consciousness, etc.).  For the purposes of simplicity in these thought experiments, I may periodically refer to many or all purported “humanly exclusive” attributes as simply “H”.  Let’s begin by briefly examining some of the common conceptions of a soul and how it is purported to relate to the physical world.

What is a Soul?

It seems that most people would define a soul to be some incorporeal entity or essence that serves as an immortal aspect or representation of an otherwise mortal/living being.  Furthermore, many people think that souls are possessed by human beings alone.  There are also people who ascribe souls to non-living entities (such as bodies of water, celestial bodies, wind, etc.), but regardless of these distinctions, for those who believe in souls, there seems to be something in common: souls appear to be non-physical entities correlated, linked, or somehow attached to a particular physical body or system, and are usually believed to give rise to consciousness, a “life force”, animism, or some power of agency.  Additionally, they are often believed to transcend material existence through their involvement in some form of an afterlife.  While it is true that souls and any claims about souls are unfalsifiable and thus excluded from any kind of empirical investigation, let’s examine some commonly held assumptions and claims about souls and see how they hold up to a more critical examination.

Creation or Correlation of Souls

Many religious people now claim that a person’s life begins at conception (a claim made only after science identified this specific stage of reproduction), and thus it would be reasonable to assume that if a person has a soul, that soul is effectively created at conception.  However, some also believe that all souls have co-existed for the same amount of time (perhaps since the dawn of our universe), and that souls are in some sense waiting to be linked to each physical person once they are conceived or come into existence.  Another way of expressing this latter idea is the belief that all souls have existed since some time long ago, but only after the reproductive conception of a person does that soul acquire a physical correlate or incarnation linked to it.  In any case, the presumed soul is believed to be correlated with a particular physical body (generally presumed to be a “living” body, if not a human body), and this living body has been defined by many to begin its life either at conception (i.e. fertilization), shortly thereafter as an embryo (i.e. once the fertilized egg/cell undergoes division at least once), or once it is considered a fetus (depending on the context for such a definition).  The easiest definition to use for the purposes of this discussion is that life begins at conception (i.e. fertilization).

For one, regardless of the definition chosen, it seems difficult to define exactly when the particular developmental stage in question is reached.  Conception could be defined to take place once the spermatozoon’s DNA enters the ovum, or perhaps not until some threshold has been reached in a particular step of the process afterward (e.g. some time after the two parental pronuclei have fused into a single diploid nucleus).  Either way, most proponents of the idea of a human soul seem to assume that a soul is created, or at least correlated (if created some time earlier), at the moment of, or not long after, fertilization.  At this point, the soul is believed to be correlated with, or representative of, the now “living” being (which is of course composed of physical materials).

At a most basic level, one could argue, if we knew exactly when a soul was created/correlated with a particular physical body (e.g. a fertilized egg), then by reversing the last step in the process that instigated the creation/correlation of the soul, we should be able to destroy/decorrelate the soul.  Also, if a soul was in fact correlated with an entire fertilized egg, then if we remove even one atom, molecule, etc., would that correlation change?  If not, then it would appear that the soul is not actually correlated with the entire fertilized egg, but rather it is correlated with some higher level aspect or property of it (whatever that may be).

Conservation & Identity of Souls

Assuming a soul is in fact created or correlated with a fertilized egg, what would happen in the case of chimerism, where two or more fertilized eggs fuse together in the early stages of embryonic development?  Would this developing individual have two souls?  By the definition or assumptions given earlier, if a soul is correlated with a fertilized egg in some way, and two fertilized eggs (each with their own soul) merge together, then this would indicate one of a few possibilities:

  • Two souls merged into one (or one was destroyed), which would demonstrate that the number of souls is not conserved (indicating that not all souls are eternal/immortal).
  • The two souls co-exist within that one individual, which would imply that not all individuals have the same number of souls (some have one, some may have more), and thus that souls don’t each have their own unique identity with a particular person.
  • After the merging of the fertilized eggs took place, one of the two souls detached from (became decorrelated with) its physical counterpart, and the remaining soul claimed both fertilized eggs, so to speak.

In the case of identical twins, triplets, etc., a fertilized egg eventually splits, and we are left with the opposite conundrum.  It would seem that we would be starting with one soul that eventually splits into two or more, and thus there would be another violation of the conservation of the number of souls.  Alternatively, if the number of souls is indeed conserved, an additional previously existing soul (if this were the case) could become correlated with the second fertilized egg produced.  Yet another possibility would be to say that the “twins-to-be” (i.e. the fertilized egg prior to splitting) has two souls to start with, and when the egg splits, the souls are segregated and each pre-destined twin is given their own.

The only way to avoid these implications would be to modify the assumption given earlier regarding when a soul is created or correlated.  It would have to be defined such that a soul is created or correlated with a physical body only at some point after an egg is fertilized, once it is no longer possible for it to fuse with another fertilized egg and once it can no longer split into monozygotic multiples (i.e. twins, triplets, etc.).  If this is true, then one could no longer say that a fertilized egg necessarily has a soul, for that wouldn’t technically be the case until some time afterward, when chimerism or monozygotic multiples were no longer possible.

If people believe in non-physical entities that can’t be seen or in any way extrospectively verified, it’s not much of a stretch to say that they can come up with a way to address these questions or reconcile these issues with yet more unfalsifiable claims.  Some of these might not even be issues for various believers, but I mention them only to point out the apparent arbitrariness or poorly defined aspects of many claims and assumptions regarding souls.  Now let’s look at a few thought experiments to further analyze the concept of a soul and purported “exclusively human” attributes (i.e. “H”) as mentioned in the introduction of this post.

Conservation and Identity of “H”

Thought Experiment # 1: Replace a Neuron With a Non-Biological Analog

What if one neuron in a person’s brain were replaced with a non-biological/artificial version, that is, what if some kind of silicon-based (or other non-carbon-based) analog were used in its place?  We are assuming that this replacement will accomplish the same vital function, that is, produce the same subjective experience and behavior.  This non-biologically-based neuronal analog might be powered by ATP (adenosine triphosphate) and respond to neurotransmitters with electro-chemical sensors, although it wouldn’t necessarily have to be constrained by the same power or signal-transmission media (or mechanisms) as long as it produced the same end result (i.e. the same subjective experience and behavior).  As long as the synthetic neuronal replacement accomplished the same ends, the attributes of the person (i.e. their identity, their beliefs, their actions, etc.) should be unaffected despite any of these changes to their hardware.

Regarding the soul, if souls do in fact exist and they are not physically connected to the body (although people claim that souls are somehow associated with a particular physical body), then it seems reasonable to assume that changing a part of the physical body should have no effect on an individual’s possession of that soul (or any “H” for that matter), especially if the important attributes of the individual, i.e., their beliefs, thoughts, memories, and subsequent actions, etc., were for all practical purposes (if not completely), the same as before.  Even if there were some changes in the important aspects of the individual, say, if there was a slight personality change after some level of brain surgery, could anyone reasonably argue that their presumed soul (or their “H”) was lost as a result?  If physical modifications of the body led to the loss of a soul (or of any elements of “H”), then there would be quite a large number of people (and an increasing number at that) who no longer have souls (or “H”) since many people indeed have had various prosthetic modifications used in or on their bodies (including brain and neural prosthetics) as well as other intervening mediation of body/brain processes (e.g. through medication, transplants, various levels of critical life support, etc.).

Those who think that changing the body’s hardware would somehow disconnect the presumed soul from that person’s body (or eliminate other elements of their “H”) should consider that this assumption is strongly challenged by the fact that many of the atoms in the human body are replaced (some of them several times over) throughout one’s lifetime anyway.  Despite this drastic biological “hardware” change, in which our material selves are constantly being replaced with new atoms from the food that we eat and the air that we breathe (among other sources), we still manage to maintain our memories and our identity, simply because the functional arrangements of the brain cells (i.e. neurons and glial cells) composed of those atoms are roughly preserved over time, and thus the information contained in such arrangements and/or their resulting processes is preserved over time.  We can analogize this important point by thinking about a computer that has had its hardware replaced, albeit in a way that matches or maintains its original physical state, and understand that as a result of this configuration preservation, it should also be able to maintain its original memory, programs, and normal functional operation.  One could certainly argue that the computer in question is technically no longer the “same” computer because it no longer has any of the original hardware.  However, the information regarding the computer’s physical state, that is, the specific configuration and states of parts that allow it to function exactly as it did before the hardware replacement, is preserved.  Thus, for all practical purposes in terms of the identity of that computer, it remained the same regardless of the complete hardware change.
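The computer analogy above can be made concrete with a small sketch.  This is a toy illustration only (the component names and the hashing scheme are my own invention, not anything from the text): a machine’s “identity” is taken to be a fingerprint of the ordered states of its components, so replacing every physical part while preserving each part’s state leaves that fingerprint, and hence the functional identity, unchanged.

```python
import hashlib

# Toy model of the computer analogy: "identity" is a fingerprint of the
# ordered component states, not of the physical parts themselves.
def configuration_fingerprint(components):
    """Hash the ordered component states, ignoring serial numbers."""
    joined = "|".join(c["state"] for c in components)
    return hashlib.sha256(joined.encode()).hexdigest()

original = [{"serial": i, "state": f"config-{i}"} for i in range(8)]
before = configuration_fingerprint(original)

# Replace every part with brand-new "hardware" holding the same state.
replaced = [{"serial": 1000 + i, "state": c["state"]}
            for i, c in enumerate(original)]
after = configuration_fingerprint(replaced)

assert before == after  # the configuration (the "identity") is preserved
assert all(o["serial"] != r["serial"]
           for o, r in zip(original, replaced))  # yet no original part remains
```

The design choice here mirrors the paragraph’s point: the fingerprint deliberately ignores which physical part carries each state, so complete hardware turnover is invisible at the level of identity.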

This is an important point to consider for those who think that replacing the hardware of the brain (even if limited to a biologically sustained replacement) is either theoretically impossible, or that it would destroy one’s ability to be conscious, to maintain their identity, to maintain their presumed soul, or any presumed element of “H”.  The body naturally performs these hardware changes (through metabolism, respiration, excretion, etc.) all the time and thus the concept of changing hardware while maintaining the critical aspects of an individual is thoroughly demonstrated throughout one’s lifetime.  On top of this, the physical outer boundary that defines our bodies is also arbitrary in the sense that we exchange atoms between our outer surface and the environment around us (e.g. by shedding skin cells, or through friction, molecular desorption/adsorption/absorption, etc.).  The key idea to keep in mind is that these natural hardware changes imply that “we” are not defined specifically by our hardware or some physical boundary with a set number of atoms, but rather “we” are based on how our hardware is arranged/configured (allowing for some variation of configuration states within some finite acceptable range), and the subsequent processes and functions that result from such an arrangement as mediated by the laws of physics.

Is the type of hardware important?  It may be true that changing a human’s hardware to a non-biological version may never be able to accomplish exactly the same subjective experience and behavior that was possible with the biological hardware; however, we simply don’t know that this is the case.  It may be that both the type of hardware and the configuration are necessary for a body and brain to produce the same subjective experience and behavior.  However, the old adage “there’s more than one way to skin a cat” has been applicable to so many types of technologies and to the means used to accomplish any number of goals.  There are often several different hardware types and configurations that can be used to accomplish a particular task, even if, after changing the hardware, the configuration must also be changed to accomplish a comparable result.  The question becomes: which parts or aspects of the neural processes in the brain produce subjective experience and behavior?  If this becomes known, we should be able to learn how biologically-based hardware and its configuration work together to accomplish a subjective experience and behavior, and then also learn whether non-biologically-based hardware (perhaps with its own particular configuration) can accomplish the same task.  For the purposes of this thought experiment, let’s assume that we can swap out the hardware for a different type, even if, in order to preserve the same subjective experience and behavior, the configuration must be significantly different than it was with the original biologically-based hardware.

So, if we assume that we can replace a neuron with an efficacious artificial version, and still maintain our identity, our consciousness, any soul that might be present, or any element of “H” for that matter, then even if we replace two neurons with artificial versions, we should still have the same individual.  In fact, even if we replace every neuron, perhaps just one neuron at a time, eventually we would be replacing the entire brain with an artificial version, and yet still have the same individual.  This person would now have a completely non-biologically based “brain”.  In theory, their identity would be the same, and they would subjectively experience reality and their selves as usual.  Having gone this far, let’s assume that we replace the rest of the body with an artificial version.  Replacing the rest of the body, one part at a time, should be far less significant a change than replacing the brain, for the rest of the body is far less complex.
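The neuron-by-neuron replacement described above can be sketched as a toy Ship-of-Theseus loop.  This is purely illustrative: the “neurons” here are trivial threshold functions of my own invention, and the thought experiment’s central assumption (that a synthetic unit can be functionally identical to a biological one) is simply built in by construction.

```python
import random

# A "brain" as a list of neuron-like units, each a function from inputs to an
# output. By assumption, a synthetic replacement is functionally identical to
# the biological unit it replaces.
def make_biological_neuron(weights):
    return lambda inputs: sum(w * x for w, x in zip(weights, inputs)) > 0.5

def make_synthetic_neuron(weights):
    # Different "hardware" (in principle a different implementation entirely),
    # same input/output behavior.
    return lambda inputs: sum(w * x for w, x in zip(weights, inputs)) > 0.5

random.seed(0)
weight_sets = [[random.random() for _ in range(3)] for _ in range(10)]
brain = [make_biological_neuron(w) for w in weight_sets]
probe = [0.2, 0.9, 0.4]  # a fixed stimulus used to check behavior

baseline = [n(probe) for n in brain]

# Replace one neuron at a time; behavior is unchanged at every step.
for i, w in enumerate(weight_sets):
    brain[i] = make_synthetic_neuron(w)
    assert [n(probe) for n in brain] == baseline

# After the loop, the "brain" is fully synthetic yet behaves identically.
```

Because each step preserves the input/output behavior, there is no step at which the individual’s functional attributes change, which is exactly the pressure the thought experiment puts on the idea of a modification threshold for losing “H”.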

It may be true that the body serves as an integral homeostatic frame of reference necessary for establishing some kind of self-object basis of consciousness (e.g. Damasio’s Theory of Consciousness), but as long as our synthetic brain is sending/receiving the appropriate equivalent of sensory/motor information (i.e. through an interoceptive feedback loop among other requirements) from the new artificial body, the model or map of the artificial body’s internal state provided by the synthetic brain should be equivalent.  It should also be noted that the range of conditions necessary for homeostasis in one human body versus another is far narrower and less individualized than the differences found between the brains of two different people.  This supports the idea that the brain is in fact the most important aspect of our individuality, and thus replacing the rest of the body should be significantly easier to accomplish and also less critical a change.  After replacing the rest of the body, we would now have a completely artificial non-biological substrate for our modified “human being”, or what many people would refer to as a “robot”, or a system of “artificial intelligence” with motor capabilities.  This thought experiment seems to suggest at least one of several implications:

  • Some types of robots can possess “H” (e.g. a soul, consciousness, free will, etc.), and thus “H” is not uniquely human, nor forever exclusive to humans.
  • Humans lose some or all of their “H” after some threshold of modification has taken place (likely a modification of the brain).
  • “H”, as it is commonly defined at least, does not exist.

The first implication listed above would likely be roundly rejected by most people who believe in the existence of “H”, for several reasons: most people see robots as fundamentally different from living systems, they see “H” as applicable only to human beings, and they see a clear distinction between robots and human beings (although the claim that these distinctions exist has been theoretically challenged by this thought experiment).  The second implication sounds quite implausible (even if we assume that “H” exists), as it would seem to be impossible to define when exactly any elements of “H” were lost based on exceeding some seemingly arbitrary threshold of modification.  For example, would the loss of some element of “H” occur only after the last neuron was replaced with an artificial version?  If the loss of “H” did occur after some specific number of neurons were removed (or after the number of neurons that remained fell below some critical minimum quantity), then what if the last neuron removed (which caused this critical threshold to be met) was biologically preserved and later re-installed, thus effectively reversing the last neuronal replacement procedure?  Would the previously lost “H” then return?

Thought Experiment # 2: Replace a Non-Biological Neuronal Analog With a Real Neuron

We could look at this thought experiment (in terms of the second implication) yet another way by simply reversing its order.  For example, imagine that we made a robot from scratch that was identical to the robot eventually obtained in the aforementioned thought experiment, and that we then began to replace its original non-biologically-based neuronal equivalents with actual biologically-based neurons, perhaps even neurons that were each taken from a separate human brain (say, from one or several cadavers) and preserved for such a task.  Even after this, consider that we proceed to replace the rest of the robot’s “body”, again piecewise (say, with parts from one or several cadavers), until it was completely biologically-based, matching the human being we began with in the initial thought experiment.  Would or could this robot acquire “H” at some point, or be considered human?  It seems that there would be no biological reason to claim otherwise.

Does “H” Exist?  If So, What is “H”?

I’m well aware of how silly some of these hypothetical questions and considerations sound; however, I find it valuable to follow the reasoning all the way through in order to help illustrate the degree of plausibility of these particular implications, and the plausibility or validity of “H”.  In the case of the second implication given previously (that humans lose some or all of “H” after some threshold of modification), if there’s no way to define or know when “H” is lost (or gained), then nobody can ever claim with certainty that an individual has lost their “H”, and thus they would have to assume that all elements of “H” have never been lost (if they want to err on the side of what some may call ethical or moral caution).  By that rationale, one would find oneself forced to accept the first implication (some types of robots can possess “H”, and thus “H” isn’t unique to humans).  If anyone denies the first two implications, it seems that they are left with only the third option.  The third implication seems to be the most likely (that “H” as previously defined does not exist); however, it should be mentioned that even this third implication may be circumvented by realizing that it has an implicit loophole.  There is a possibility that some or all elements and/or aspects of “H” are not exactly what people assume them to be, and therefore “H” may exist in some other sense.  For example, what if we considered particular patterns themselves, i.e., the brain/neuronal configurations, patterns of brain waves, neuronal firing patterns, patterns of electro-chemical signals emanated throughout the body, etc., to be the “immaterial soul” of each individual?  We could regard these patterns as immaterial if the physical substrate that employs them is irrelevant, or by simply noting that patterns of physical material states are not physical materials in themselves.

This is analogous to the concept that the information contained in a book can be represented on paper, electronically, in multiple languages, etc., and is not reliant on a specific physical medium.  This would mean that one could accept the first implication that robots or “mechanized humans” possess “H”, although it would also necessarily imply that any elements of “H” aren’t actually unique or exclusive to humans as they were initially assumed to be.  One could certainly accept this first implication by noting that the patterns of information (or patterns of something if we don’t want to call it information per se) that comprise the individual were conserved throughout the neuronal (or body) replacement in these thought experiments, and thus the essence or identity of the individual (whether “human” or “robot”) was preserved as well.
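The book analogy can be illustrated directly: the same information can be carried by several different encodings (“media”), and the content recovered from each is identical.  A minimal sketch (the message string is arbitrary):

```python
import base64
import hashlib

message = "The information in a book is not its paper."

# Three different "media"/encodings of the same information:
utf8_bytes = message.encode("utf-8")
hex_form = utf8_bytes.hex()
b64_form = base64.b64encode(utf8_bytes).decode("ascii")

# Recover the message from each representation.
from_hex = bytes.fromhex(hex_form).decode("utf-8")
from_b64 = base64.b64decode(b64_form).decode("utf-8")

assert message == from_hex == from_b64
# The content's fingerprint is the same regardless of the carrier:
assert (hashlib.sha256(from_hex.encode()).hexdigest()
        == hashlib.sha256(from_b64.encode()).hexdigest())
```

The point carried over from the text is that the identity of the information survives any change of substrate, as long as the encoding is faithfully decodable.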

Pragmatic Considerations & Final Thoughts

I completely acknowledge that in order for this hypothetical neuronal replacement to truly reproduce normal neuronal function (even with just one neuron), above and beyond the potential necessity of both a specific type of hardware and a specific configuration (as mentioned earlier), the non-biologically-based version would presumably also have to replicate the neuronal plasticity that the brain normally possesses.  In terms of brain plasticity, there are basically four known factors involved in neuronal change, sometimes referred to as the four R’s: regeneration, reconnection, re-weighting, and rewiring.  So clearly, any synthetic neuronal version would likely involve some kind of malleable processing in order to accomplish at least some of these tasks (if not all of them to some degree), as well as possibly some nano-self-assembly processes if actual physical rewiring were needed.  The details of what this would require and how it would be accomplished will become better known over time as we learn more about the possible neuronal dynamic mechanisms involved (e.g. neural Darwinism or other means of neuronal differential reproduction, connectionism, Hebbian learning, DNA instruction, etc.).
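Of the four R’s, “re-weighting” has a classic textbook sketch in the Hebbian learning rule mentioned above, often summarized as “neurons that fire together wire together”: each weight changes in proportion to the product of its input and the resulting output.  The numbers below are arbitrary toy values, not a model of any real neuron:

```python
# Minimal Hebbian "re-weighting" sketch: delta_w_i = eta * x_i * y.
# A real synthetic neuron would need far richer dynamics, plus
# mechanisms for the other three R's.
def hebbian_step(weights, inputs, learning_rate=0.1):
    y = sum(w * x for w, x in zip(weights, inputs))  # post-synaptic activity
    return [w + learning_rate * x * y for w, x in zip(weights, inputs)]

weights = [0.5, 0.1]
pattern = [1.0, 1.0]  # two co-active inputs
for _ in range(5):
    weights = hebbian_step(weights, pattern)

# Repeated co-activation strengthens both synapses.
assert weights[0] > 0.5 and weights[1] > 0.1
```

This captures only the qualitative idea (correlated activity strengthens a connection); stabilizing variants such as Oja’s rule add normalization so the weights don’t grow without bound.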

I think that the most important thing to gain from these thought experiments is a realization of the inability, or at least the severe difficulty, of taking the idea of souls or “H” seriously, given the incompatibility between the traditional conception of a concrete soul or other “H” and the well-established fluid, continuous nature of the material substrates that they are purportedly correlated with.  That is, all the “things” in this world, including any forms of life (human or not), are constantly undergoing physical transformation and change, and they possess seemingly arbitrary boundaries that are ultimately defined by our own categorical intuitions and subjective perception of reality.  In terms of any person’s quest for “H”, if what one is really looking for is some form of constancy, essence, or identity of some kind in any of the things around us (let alone in human beings), it seems that it is the patterns of information (or perhaps the patterns of energy, to be more accurate), as well as the level of complexity or type of those patterns, that ultimately constitute that essence and identity.  Now, if it is reasonable to conclude that the patterns of information or energy that comprise any physical system aren’t equivalent to the physical constituent materials themselves, one could perhaps say that these patterns are a sort of “immaterial” attribute of a set of physical materials.  This seems to be as close to the concept of an immaterial “soul” as a physicalist or materialist could concede exists, since at the very least it involves a property of continuity and identity which somewhat transcends the physical materials themselves.

Non-separability vs. Reductionism: an Inference of Quantum Mechanics

Reductionism has been an extremely effective approach for learning a lot about the world around us.  The amount of information that we’ve obtained as a result of this method has allowed us to more effectively predict the course of nature, and thus it appears to have been invaluable in our quest to better understand the universe.  However, I believe that several limitations of reductionism have been revealed through quantum mechanics, demonstrating it to be inapplicable to the fundamental nature of the universe.  This inapplicability results from the “non-separability” property that appears to emerge from within quantum mechanical theory.  While this is just my inference from some of the quantum physical findings, the implications of this property of “non-separability” (i.e. “one-ness”) are vast, far-reaching, and incredibly significant in terms of the “lens” we use to look at the world around us.

I feel that I need to define what I mean by “reductionism”, as there are a few interpretations (some more ambiguous than others), as well as specific types of reductionism that people may refer to more than others.  Among these types are theoretical, methodological, and ontological reductionism.  I’m mostly concerned with ontological and methodological reductionism.  When I use the term “reductionism”, I am referring to the view that the nature of complex things can be reduced to the interactions of their parts.  An alternate definition that I’ll use in combination with the former is the view that the whole (a complex system) is simply the sum of its parts (some may see this as the opposite of “holism” or “synergy”, in which the whole is greater than the sum of its parts).  The use of the word “parts” is what I find most important in these definitions.  We can see an obvious problem if we attempt to unify the concept of “parts” (a required component of reductionism) with the concept of “no parts” (an implied component of non-separability).  My use of the word “non-separability” implies that any parts that may appear to exist are illusory, because the concept of a “part” implies that it is separate from everything else (separate from “other parts”, etc.).  To simplify matters, one could equate the concept of non-separability that I’m referring to with the concept of “one-ness” (i.e. there are no “parts”; there is only the whole).

Reductionism is an idea that appears to have been around for centuries.  The idea has penetrated the realms of science and philosophy from the time of Thales of Miletus — who held the belief that the world was, or was composed of, water at a fundamental level — to the time of the scientific revolution and onward with the advent of Isaac Newton’s classical mechanics (or more appropriately, “Newtonian” mechanics) and the new yet equivalent views of Lagrangian and Hamiltonian mechanics that ensued.  I think it would be safe to say that most scientists from that point forward were convinced that anything could be broken down into fundamental parts, providing us with a foundation for 100% predictability (determinism).  After all, if a system seemed too complex to predict, breaking it down into simpler components seemed the easiest route to success.  Even though scientists couldn’t actually predict anything with 100% certainty, many believed it would be possible if the instrumentation used were sufficiently precise.  They thought that perhaps there were fundamental constituent building blocks that followed a deterministic model.  This view would change drastically in the early part of the 20th century, when the foundations of quantum mechanics were first established by Bohr, de Broglie, Heisenberg, Planck, Schrödinger, and many others.  New discoveries such as Heisenberg’s famous uncertainty principle, as well as the concepts of wave-particle duality, quantum randomness, superposition, wave-function collapse, entanglement, non-locality, and others, began to repudiate classical ideas of causality and determinism.  These discoveries also implied that certain classical concepts like “location”, “object”, and “separate” needed to be reconsidered.  This appeared to be the beginning of the end of reductionism in my view, or more specifically, it demonstrated the fundamental limitations of such an approach.
One thing I want to point out is that I completely acknowledge that reductionism was the approach that eventually led to these quantum discoveries.  After all, we wouldn’t have been able to discover the quantum realm at all had we not “scaled down” our research (i.e. investigated smaller and smaller physical regimes) using reductionism.  While it did in fact lead us to the underlying quantum realm, it is what we choose to do with this new information that will affect our overall view of the world around us.

One interesting thing about the concept of reductionism is its relationship to the scientific method.  To clarify, at the very least, we can say that in order to have separate objects and observers (as the scientific method requires) we must dismiss the idea of non-separability, thus employing the idea of separateness (required for reductionism).  The effectiveness of the scientific method presumes that the observer is not affecting the observed (i.e. that they are isolated from each other).  Scientists are well aware of the “observer effect”, whereby the minimization of this effect defines a better-controlled experiment and the elimination of the effect altogether defines a perfectly controlled experiment.  Experimentation within quantum mechanics is no exception to this effect.  In fact, quantum mechanical measurements appear to have what I like to call a “100% observer effect”.  Allow me to clarify.  The superposition principle in quantum mechanics states that a physical system (e.g. an electron) exists in a combination of all its possible states simultaneously, and only upon measuring that system will it give a result corresponding to one of those possible states or configurations.  This unique effect produced by measurement is referred to as the “collapse of the wave function”, at least according to the Copenhagen interpretation of quantum mechanics.  I think that this “100% observer effect” is worth noting when considering the scientific method and the reductionist elements within it.  There appears to be an inescapable interconnectedness between the observer and the observed which forces us to recognize the limitations of the scientific method within this realm.  At the very least, within this quantum realm, our classical idea of a “controlled” experiment no longer exists.
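The superposition-and-collapse behavior described above (formally, the Born rule) can be simulated in a few lines.  To be clear, this is only a classical simulation of the measurement statistics, not of the quantum state itself, and the amplitudes chosen are an arbitrary equal superposition:

```python
import random
from collections import Counter

# Born-rule sketch: a two-state superposition a|0> + b|1> yields outcome k
# with probability |amplitude_k|^2 upon measurement; before measurement,
# no single outcome is "the" state.
def measure(amplitudes, rng):
    probs = [abs(a) ** 2 for a in amplitudes]
    return rng.choices(range(len(amplitudes)), weights=probs, k=1)[0]

rng = random.Random(42)
# Equal superposition: amplitudes 1/sqrt(2) each, so 50/50 outcomes.
amps = [2 ** -0.5, 2 ** -0.5]
counts = Counter(measure(amps, rng) for _ in range(10_000))

# Each measurement "collapses" to a single definite outcome...
assert set(counts) == {0, 1}
# ...with relative frequencies near the Born probabilities.
assert abs(counts[0] / 10_000 - 0.5) < 0.05
```

Note that each individual call to `measure` returns one definite outcome, mirroring the “100% observer effect” described in the text: the act of measurement always selects a single result from the superposition.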

Another interesting discovery within quantum mechanics was Heisenberg’s uncertainty principle, which states that we can’t know two complementary properties of a “particle” simultaneously beyond some minimum degree of uncertainty.  One example of a pair of complementary properties is position and momentum: when we measure one property to arbitrary accuracy, we lose accuracy in our ability to measure the other, and vice versa.  Here again, within the quantum realm, we see an interconnectedness, although this time it is between physical properties themselves.  Just as we saw in the case of the observer and the observed, what we once thought of as being completely separate and independent of one another turns out to be unavoidably interconnected.
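For reference, the modern quantitative statement of the principle (the Kennard bound) relates the standard deviations of position and momentum:

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```

Tightening the spread in position ($\sigma_x$) forces the spread in momentum ($\sigma_p$) to grow, and vice versa, which is exactly the trade-off described above.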

The last concept I want to discuss within quantum theory is that of quantum entanglement.  If two particles come into physical contact with each other, interact, or are created in a very specific way, they can become entangled, or linked together, in very specific ways (including in properties such as the aforementioned position and momentum).  In this case, the linkage between the two particles is a shared quantum state, where one particle can’t be fully described without considering the state of the other.  This shared quantum state “collapses” as soon as one of the particles’ states is measured.  For example, if we have two electrons that become entangled and then we measure one of them to be a spin-up electron, by default the other will be a spin-down electron.  Prior to measurement, there is an equal probability that the first particle will be measured spin up or spin down.  Again, according to the Copenhagen interpretation of quantum mechanics, the quantum state that this electron assumes upon measurement is completely random.  However, once the first electron’s spin has been measured, there is a 100% chance that the other electron in the entangled pair will have the anti-correlated spin, even though in principle it should have been a 50% chance, as was the case for the first electron.  At the time this was discovered, many physicists (most notably Einstein) were convinced that there must be some form of hidden-variables theory which could determine which state the electron would assume upon measurement.  However, Bell’s theorem showed that no theory of local hidden variables could reproduce all of the predictions of quantum mechanics.  This left only one alternative for an underlying determinism (if there was one): a theory of non-local hidden variables.  Non-locality, however, is so counter-intuitive due to its acausal nature that, even if it were somehow deterministic, it would not make sense from a reductionist (i.e. locally interacting parts) perspective.
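The perfect anti-correlation described for the entangled pair can be sketched as follows.  One caveat worth stating loudly: a toy simulation like this reproduces the anti-correlation for a single, fixed measurement axis, but it is effectively a local model, so it cannot reproduce the full multi-axis statistics that Bell’s theorem is actually about.

```python
import random

# Toy singlet-state sketch: the pair is described by one shared state, not two
# independent ones. The first measurement outcome is random (50/50); the
# partner's outcome is then fixed to be anti-correlated.
def measure_entangled_pair(rng):
    first = rng.choice(["up", "down"])           # genuinely random outcome
    second = "down" if first == "up" else "up"   # 100% anti-correlated
    return first, second

rng = random.Random(7)
results = [measure_entangled_pair(rng) for _ in range(1000)]

# Individually, each electron's outcome looks like a fair coin...
ups = sum(1 for a, _ in results if a == "up")
assert 400 < ups < 600
# ...yet the pair is perfectly anti-correlated on every trial.
assert all(a != b for a, b in results)
```

The contrast between the two assertions is the point: no individual outcome is predictable, yet the joint outcome is perfectly constrained, which is the interconnectedness the paragraph describes.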
Thus, the reductionist approach, i.e. the assumption that complex systems can be explained by breaking them down into smaller, causally connected (locally interacting) parts, is inadequate for explaining the phenomena pervading the quantum realm.
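To make the anti-correlation and Bell’s theorem a bit more tangible, here is a small Python sketch (my own illustration, not from the text; the correlation formula E = −cos(θₐ − θᵦ) is the standard quantum-mechanical prediction for singlet-state spin measurements, and any local hidden-variables theory must satisfy |S| ≤ 2 in the CHSH form of Bell’s inequality):

```python
import math
import random

def measure_entangled_pair():
    # Singlet-state pair measured along the same axis: each outcome is
    # individually random, but the pair is always anti-correlated.
    first = random.choice(["up", "down"])
    second = "down" if first == "up" else "up"
    return first, second

# Marginal statistics: the first electron alone looks like a fair coin...
trials = [measure_entangled_pair() for _ in range(10_000)]
up_fraction = sum(1 for a, _ in trials if a == "up") / len(trials)
# ...yet the second electron's spin is opposite in 100% of the trials.
anti_correlated = all(a != b for a, b in trials)

def E(angle_a, angle_b):
    # Quantum-mechanical correlation between spin measurements taken
    # along detector angles angle_a and angle_b (singlet state).
    return -math.cos(angle_a - angle_b)

# CHSH combination of four angle settings: local hidden variables
# require |S| <= 2, but quantum mechanics predicts 2*sqrt(2) ~ 2.83.
a, a2, b, b2 = 0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
```

The simulation captures the surprising combination described above: a 50/50 coin flip for each electron on its own, yet perfect anti-correlation between them; and the CHSH value of 2√2 is exactly the excess over the local-hidden-variables bound of 2 that Bell’s theorem exposes.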

The unique yet explicit case of quantum entanglement not only suggests that the idea of separate “parts” is flawed, but also that the interconnectedness exhibited is non-local, which I believe to be extremely significant not only from a scientific and philosophical viewpoint, but from a spiritual perspective as well.  If “something” is non-locally interconnected with “something” else, does it not imply that the two are actually “one”?  Perhaps this is an example of merely seeing “two different faces of the same coin”.  While they may appear to be separated in 3D space, it does not mean that they have to be separated in a dimension outside our physical reality or experience.  To put it another way, non-locality suggests the possibility of extra spatial dimensions if we are to negate the alternative, namely that the speed of light is being violated within 3D space by faster-than-light information transfer (e.g. one electron “communicating” to the other instantaneously so that the appropriate anti-correlation exists between their spin states); it’s worth noting that, per the no-communication theorem, entanglement cannot actually be used to send usable information faster than light, which is part of what makes these correlations so puzzling.  If two particles that seem to be “separate” in our physical reality are in fact “one”, then we could surmise that other sets of particles, or even all particles for that matter (including those we do not currently define to be “entangled”), are entangled in some unknown way.  There may be an infinite number of extra spatial dimensions to account for all unknown types of “entanglement”.  We could hypothesize in this case that every seemingly separate particle is actually interconnected as “one” entity.  While we may never know whether or not this is the case, it’s an interesting thought.  It would imply that the degrees of separation we observe are illusory, simply a result of the connection being hidden.  It would be analogous to thinking that the continents are separate pieces of land when, in fact, deep in the ocean, we can clearly see the continuity between them.

If we consider the Big Bang theory, the “singularity” that supposedly existed prior to expansion would be a more intuitive way of looking at this homogeneous entity I describe as “one”.  Perhaps after expansion it began to take on an appearance of reducible separateness (within three dimensions of space, anyway), and this apparent separateness, which I argue is illusory, seems to be demonstrated in quantum phenomena, despite the idea being entirely counter-intuitive to our everyday experience within the 3D-limited scope of our physical reality.  I feel that it’s important to recognize this illusion of separateness from a philosophical perspective, and to use this view to help realize that we are all “one” with each other and the entire universe.  While I have no prescription for how to use this information (as I believe it will require self-discovery to have any real meaning to you), it’s just the way I feel on the matter; take it or leave it.  Life goes on.