“Black Mirror” Reflections: U.S.S. Callister (S4, E1)

This Black Mirror reflection will explore season 4, episode 1, titled “U.S.S. Callister”.  You can click here to read my last Black Mirror reflection (season 3, episode 2: “Playtest”).  In U.S.S. Callister, we’re pulled into the life of Robert Daly (Jesse Plemons), the Chief Technical Officer at Callister Inc., a game development company that has produced a multiplayer simulated-reality game called Infinity.  Within this game, users control a starship (an obvious homage to Star Trek), though Daly, the brilliant programmer behind this revolutionary game, keeps his own offline version, which he has modded to look like his favorite TV show, Space Fleet, with himself as Captain of the ship.

We quickly learn that most of the employees at Callister Inc. don’t treat Daly very kindly, including his company’s co-founder James Walton (Jimmi Simpson).  Daly appears to be an overly passive, shy introvert.  During one of Daly’s offline gaming sessions at home, we come to find out that the Space Fleet characters aboard the starship look just like his fellow employees, and as Captain of his simulated crew, he indulges in berating them all.  Because Daly is Captain and effectively controls the game, he is rendered nearly omnipotent, able to force his crew to perpetually bend to his will, lest they suffer immensely.

It turns out that these characters in the game are actually conscious: Daly surreptitiously acquired his co-workers’ DNA and somehow replicated their consciousness and memories, uploading them into the game (any biologists or neurologists out there, let’s just forget for the moment that DNA isn’t complex enough to store this kind of neurological information).  At some point, a wrench is thrown into Daly’s deviant exploits when a new co-worker, programmer Nanette Cole (Cristin Milioti), is added to his game and manages to turn the crew against him.  Once the crew finds a backdoor means of communicating with the world outside the game, Daly’s world is turned upside down in a relatively satisfying ending chock-full of poetic justice, as his digitized, enslaved crew members manage to escape while he becomes trapped inside his own game as it’s being shut down and destroyed.

This episode is rife with moral issues that build on one another, all deeply coupled with the ability to engineer a simulated reality (perceptual augmentation).  Virtual worlds carry a level of freedom that just isn’t possible in the real world: one can behave in countless ways with little or no consequence, whether acting with beneficence or utter malice.  One can violate physical laws as well as prescriptive laws, opening up a new world of possibilities that are free to evolve without the feedback of social norms, legal statutes, and law enforcement.

People have long known about various ways of escaping social and legal norms through fiction and game playing, where one can imagine being somebody else, living in a different time and place, and behaving in ways one would never even consider in the real world.

But what happens when the game is over and the player returns to the real world, with all its consequences, social norms, and expectations?  Doesn’t it seem likely that at least some of the behaviors cultivated in the virtual world will begin to rear their ugly heads in the real world?  One can plausibly argue that violent game playing is simply a form of psychological sublimation, letting us release many of our irrational and violent impulses in a way that’s more or less socially acceptable.  But there’s also bound to be a difference between playing a very abstract game involving violence or murder, such as the classic board game Clue, and playing a virtual reality game where your perceptions are as realistic as can be and you choose to murder some other character in cold blood.

Clearly in this episode, Daly was using the simulated reality as a means of releasing his anger and frustration by taking it out on reproductions of his own co-workers.  And while a simulated experiential alternative could be healthy in some cases, given its therapeutic benefits and the better control it affords over the consequences of the simulated interaction, we can see that Daly took advantage of his creative freedom and wielded it to fashion a parallel universe where he was free to become a psychopath.

It would already be troubling enough if Daly behaved as he did toward virtual characters that were not actually conscious, since it would still show a man pretending that they are conscious, a man who wants them to be conscious.  But the fact that Daly knows they are conscious makes him that much more sadistic.  He is effectively in the position of a god, given his powers over the simulated world and every conscious being trapped within it, and he has used these powers to generate a living hell (thereby also illustrating the technology’s potential to, perhaps one day, generate a living heaven).  But unlike the hell we hear about in myths and religious fables, this is an actual hell, where a person can suffer the worst fates imaginable (limited only by the programmer’s imagination), such as experiencing the feeling of suffocation, yet be unable to die and thus have no end in sight.  And since time is relative, in this case based on the ratio of real time to simulated time (or the ratio between one simulated time and another), a character consciously suffering in the game could feel as if they’ve been suffering for months, when the god-like player has only felt several seconds pass.  We’ve never had to morally evaluate these kinds of situations before, and we’re getting to a point where it’ll be imperative for us to do so.
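
To make that time-ratio point concrete, here’s a minimal, purely illustrative sketch in Python.  The million-fold speedup is an assumed figure chosen only for illustration (the episode never specifies one); the point is just that subjective time inside a simulation scales linearly with the ratio of simulated time to real time.

```python
# Hypothetical sketch of simulated-time dilation.
# `speedup` is an assumed ratio: simulated seconds elapsing per real second.

def perceived_seconds(real_seconds: float, speedup: float) -> float:
    """Subjective seconds experienced in-game while `real_seconds` pass outside."""
    return real_seconds * speedup

# With an assumed 1,000,000x speedup, ten real seconds outside the game
# amount to months of subjective time inside it:
subjective = perceived_seconds(10, 1_000_000)  # 10,000,000 simulated seconds
print(subjective / 86_400)                     # ~115.7 simulated days
```

On those assumed numbers, a player who feels ten seconds pass could leave a conscious character suffering for nearly four months, which is precisely what makes the moral stakes here so extreme.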

Someday, it’s very likely that we’ll be able to create an artificial form of intelligence that is conscious, and it’s up to us to initiate and maintain a public conversation that addresses how our ethical and moral systems will need to accommodate new forms of conscious moral agents.

We’ll also need to figure out how to incorporate a potentially superhuman level of consciousness into our moral frameworks, since these frameworks often have an internal hierarchy that is largely based on the degree or level of consciousness that we ascribe to other individuals and to other animals.  If we give moral preference to a dog over a worm, and preference to a human over a dog (for example), then where would a being with superhuman consciousness fit within that framework?  Most people certainly wouldn’t want these beings to be treated as gods, but we shouldn’t want to treat them like slaves either.  If nothing else, they’ll need to be treated like people.

Technologies will almost always have that dual potential: they can be used to achieve truly admirable goals and enhance human well-being, or to dominate others and exacerbate human suffering.  So we need to ask ourselves, given a future world where we have the capacity to make any simulated reality we desire: what kind of world do we want?  What kind of world should we want?  And what kind of person do you want to be in that world?  Answering these questions should say a lot about our moral qualities, serving as a kind of window into the soul of each and every one of us.

“Black Mirror” Reflections: Playtest (S3, E2)

Black Mirror, the British science-fiction anthology series created by Charlie Brooker, does a pretty good job experimenting with a number of illuminating concepts that highlight how modern culture is becoming increasingly shaped by (and vulnerable to) various technological advances and changes.  I’ve been interested in many of these concepts for years, not least because of the important philosophical implications (including a number of moral issues) that they point to.  So I’ve decided to start a blog post series, which I’m calling “Black Mirror” Reflections, to highlight some of my own takeaways from some of my favorite episodes.

I’d like to begin with season 3, episode 2, “Playtest”.  I’m just going to give a brief summary here, as you can read the full episode summary in the link provided above.  In this episode, Cooper (Wyatt Russell) decides to travel around the world, presumably as a means of dealing with the recent death of his father, who died from early-onset Alzheimer’s.  After finding his way to London, his last destination before planning to return home to America, he runs into a problem with his credit card and bank account, where he can’t access the money he needs to buy his plane ticket.

While he waits for his bank to fix the problem with his account, Cooper decides to earn some cash using an “Oddjobs” app, which provides him with a number of short-term job listings in the area, eventually leading him to “SaitoGemu,” a video game company looking for game testers to try a new kind of personalized horror game involving a (seemingly) minimally invasive brain implant procedure.  He briefly hesitates but, desperate for money and reasonably confident in its safety, eventually consents to the procedure, whereby the implant wires itself into the gamer’s brain, resulting in a form of perceptual augmentation and a semi-illusory reality.

The implant is designed to (among other things) scan your memories and learn what your worst fears are, in order to integrate these into the augmented perceptions, producing a truly individualized and maximally frightening horror game experience.  Needless to say, at some point Cooper begins to lose track of what’s real and what’s illusory, and due to a malfunction, he’s unable to exit the game; he ends up going mad and eventually dying as the implant unpredictably overtakes (and effectively fries) his brain.

There are a lot of interesting conceptual threads in this story, and the idea of perceptual augmentation is a particularly interesting theme that finds its way into a number of other Black Mirror episodes.  While holographic and VR-headset gaming technologies can produce their own form of augmented reality, perceptual augmentation carried out on a neurological level isn’t even in the same ballpark, having qualitative features that are far more advanced and more akin to those found in the Wachowskis’ The Matrix trilogy or Paul Verhoeven’s Total Recall.  Once the user is unable to distinguish between the virtual world and the external reality, with the simulator having effectively passed a kind of graphical version of the Turing Test, one’s previous notion of reality is effectively shattered.  To me, this technological idea is one of the most awe-inspiring (yet sobering) ideas within the Black Mirror series.

The inability to discriminate between the two worlds means that both worlds are, for all practical purposes, equally “real” to the person experiencing them.  And the fact that one can’t tell the difference between such worlds ought to give a person pause to re-evaluate what it even means for something to be real.  If you doubt this, then just imagine if you were to find out one day that your entire life has really been the result of a computer simulation, created and fabricated by some superior intelligence living in a layer of reality above your own (we might even think of this being as a “god”).  Would this realization suddenly make your remembered experiences imaginary and meaningless?  Or would your experiences remain just as “real” as they’ve always been, even if they now have to be reinterpreted within a context that grounds them in another layer of reality?

To answer this question honestly, we ought to first realize that we’re likely fully confident that what we’re experiencing right now is reality, is real, is authentic, and is meaningful (just as Cooper was at some point in his gaming “adventure”).  And this seems to be at least a partial basis for how we define what is real, and how we differentiate the most vivid and qualitatively rich experiences from those we might call imaginary, illusory, or superficial.  If what we call reality is really just a set of perceptions, encompassing every conscious experience from the merely quotidian to those we deem to be extraordinary, would we really be justified in dismissing all of these experiences and their value to us if we were to discover that there’s a higher layer of reality, residing above the only reality we’ve ever known?

For millennia, humans have pondered whether or not the world is “really” the way we see it, and perhaps the most rigorous examination of this metaphysical question was undertaken by the German philosopher Immanuel Kant, with his dichotomy of the phenomenon and the noumenon (i.e., the way we see the world or something in the world versus the way the world or thing “really is” in itself, independent of our perception).  Even if we assume the noumenon exists, we can never know anything about the thing in itself, since we are fundamentally limited by our own perceptual categories and the way we subjectively interpret the world.  Similarly, we can never know for certain whether or not we’re in a simulation.

Looking at the situation through this lens, we can then liken the question of how to (re)define reality within the context of a simulation with the question of how to (re)define reality within the context of a world as it really is in itself, independent of our perception.  Both the possibility of our being in a simulation and the possibility of our perceptions stemming from an unknowable noumenal world could be true (and would likely be unfalsifiable), and yet we still manage to use and maintain a relatively robust conception and understanding of reality.  This leads me to conclude that reality is ultimately defined by pragmatic considerations (mostly those pertaining to our ability to make successful predictions and achieve our goals), and thus the possibility of our one day learning about a new, higher level of reality should merely add to our existing conception of reality, rather than completely negating it, even if it turns out to be incomplete.

Another interesting concept in this episode involves the basic design of the individualized horror game itself, where a computer can read your thoughts and memories, and then surmise what your worst fears are.  This is a technological feat that is possible in principle, and one with far-reaching implications that concern our privacy, safety, and autonomy.  Just imagine if such a power were unleashed by corporations, or the governments owned by those corporations, to acquire whatever information they wanted from your mind, to find out how to most easily manipulate you in terms of what you buy, who you vote for, what you generally care about, or what you do any and every day of your life.  The Orwellian possibilities are endless.

Marketing firms (both corporatocratic and political) have already been making use of discoveries in psychology and neuroscience, finding new ways to more thoroughly exploit our cognitive biases to get us to believe and desire whatever will benefit them most.  Add to this the future ability to read our thoughts and manipulate our perceptions (even if this is first implemented as a seemingly innocuous video game), and we will have established a new means of mass surveillance, where we can each become a potential “camera” watching one another (a theme also highlighted in BM, S4E3: Crocodile), while simultaneously exposing our most private thoughts and transmitting them to some centralized database.  Once we reach these technological heights (it’s not a matter of if but when), depending on how they’re employed, we may find ourselves no longer having the freedom to lie or to keep a secret, nor the freedom of having any mental privacy whatsoever.

To be fair, we should realize that there are likely to be undeniable benefits in our acquiring these capacities (perceptual augmentation and mind-reading), such as making virtual paradises with minimal resources; finding new ways of treating phobias, PTSD, and other pathologies; giving us the power of telepathy and superhuman intelligence by connecting our brains to the cloud; and giving us the ability to design error-proof lie detectors and other vast enhancements in maximizing personal security and reducing crime.  But there are also likely to be enormous losses in personal autonomy, as our available “choices” are increasingly produced and constrained by automated algorithms; there are likely to be losses in privacy, and increasing difficulties in ascertaining what is true and what isn’t, since our minds will be vulnerable to artificially generated perceptions created by entities and institutions that want to deceive us.

Although we’ll constantly need to be on the lookout for these kinds of potential dangers as they arise, in the end, we may find ourselves inadvertently allowing these technologies to creep into our lives, one consumer product at a time.

Technology, Mass-Culture, and the Prospects of Human Liberation

Cultural evolution is arguably just as fascinating as biological evolution (if not more so), with new ideas and behaviors stemming from the same kinds of natural selective pressures that lead to new species along with their novel morphologies and capacities.  And as with biological evolution where it, in a sense, takes off on its own unbeknownst to the new organisms it produces and independent of the intentions they may have (with our species being the notable exception given our awareness of evolutionary history and our ever-growing control over genetics), so too cultural evolution takes off on its own, where cultural changes are made manifest through a number of causal influences that we’re largely unaware of, despite our having some conscious influence over this vastly transformative process.

Alongside these cultural changes, human civilizations have striven to find new means of manipulating nature and to better predict the causal structure that makes up our reality.  One unfortunate consequence of this is that, as history has shown us, within any particular culture’s time and place, people have a decidedly biased overconfidence in the perceived level of truth or justification for the status quo and their present world view (both on an individual and collective level).  Undoubtedly, the “group-think” or “herd mentality” that precipitates from our simply having social groups often reinforces this overconfidence, and this is so in spite of the fact that what actually influences a mass of people to believe certain things or to behave as they do is highly contingent, unstable, and amenable to irrational forms of persuasion, including emotive, sensationalist propaganda that preys on our cognitive biases.

While we as a society have an unprecedented amount of control over the world around us, this type of control is perhaps best described as a system of bureaucratic organization and automated information processing that affords less and less individual autonomy, liberty, and basic freedom as it further expands its reach.  How much control do we as individuals really have in terms of the information we have access to, given the implied picture of reality that accompanies this information in the way it’s presented to us?  How much control do we have in terms of the number of life trajectories and occupations made available to us, or the educational and socioeconomic resources we have access to, given the particular family, culture, and geographical location we’re born and raised in?

As more layers of control have been added to our way of life, and as certain criteria for organizational efficiency are continually implemented, our lives have become externally defined by increasing layers of abstraction, and our modes of existence are further separated, cognitively and emotionally, from an aesthetically and otherwise psychologically valuable sense of meaning and purpose.

While the Enlightenment slowly dragged our species, kicking and screaming, out of the theocratic, anti-intellectual epistemologies of the Medieval period, the same forces that unearthed a long-overdue appreciation for (and development of) rationality and technological progress unknowingly engendered a vulnerability to our misusing this newfound power.  There was an overcompensation of rationality when it was deployed to (justifiably) respond to the authoritarian dogmatism of Christianity and to the demonstrably unreliable nature of superstitious beliefs and of many of our intuitions.

This overcompensatory effect was in many ways accounted for, or anticipated, within the dialectical theory of historical development as delineated by the German philosopher G.W.F. Hegel, and within some relevant reformulations of this dialectical process as theorized by Karl Marx (among others).  Throughout history, we’ve had an endless clash of ideas whereby the prevailing worldviews are shown to be inadequate in some way, failing to account for some notable aspect of our perceived reality, or shown to be insufficient for meeting our basic psychological or socioeconomic needs.  With respect to any problem we’ve encountered, we search for a solution (or wait for one to present itself to us), and then we become overconfident in the efficacy of the solution.  Eventually we end up overgeneralizing its applicability, and then the pendulum swings too far the other way, thereby creating new problems in need of a solution, with this process seemingly repeating itself ad infinitum.

Despite the various woes of modernity, as explicated by the modern existentialist movement, it does seem that history, from a long-term perspective at least, has been moving in the right direction, not only with respect to our heightened capacity for improving our standard of living, but also in terms of the evolution of our social contracts and our conceptions of basic and universal human rights.  And we should be able to plausibly reconcile this generally positive historical trend with the Hegelian view of historical development, and the conflicts that arise in human history, by noting that we often seem to take one step backward followed by two steps forward in terms of our moral and epistemological progress.

Regardless of the progress we’ve made, we seem to be at a crucial point in our history where the same freedom-limiting authoritarian reach that plagued humanity (especially during the Middle Ages) has undergone a kind of morphogenesis, having been reinstantiated albeit in a different form.  The elements of authoritarianism have become built into the very structure of mass-culture, with an anti-individualistic corporatocracy largely mediating the flow of information throughout this mass-culture, and also mediating its evolution over time as it becomes more globalized, interconnected, and cybernetically integrated into our day-to-day lives.

Coming back to the kinds of parallels in biology that I opened up with, we can see human autonomy and our culture (ideas and behaviors) as having evolved in ways that are strikingly similar to the biological jump that life made long ago, where single-celled organisms eventually joined forces with one another to become multi-cellular.  This biological jump is analogous to the jump we made during the early onset of civilization, where we employed an increasingly complex distribution of labor and occupational specialization, allowing us to survive many more environmental hurdles than ever before.  Once civilization began, the spread of culture became much more effective for transmitting ideas both laterally within a culture and longitudinally from generation to generation, with this process heavily enhanced by our having adopted various forms of written language, allowing us to store and transmit information in much more robust ways, similar to genetic information storage and transfer via DNA, RNA, and proteins.

Although the single-celled bacterium or amoeba (for example) may be thought of as having more “autonomy” than a cell that is forcefully interconnected within a multi-cellular organism, we can see how the range of capacities available to single cells was far more limited before making the symbiotic jump, just as humans living before the onset of civilization had more “freedom” (at least of a certain type) and yet the number of possible life trajectories and experiences was minuscule when compared to a human living in a post-cultural world.  But once multi-cellular organisms began to form a nervous system and eventually a brain, the entire collection of cells making up an organism became ultimately subservient to a centralized form of executive power — just as humans have become subservient to the executive authority of the state or government (along with various social pressures of conformity).

And just as the fates of each cell in a multi-cellular organism became predetermined and predictable by its particular set of available resources and the specific information it received from neighboring cells, so too are our own lives becoming increasingly predetermined and predictable by the socioeconomic resources made available to us and the information we’re given, which constitutes our mass-culture.  We are slowly morphing from individual brains into something akin to individual neurons within a global brain of mass-consciousness and mass-culture, having our critical thinking skills and creative aspirations exchanged for rehearsed responses and docile expectations that maintain the status quo and continually transfer our autonomy to an oligarchic power structure.

We might wonder if this shift has been inevitable, possibly being yet another example of a “fractal pattern” recapitulated in sociological form out of the very same free-floating rationales that biological evolution has been making use of for eons.  In any case, it’s critically important that we become aware of this change, so we can actively work to achieve and maintain the liberties and level of individual autonomy that we so highly cherish.  We ought to be thinking about the ways we can remain cognizant of, and critical of, our culture and its products; how we can reconcile or transform technological rationality and progress with a future world comprised of truly liberated individuals; and how to transform our corporatocratic capitalist society into one that is based on a mixed economy with a social safety net that even the wealthiest citizens would be content with living under, so as to maximize the actual creative freedom people have once their basic existential needs have been met.

Will unchecked capitalism, social media, mass media, and the false needs and epistemological bubbles they’re forming lead to our undoing and destruction?  Or will we find a way to rise above this technologically-induced setback and take advantage of the opportunities it has afforded us, to make the world and our technology truly compatible with our human psychology?  Whatever the future holds for us, it is undoubtedly going to depend on how many of us begin to think critically about how we can seriously restructure our educational system and how we disseminate information, how we can re-prioritize and better reflect on what our personal goals ought to be, and also how we ought to identify ourselves as free and unique individuals.

Irrational Man: An Analysis (Part 4, Chapter 11: The Place of the Furies)

In my last post in this series on William Barrett’s Irrational Man, I examined some of the work of existential philosopher Jean-Paul Sartre, which concluded part 3 of Barrett’s book.  The final chapter, Ch. 11: The Place of the Furies, will be briefly examined here, and this will conclude my eleven-part post series.  I’ve enjoyed this very much, but it’s time to move on to new areas of interest, so let’s begin.

1. The Crystal Palace Unmanned

“The fact is that a good dose of intellectualism-genuine intellectualism-would be a very helpful thing in American life.  But the essence of the existential protest is that rationalism can pervade a whole civilization, to the point where the individuals in that civilization do less and less thinking, and perhaps wind up doing none at all.  It can bring this about by dictating the fundamental ways and routines by which life itself moves.  Technology is one material incarnation of rationalism, since it derives from science; bureaucracy is another, since it aims at the rational control and ordering of social life; and the two-technology and bureaucracy-have come more and more to rule our lives.”

Regarding the importance of and need for more intellectualism in our society, I think this can be better described as the need for more critical thinking skills: the ability to discern fact from fiction, to recognize one’s own cognitive biases, and to respect the adage that extraordinary claims require extraordinary evidence.  At the same time, in order to appreciate the existentialist’s concerns, we ought to recognize that there are aspects of human psychology, including certain psychological needs, that are inherently irrational, with respect to how we structure our lives, how we express our emotions and creativity, how we maintain a balance in our psyche, and so on.  But since technology is not only a material incarnation of rationalism but also an outlet for our creativity, there has to be a compromise here: we shouldn’t want to abandon technology, but simply to keep it in check so that it doesn’t impede our psychological health and our ability to live a fulfilling life.

“But it is not so much rationalism as abstractness that is the existentialists’ target; and the abstractness of life in this technological and bureaucratic age is now indeed something to reckon with.  The last gigantic step forward in the spread of technologism has been the development of mass art and mass media of communication: the machine no longer fabricates only material products; it also makes minds. (stereotypes, etc.).”

Sure enough, we’re living in a world where many of our occupations are but one of many layers of abstraction constituting our modern “machine of civilization”.  And the military-industrial complex that has taken over the modern world has certainly gone beyond the mass production of physical stuff to be consumed by the populace, and now includes the mass production and viral dissemination of memes as well.  Ideas can spread like viruses, and in our current globally interconnected world (which Barrett hadn’t yet seen to the same degree when writing this book), the spread of these ideas is much faster and more influential on culture than ever before.  The degree of indoctrination, and the perpetuated cycles of co-dependence between citizens and the corporatocratic, sociopolitical forces ruling our lives from above, have resulted in making our way of life and our thinking much more collective and less personal than at any other time in human history.

“Kierkegaard condemned the abstractness of his time, calling it an Age of Reflection, but what he seems chiefly to have had in mind was the abstractness of the professorial intellectual, seeing not real life but the reflection of it in his own mind.”

Aside from the increasingly abstract nature of modern living, then, there’s also the abstractness that pervades our thinking about life, which detracts from our ability to actually experience life.  Kierkegaard had a legitimate point here: theory cannot be a complete substitute for practice; thought cannot be a complete substitute for action.  We certainly don’t want the reverse either, since action without sufficient forethought leads to foolishness and bad consequences.

I think the main point here is that we don’t want to miss out on living life by thinking about it too much.  While living a philosophically examined life is beneficial, it remains an open question exactly what balance is best for any particular individual to live the most fulfilling life.  In the meantime, we ought to simply recognize that the potential for an imbalance exists, and try our best to avoid it.

“To be rational is not the same as to be reasonable.  In my time I have heard the most hair-raising and crazy things from very rational men, advanced in a perfectly rational way; no insight or feelings had been used to check the reasoning at any point.”

If you ignore our biologically-grounded psychological traits, or ignore the fact that there’s a finite range of sociological conditions for achieving human psychological health and well-being, then you can’t develop any theory that’s supposed to apply to humans and expect it to be reasonable or tenable.  I would argue that ignoring this subjective part of ourselves when making theories that are supposed to guide our behavior in any way is irrational, at least within the greater context of aiming to have not only logically valid arguments but logically sound arguments as well.  But, if we’re going to exclude the necessity for logical soundness in our conception of rationality, then the point is well taken.  Rationality is a key asset in responsible decision making but it should be used in a way that relies on or seeks to rely on true premises, taking our psychology and sociology into account.

“The incident (making hydrogen bombs without questioning why) makes us suspect that, despite the increase in the rational ordering of life in modern times, men have not become the least bit more reasonable in the human sense of the word.  A perfect rationality might not even be incompatible with psychosis; it might, in fact, even lead to the latter.”

Again, I disagree with Barrett’s use or conception of rationality here, but semantics aside, his main point still stands.  I think that we’ve been primarily using our intelligence as a species to continue to amplify our own power and maximize our ability to manipulate the environment in any way we see fit.  But as our technological capacity advances, our cultural evolution is getting increasingly out of sync with our biological evolution, and we haven’t been anywhere close to sufficient in taking our psychological needs and limitations into account as we continue to engineer the world of tomorrow.

What we need is rationality that relies on true or at least probable premises, as this combination should actually lead a person to what is reasonable or likely to be reasonable.  I have no doubt that without making use of a virtue such as reasonableness, rationality can become dangerous and destructive, but this is the case with every tool we use, whether it’s rationality or any other capacity, mental or material; tools can always be harmful when misused.

“If, as the Existentialists hold, an authentic life is not handed to us on a platter but involves our own act of self-determination (self-finitization) within our time and place, then we have got to know and face up to that time, both in its (unique) threats and its promises.”

And rationality, on an individual level, should be used to help reach this goal of self-finitization, so that we can balance the benefits of the collective with the freedom of each individual that makes up that collective.

“I for one am personally convinced that man will not take his next great step forward until he has drained to the lees the bitter cup of his own powerlessness.”

And when a certain kind of change is perceived as something to be avoided, the only way to go through with it is by perceiving the status quo as something that needs to be avoided even more, so that a foray out of our comfort zone is perceived as an actual improvement to our way of life.  But as long as we see ourselves as already all-powerful masters over our domain, we won’t take any major leap in a new direction of self and collective improvement.  We need to come to terms with our current position in the modern world so that we can truly see what needs repair.  Whether or not we succumb to one or both of the two major existential threats facing our species, climate change and nuclear war, is contingent on whether or not we set our eyes on a new prize.

“Sartre recounts a conversation he had with an American while visiting in this country.  The American insisted that all international problems could be solved if men would just get together and be rational; Sartre disagreed and after a while discussion between them became impossible.  “I believe in the existence of evil,” says Sartre, “and he does not.” What the American has not yet become aware of is the shadow that surrounds all human Enlightenment.”

Once again, if rationality is accompanied by true premises that take our psychology into account, then international problems could be solved (or many of them at least), but Sartre is also right insofar as there are bad ideas that exist, and people who have cultivated their lives around them.  It’s not enough to have people thinking logically, nor is some kind of rational idealism up to the task of dealing with human emotion, cognitive biases, psychopathy, and other complications in human behavior.

The crux of the matter is that some ideas hurt us and other ideas help us, with our perception of these effects being the main arbiter driving our conception of what is good and evil; but there’s also a disparity between what people think is good or bad for them and what is actually good or bad for them.  I think that one could very plausibly argue that if people really knew what was good or bad for them, then applying rationality (with true or plausible premises) would likely work to solve a number of issues plaguing the world at large.

“…echoing the Enlightenment’s optimistic assumption that, since man is a rational animal, the only obstacles to his fulfillment must be objective and social ones.”

And here’s an assumption that’s certainly difficult to ground, since it’s based on a false premise, namely that humans are inherently rational.  We are unique in the animal kingdom in the sense that we are the only animal (or one of only a few) with the capacity for rational thought, foresight, and the complex level of organization made possible by their use.  I also think that the obstacles to fulfillment are objective, since they can be described as facts pertaining to our psychology, sociology, and biology, even if our psychology (for example) is instantiated in a subjective way.  In other words, our subjectivity and conscious experiences are grounded on, or describable in, objective terms relating to how our particular brains function, how humans as a social species interact with one another, and so on.  But fulfillment can never be reached, let alone maximized, without taking our psychological traits and idiosyncrasies into account, for these are the ultimate constraints on what can make us happy, satisfied, fulfilled, and so on.

“Behind the problem of politics, in the present age, lies the problem of man, and this is what makes all thinking about contemporary problems so thorny and difficult…anyone who wishes to meddle in politics today had better come to some prior conclusions as to what man is and what, in the end, human life is all about…The speeches of our politicians show no recognition of this; and yet in the hands of these men, on both sides of the Atlantic, lies the catastrophic power of atomic energy.”

And as of 2018, we’ve seen the Doomsday Clock reach two minutes to midnight, having inched one minute closer to our own destruction since 2017.  The dominance hierarchy being led and reinforced by the corporatocratic plutocracy is locked into a narrow form of tunnel vision, hell-bent on maintaining if not exacerbating the wealth and power disparities that plague our country and the world as a whole, despite the fact that this is not sustainable in the long run, nor best for the fulfillment of those promoting it.

We the people do share a common goal of trying to live a good, fulfilling life; to have our basic needs met; and to have no fewer rights than anybody else in society.  You’d hardly know that this common ground exists between us when looking at the state of our political sphere, which is likely as polarized now in the U.S. (if not more so) as it was even during the Civil War.  Clearly, we have a lot of work to do to reevaluate what our goals and priorities ought to be, and we need a realistic and informed view of what it means to be human before any of these goals can be realized.

“Existentialism is the counter-Enlightenment come at last to philosophic expression; and it demonstrates beyond anything else that the ideology of the Enlightenment is thin, abstract, and therefore dangerous.”

Yes, but the Enlightenment has also been one of the main driving forces leading us out of theocracy, out of scientific illiteracy, and towards an appreciation of reason and evidence (something the U.S., at least, is in short supply of these days), and thus it has been crucial in giving us the means for increasing our standard of living and solving many of our problems.  While the technological advancements derived from the Enlightenment have also been a large contributor to many of our problems, the current existential threats we face, including climate change and nuclear war, are more likely to be solved by new technologies, not by an abolition of technology nor of the Enlightenment brand of thinking that led to technological progress.  We simply need to better inform our technological goals with the actual needs and constraints of human beings, our psychology, and so on.

“The finitude of man, as established by Heidegger, is perhaps the death blow to the ideology of the Enlightenment, for to recognize this finitude is to acknowledge that man will always exist in untruth as well as truth.  Utopians who still look forward to a future when all shadows will be dispersed and mankind will dwell in a resplendent Crystal Palace will find this recognition disheartening.  But on second thought, it may not be such a bad thing to free ourselves once and for all from the worship of the idol of progress; for utopianism-whether the brand of Marx or Nietzsche-by locating the meaning of man in the future leaves human beings here and now, as well as all mankind up to this point, without their own meaning.  If man is to be given meaning, the Existentialists have shown us, it must be here and now; and to think this insight through is to recast the whole tradition of Western thought.”

And we ought to take our cue from Heidegger, at the very least, to admit that we are finite, our knowledge is limited, and it always will be.  We will not be able to solve every problem, and we would do ourselves and the world a lot better if we admitted our own limitations.  But to avoid being overly cynical and simply damning progress altogether, we need to look for new ways of solving our existential problems.  Part of the solution that I see for humanity moving forward is going to be a combination of advancements in a few different fields.

By making better use of genetic engineering, we’ll one day have the ability to change ourselves in remarkable ways in order to become better adapted to our current world.  We will be able to re-sync our biological evolution with our cultural evolution so we no longer feel uprooted, like a fish out of water.  Continuing research in neuroscience will allow us to learn more about how our brains function and how to optimize that functioning.  Finally, the strides we make in computing and artificial intelligence should allow us to vastly improve our simulation power and arm us with greater intelligence for solving all the problems that we face.

Overall, I don’t see progress as the enemy, but rather that we have an alignment problem between our current measures of progress, and what will actually lead to maximally fulfilling lives.

“The realization that all human truth must not only shine against an enveloping darkness, but that such truth is even shot through with its own darkness may be depressing, and not only to utopians…But it has the virtue of restoring to man his sense of the primal mystery surrounding all things, a sense of mystery from which the glittering world of his technology estranges him, but without which he is not truly human.”

And if we actually use technology to change who we are as human beings, by altering the course of natural selection and our ongoing evolution (which is bound to happen with or without our influence, for better or worse), then it’s difficult to say what the future really holds for us.  There are cultural and technological forces that are leading to transhumanism, and this may mean that one day “human beings” (or whatever name is chosen for the new species that replaces us) will be inherently rational, or whatever we’d like our future species to be.  We’ve stumbled upon the power to change our very nature, and so it’s far too naive, simplistic, unimaginative, and short-sighted to say that humans will “always” or “never” be one way or another.  Even if this were true, it wouldn’t negate the fact that one day modern humans will be replaced by a superseding species which has different qualities than what we have now.

2. The Furies

“…Existentialism, as we have seen, seeks to bring the whole man-the concrete individual in the whole context of his everyday life, and in his total mystery and questionableness-into philosophy.”

This is certainly an admirable goal to combat simplistic abstractions of what it means to be human.  We shouldn’t expect to be able to abstract a certain capacity of human beings (such as rationality), consider it in isolation (no consideration of context), formulate a theory around that capacity, and then expect to get a result that is applicable to human beings as they actually exist in the world.  Furthermore, all of the uncertainties and complexities in our human experiences, no matter how difficult they may be to define or describe, should be given their due consideration in any comprehensive philosophical system.

“In modern philosophy particularly (philosophy since Descartes), man has figured almost exclusively as an epistemological subject-as an intellect that registers sense-data, makes propositions, reasons, and seeks the certainty of intellectual knowledge, but not as the man underneath all this, who is born, suffers, and dies…But the whole man is not whole without such unpleasant things as death, anxiety, guilt, fear and trembling, and despair, even though journalists and the populace have shown what they think of these things by labeling any philosophy that looks at such aspects of human life as “gloomy” or “merely a mood of despair.”  We are still so rooted in the Enlightenment-or uprooted in it-that these unpleasant aspects of life are like Furies for us: hostile forces from which we would escape (the easiest way is to deny that the Furies exist).”

My take on all of this is simply that multiple descriptions of human existence are needed to account for all of our experiences, thoughts, values, and behavior.  And it is what we value given our particular subjectivity that needs to be primary in these descriptions, and primary with respect to how we choose to engineer the world we live in.  Who we are as a species is a complex mixture of good and bad, lightness and darkness, and stability and chaos; and we shouldn’t deny any of these attributes nor repress them simply because they make us uncomfortable.  Instead, we would do much better to face who we are head on, and then try to make our lives better while taking our limitations into account.

“We are the children of an enlightenment, one which we would like to preserve; but we can do so only by making a pact with the old goddesses.  The centuries-long evolution of human reason is one of man’s greatest triumphs, but it is still in process, still incomplete, still to be.  Contrary to the rationalist tradition, we now know that it is not his reason that makes man man, but rather that reason is a consequence of that which really makes him man.  For it is man’s existence as a self-transcending self that has forged and formed reason as one of its projects.”

This is well said, although I’d prefer to couch this in different terms: it is ultimately our capacity to imagine that makes us human, able to transcend our present selves by pointing toward a future self and a future world.  We do this in part by updating our models of the world, simulating new worlds, and trying to better understand the world we live in by engaging with it, re-shaping it, and finding ways of better predicting its causal structure.  Reason and rationality have been integral in updating our models of the world, but they’ve also been hijacked to some degree by a kind of supernormal stimulus, reinforced by technology and by our living in a world that is entirely alien to our evolutionary niche, which has largely dominated our lives in a way that overlooks our individualism, emotions, and feelings.

Abandoning reason and rationality is certainly not the answer to this existential problem; rather, we just need to ensure that they’re being applied in a way that aligns with our moral imperatives and that they’re ultimately grounded on our subjective human psychology.

Irrational Man: An Analysis (Part 3, Chapter 9: Heidegger)

In my previous post of this series on William Barrett’s Irrational Man, I explored some of Nietzsche’s philosophy in more detail.  Now we’ll be taking a look at the work of Martin Heidegger.

Heidegger was perhaps the most complex of the existentialist philosophers: his work is often interpreted in a number of different ways, and he develops a lot of terminology and concepts that can be difficult to grasp.  There’s also a fundamental limit to fully engaging with the concepts he uses unless one is fluent in German, given what is otherwise lost in translation to English.

Phenomenology, or the study of the structures of our conscious experience, was a big part of Heidegger’s work and had a big influence on his distinction between what he called “calculative thought” and “meditative thought”, and Barrett alludes to this distinction when he first mentions Heidegger’s feelings on thought and reason.  Heidegger says:

“Thinking only begins at the point where we have come to know that Reason, glorified for centuries, is the most obstinate adversary of thinking.”

Now, at first one might be confused by the assertion that reason is somehow against thinking, but if one understands that Heidegger is only making reference to one of the two types of thinking outlined above, then we can begin to make sense of what he’s saying.  Calculative thought is what he has in mind here: the kind of thinking we use all the time for planning and investigating in order to accomplish some specific purpose and achieve our goals.  Meditative thought, on the other hand, involves opening up one’s mind such that one isn’t consumed by a single perspective on an idea; it requires us to not limit our thinking to only one category of ideas, and to at least temporarily free our thinking from the kind of unity that makes everything seemingly fit together.

Meaning is more or less hidden behind calculating thought, and thus meditative thought is needed to help uncover that meaning.  Calculative thinking also involves a lot of external input from society and the cultural or technological constructs that our lives are embedded in, whereas meditative thinking is internal, prioritizing the self over the collective and creating a self-derived form of truth and meaning rather than one that is externally imposed on us.  By thinking about reason’s relation to what Heidegger is calling meditative thought in particular, we can understand why Heidegger would treat reason as an adversary to what he sees as a far more valuable form of “thinking”.

It would be mistaken, however, to interpret this as Heidegger being some kind of irrationalist, because Heidegger still values thinking even though he redefines what kinds of thinking exist:

“Heidegger is not a rationalist, because reason operates by means of concepts, mental representations, and our existence eludes these.  But he is not an irrationalist either.  Irrationalism holds that feeling, or will, or instinct are more valuable and indeed more truthful than reason-as in fact, from the point of view of life itself, they are.  But irrationalism surrenders the field of thinking to rationalism and thereby secretly comes to share the assumptions of its enemy.  What is needed is a more fundamental kind of thinking that will cut under both opposites.”

Heidegger’s intention seems to involve carving out a space for thinking that connects to the most basic elements of our experience, and which connects to the grounding for all of our experience.  He thinks that we’ve almost completely submerged our lives in reason or calculative thinking and that this has caused us to lose our connection with what he calls Being, and this is analogous to Kierkegaard’s claim of reason having estranged us from faith:

“Both Kierkegaard and Nietzsche point up a profound dissociation, or split, that has taken place in the being of Western man, which is basically the conflict of reason with the whole man.  According to Kierkegaard, reason threatens to swallow up faith; Western man now stands at a crossroads forced to choose either to be religious or to fall into despair…Now, the estrangement from Being itself is Heidegger’s central theme.”

Where or when did this estrangement from our roots first come to be?  Many may be tempted to say it was coincident with the onset of modernity, but perhaps it happened long before that:

“Granted that modern man has torn himself up by his roots, might not the cause of this lie farther back in his past than he thinks?  Might it not, in fact, lie in the way in which he thinks about the most fundamental of all things, Being itself?”

At this point, it’s worth looking at Heidegger’s overall view of man, and see how it might be colored by his own upbringing in southern Germany:

“The picture of man that emerges from Heidegger’s pages is of an earth-bound, time-bound, radically finite creature-precisely the image of man we should expect from a peasant, in this case a peasant who has the whole history of Western philosophy at his fingertips.”

So Heidegger was in an interesting position by wanting to go back to one of the most basic questions ever asked in philosophy, but to do so with the knowledge of the philosophy that had been developed ever since, hopefully giving him a useful outline of what went wrong in that development.

1. Being

“He has never ceased from that single task, the “repetition” of the problem of Being: the standing face to face with Being as did the earliest Greeks.  And on the very first pages of Being and Time he tells us that this task involves nothing less than the destruction of the whole history of Western ontology-that is, of the way the West has thought about Being.”

Sometimes the only way to make progress in philosophy is through a paradigm shift of some kind, where the most basic assumptions underlying our theories of the world and our existence are called into question; and we may end up having to completely discard what we thought we knew and start over, in order to follow a new path and discover a new way of looking at the problem we first began with.

For Heidegger, the problem all comes down to how we look at beings versus being (or Being), and in particular how little attention we’ve given the latter:

“Now, it is Heidegger’s contention that the whole history of Western thought has shown an exclusive preoccupation with the first member of these pairs, with the thing-which-is, and has let the second, the to-be of what is, fall into oblivion.  Thus that part of philosophy which is supposed to deal with Being is traditionally called ontology-the science of the thing-which-is-and not einai-logy, which would be the study of the to-be of Being as opposed to beings…What it means is nothing less than this: that from the beginning the thought of Western man has been bound to things, to objects.”

And it’s the priority we’ve given to positing and categorizing various types of beings that has distracted us from really contemplating the grounding for any and all beings, Being itself.  It’s more or less been taken for granted, or seen as too tenuous or insubstantial to give any attention to.  We can attribute our lack of attention toward Being, at least in part, to how we’ve understood it as contingent on the existence of particular objects:

“Once Being has been understood solely in terms of beings, things, it becomes the most general and empty of concepts: ‘The first object of the understanding,’ says St. Thomas Aquinas, ‘that which the intellect conceives when it conceives of anything.’ “

Aquinas seems to have pointed out the obvious here: we can’t easily think of the concept of Being without thinking of beings, and once we’ve thought of beings, we tend to think of their being as adding nothing useful to our conception of the beings themselves (similar to Kant’s claim that existence is not a real predicate, adding nothing to our concept of a thing).  Heidegger wants to turn this common assumption on its head:

“Being is not an empty abstraction but something in which all of us are immersed up to our necks, and indeed over our heads.  We all understand the meaning in ordinary life of the word ‘is,’ though we are not called upon to give a conceptual explanation of it.  Our ordinary human life moves within a preconceptual understanding of Being, and it is this everyday understanding of Being in which we live, move, and have our Being that Heidegger wants to get at as a philosopher.”

Since we live our lives with an implicit understanding of Being, which permeates everything in our experience, it seems reasonable that we can turn our attention toward it and build up a more explicit conceptualization or understanding of it.  To do this carefully, without building in too many abstract assumptions or uncertain inferences, we need to make special use of phenomenology.  Heidegger turns to Edmund Husserl’s work to get this project off the ground.

2. Phenomenology and Human Existence

“Instead of making intellectual speculations about the whole of reality, philosophy must turn, Husserl declared, to a pure description of what is…Heidegger accepts Husserl’s definition of phenomenology: he will attempt to describe, he says, and without any obscuring preconceptions, what human existence is.”

And this means that we need to try to analyze our experience without importing the plethora of assumptions and ideas we’ve been taught about our existence.  We have to attempt a kind of phenomenological reduction, suspending our judgment about the way the world works, which includes leaving aside (at least temporarily) most if not all of science and its various theories pertaining to how reality is structured.

Heidegger also makes many references to language in his work and, perhaps sharing a common thread with Wittgenstein’s views, dissects language to get at what we really mean by certain words, revealing important nuances that we take for granted and showing how our words and concepts constrain our thinking to some degree:

“Heidegger’s perpetual digging at words to get at their hidden nuggets of meaning is one of his most exciting facets…(The thing for itself) will reveal itself to us, he says, only if we do not attempt to coerce it into one of our ready-made conceptual strait jackets.”

Truth is another concept that needs to be reformulated for Heidegger.  He first points out that the Greek word for truth, aletheia, means “un-hiddenness” or “revelation”, and so he believes that truth lies in uncovering what has been hidden from us.  This goes against our usual view of “truth”, where we most often think of it as a status ascribed to propositions that correspond to actual facts about the world.  Since propositions can’t exist without minds, modern conceptions of truth are limited in a number of ways, which Heidegger wants to find a way around:

“…truth is therefore, in modern usage, to be found in the mind when it has a correct judgment about what is the case.  The trouble with this view is that it cannot take account of other manifestations of truth.  For example, we speak of the “truth” of a work of art.  A work of art in which we find truth may actually have in it no propositions that are true in this literal sense.  The truth of a work of art is in its being a revelation, but that revelation does not consist in a statement or group of statements that are intellectually correct.  The momentous assertion that Heidegger makes is that truth does not reside primarily in the intellect, but that, on the contrary, intellectual truth is in fact a derivative of a more basic sense of truth.”

This more basic sense of truth will be explored more later on, but the important takeaway is that truth for Heidegger seems to involve the structure of our experience and of the beings within that experience; and it lends itself toward a kind of epistemological relativism, depending on how a human being interprets the world they’re interacting with.

On the other hand, if we think about truth as a matter of deciding what we should trust or doubt in terms of what we’re experiencing, we may wind up in a position of solipsism similar to Descartes’, where we come to doubt the “external world” altogether:

“…Descartes’ cogito ergo sum moment, is the point at which modern philosophy, and with it the modern epoch, begins: man is locked up in his own ego.  Outside him is the doubtful world of things, which his science has taught him are really not the least like their familiar appearances.  Descartes got the external world back through a belief in God, who in his goodness would not deceive us into believing that this external world existed if it really did not.  But the ghost of subjectivism (and solipsism too) is there and haunts the whole of modern philosophy.”

While Descartes pulled himself out of solipsism through the added assumption of another (benevolent) mind responsible for any and all experience, I think there’s a far easier and less ad hoc solution to this problem.  First, we need to realize that we wouldn’t be able to tell the difference between an experience of an artificial or illusory external reality and one that was truly real, independent of our conscious experience; this means that solipsism is entirely indeterminable, and so it’s almost pointless to worry about.  Second, we don’t feel that we are the sole authors of every aspect of our experience anyway; if we were, nothing in our experience would ever be unpredictable or uncertain, and we’d know everything that will ever happen before it happens, without so much as an inkling of surprise or confusion.

This is most certainly not the case, which means we can be confident that we are not the sole authors of our experience, and therefore there really is a reality or a causal structure that is independent of our own will and expectations.  So it’s conceptually simpler to define reality in the broadest sense of the term: as nothing more or less than what we experience at any point in time.  And because we experience what appear to be other beings just like ourselves, each appearing to have their own experiences of reality, it’s more natural and intuitive to treat them as such, regardless of whether they’re really figments of our imagination or products of some unknown higher-level reality like The Matrix.

Heidegger does something similar here, where he views our being in the “external” world as essential to who we are, and thus he doesn’t think we can separate ourselves from the world, like Descartes himself did as a result of his methodological skepticism:

“Heidegger destroys the Cartesian picture at one blow: what characterizes man essentially, he says, is that he is Being-in-the-world.  Leibniz had said that the monad has no windows; and Heidegger’s reply is that man does not look out upon an external world through windows, from the isolation of his ego: he is already out-of-doors.  He is in the world because, existing, he is involved in it totally.”

This is fundamentally different from the traditional conception of an ego or self that may or may not exist in a world; within this conception of human beings, one can’t treat the ego as separable from the world it’s embedded in.  One way to look at this is to imagine being separated from the world forever, and then to ask whether the mode of living that remains (if any) is still fundamentally human.  It’s an interesting thought experiment to imagine ourselves as a disembodied mind “residing” within an infinite void of nothingness; but if this were really for eternity, and not simply for a short while with the knowledge that we could later return to the world we left behind, then we’d have no more interactions or relations with any objects or other beings, and no mode of living at all.  I think it’s perfectly reasonable to conclude that in this case we would have lost something we need in order to truly be a human being.  Anyone who fails to realize this has simply failed to grasp the conditions called for in the thought experiment.

Heidegger drives this point further by describing existence, or more appropriately our Being, as analogous to a non-localized field in theoretical physics:

“Existence itself, according to Heidegger, means to stand outside oneself, to be beyond oneself.  My Being is not something that takes place inside my skin (or inside an immaterial substance inside that skin); my Being, rather, is spread over a field or region which is the world of its care and concern.  Heidegger’s theory of man (and of Being) might be called the Field Theory of Man (or the Field Theory of Being) in analogy with Einstein’s Field Theory of Matter, provided we take this purely as an analogy; for Heidegger would hold it a spurious and inauthentic way to philosophize to derive one’s philosophic conclusions from the highly abstract theories of physics.  But in the way that Einstein took matter to be a field (a magnetic field, say)- in opposition to the Newtonian conception of a body as existing inside its surface boundaries-so Heidegger takes man to be a field or region of Being…Heidegger calls this field of Being Dasein.  Dasein (which in German means Being-there) is his name for man.”

So we can see our state of Being (which Heidegger calls Dasein) as one that’s spread out over a number of different interactions and relations we have with other beings in the world.  In some sense we can think of the sum total of all the actions, goals, and beings that we concern ourselves with as constituting our entire field of Being.  And each being, especially each human being, shares this general property of Being even though each of those beings will be embedded within their own particular region of Being with their own set of concerns (even if these sets overlap in some ways).

What I find really fascinating is Heidegger’s dissolution of the subject-object distinction:

“That Heidegger can say everything he wants to say about human existence without using either “man” or “consciousness” means that the gulf between subject and object, or between mind and body, that has been dug by modern philosophy need not exist if we do not make it.”

I mentioned Wittgenstein earlier because I think Heidegger’s philosophy has a few parallels with his.  For instance, Wittgenstein treats language and meaning as inherently connected to how we use words and concepts, and thus as dependent on the contextual relations between how or what we communicate and what our goals are, as opposed to treating words like categorical objects with strict definitions and well-defined semantic boundaries.  For Wittgenstein, language can’t be separated from our use of it in the world, even though we often treat it as if it can be (and assuming it can is what often creates problems in philosophy); and this is similar to how Heidegger describes us human beings as inherently inseparable from our extension in the world, as manifested in our many relations and concerns within that world.

Heidegger’s reference to Being as some kind of field or region of concern can be illustrated well by considering how a child comes to learn their own name (or to respond to it anyway):

“He comes promptly enough at being called by name; but if asked to point out the person to whom the name belongs, he is just as likely to point to Mommy or Daddy as to himself-to the frustration of both eager parents.  Some months later, asked the same question, the child will point to himself.  But before he has reached that stage, he has heard his name as naming a field or region of Being with which he is concerned, and to which he responds, whether the call is to come to food, to mother, or whatever.  And the child is right.  His name is not the name of an existence that takes place within the envelope of his skin: that is merely the awfully abstract social convention that has imposed itself not only on his parents but on the history of philosophy.  The basic meaning the child’s name has for him does not disappear as he grows older; it only becomes covered over by the more abstract social convention.  He secretly hears his own name called whenever he hears any region of Being named with which he is vitally involved.”

This is certainly an interesting interpretation of this facet of child development.  We might presume that the current social conventions, which eventually attach our name to our body, arise out of a number of pragmatic considerations, such as wanting to provide a label for each conscious agent that makes decisions and behaves in ways that affect other conscious agents.  And because our regions of Being overlap in some ways but not in others (i.e. some of my concerns, though not all of them, are your concerns too), it also makes sense to differentiate between all the different “sets of concerns” that comprise each human being we interact with.  But the way we tend to do this is by abstracting those sets of concerns, localizing them in time and space as “mine” and “yours”, rather than treating them the way they truly exist in the world.

Nevertheless, we could (in principle anyway) use an entirely different convention where a person’s name is treated as a representation of a non-localized set of causal structures and concerns, almost as if their body extended in some sense to include all of this, where each concern or facet of their life is like a dynamic appendage emanating out from their most concentrated region of Being (the body itself).  Doing so would markedly change the way we see one another, the way we see objects, etc.

Perhaps our state of consciousness can mislead us into forgetting about the non-locality of our field of Being, because our consciousness is spatially constrained to one locus of experience; and perhaps this is the reason why philosophers going all the way back to the Greeks have linked existence directly with our propensity to reflect in solitude.  But this solitary reflection also misleads us into forgetting the average “everydayness” of our existence, where we are intertwined with our community like a sheep directed by the rest of the herd:

“…none of us is a private Self confronting a world of external objects.  None of us is yet even a Self.  We are each simply one among many; a name among the names of our schoolfellows, our fellow citizens, our community.  This everyday public quality of our existence Heidegger calls “the One.”  The One is the impersonal and public creature whom each of us is even before he is an I, a real I.  One has such-and-such a position in life, one is expected to behave in such-and-such a manner, one does this, one does not do that, etc.  We exist thus in a state of “fallen-ness”, according to Heidegger, in the sense that we are as yet below the level of existence to which it is possible for us to rise.  So long as we remain in the womb of this externalized and public existence, we are spared the terror and the dignity of becoming a Self.”

Heidegger’s concept of fallen-ness illustrates our primary mode of living: sometimes we think of ourselves as individuals striving toward our personal, self-authored goals, but most of the time we’re succumbing to society’s expectations of who we are, what we should become, and how we ought to judge the value of our own lives.  We shouldn’t be surprised by this fact, as we’ve evolved as a social species, which means our inclination toward a herd mentality is in some sense instinctual and therefore becomes the path of least resistance.  But because of our complex cognitive and behavioral versatility, we can also consciously rise above this collective limitation; and even when we don’t strive to, our position within the safety of the collective is sometimes torn away from us by the chaos that rains down on us from life’s many contingencies:

“But…death and anxiety (and such) intrude upon this fallen state, destroy our sheltered position of simply being one among many, and reveal to us our own existence as fearfully and irremediably our own.  Because it is less fearful to be “the One” than to be a Self, the modern world has wonderfully multiplied all the devices of self-evasion.”

It is only by rising out of this state of fallenness, that is, by our rescuing some measure of true individuality, that Heidegger thinks we can live authentically.  But in any case:

“Whether it be fallen or risen, inauthentic or authentic, counterfeit copy or genuine original, human existence is marked by three general traits: 1) mood or feeling; 2) understanding; 3) speech.  Heidegger calls these existentialia and intends them as basic categories of existence (as opposed to more common ones like quantity, quality, space, time, etc.).”

Heidegger’s use of these traits in describing our existence shouldn’t be confused with some set of internal mental states; rather, we need to include them within a conception of Being or Dasein as a field that permeates the totality of our existence.  So when we consider the first trait, mood or feeling, we should think of each human being as being a mood rather than simply having a mood.  According to Heidegger, it is through moods that we feel a sense of belonging to the world; thus, if we didn’t have any mood at all, we wouldn’t find ourselves in a world at all.

It’s also important to understand that he doesn’t see moods as some kind of state of mind as we would typically take them to be, but rather that all states of mind presuppose this sense of belonging to a world in the first place; they presuppose a mood in order to have the possibilities for any of those states of mind.  And as Barrett mentions, not all moods are equally important for Heidegger:

“The fundamental mood, according to Heidegger, is anxiety (Angst); he does not choose this as primary out of any morbidity of temperament, however, but simply because in anxiety this here-and-now of our existence arises before us in all its precarious and porous contingency.”

It seems that he finds anxiety to be fundamental in part because it is a mood that opens us up to see Being for what it really is: a field of existence centered around the individual rather than society and its norms and expectations.  This perspective largely results from the fact that when we’re in anxiety, all practical significance melts away and the world as it was no longer seems relevant.  Things that we took for granted become salient, and there’s a kind of breakdown in our sense of everyday familiarity.  When what used to be significant and insignificant reverse roles in this mood, we can no longer misinterpret ourselves as some kind of entity within the world (the world of “Them”; the world of externally imposed identity and essence); instead, we finally see ourselves as within, or identical with, our own world.

Heidegger also seems to see anxiety as always there, even though it may be covered up (so to speak).  Perhaps it’s hiding most of the time because the fear of being a true Self, and of facing our individuality with the finitude, contingency, and responsibility that go along with it, is often too much for one to bear; so we blind ourselves by shifting into a number of other, less fundamental moods which enhance the significance of the usual day-to-day world, even if the possibility for Being to “reveal itself” to us is just below the surface.

If we think of moods as a way of our feeling a sense of belonging within a world, where they effectively constitute the total range of ways that things can have significance for us, then we can see how our moods ultimately affect and constrain our view of the possibilities that our world affords us.  And this is a good segue to consider Heidegger’s second trait of Dasein or human Being, namely that of understanding.

“The “understanding” Heidegger refers to here is not abstract or theoretical; it is the understanding of Being in which our existence is rooted, and without which we could not make propositions or theories that can claim to be “true”.  Whence comes this understanding?  It is the understanding that I have by virtue of being rooted in existence.”

I think Heidegger’s concept of understanding (verstehen) is more or less an intuitive sense of knowing how to manipulate the world in order to make use of it for some goal or other.  By having certain goals, we see the world and the entities in that world colored by the context of those goals.  What we desire then will no doubt change the way the world appears to us in terms of what functionality we can make the most use of, what tools are available to us, and whether we see something as a tool or not.  And even though our moods may structure the space of possibilities that we see the world present to us, it is our realizing what these possibilities are that seems to underlie this form of understanding that Heidegger’s referring to.

We also have no choice but to use our past experience to interpret the world, to determine what we find significant and meaningful, and to further drive the evolution of our understanding; and of course this process feeds back on itself, changing our interpretation of the world, which changes our dynamic understanding yet again.  And here we might make the distinction between an understanding of the possible uses for various objects and processes, and the specific interpretation of which possibilities are relevant to the task at hand, thereby making use of that understanding.

If I’m sitting at my desk with a cup of coffee and a stack of papers in front of me that need to be sorted, my understanding of these entities affords me a number of possibilities depending on my specific mode of Being or what my current concerns are at the present moment.  If I’m tired or thirsty or what-have-you I may want to make use of that cup of coffee to drink from it, but if the window is open and the wind is starting to blow the stack of papers off my desk, I may decide to use my cup of coffee as a paper weight instead of a vessel to drink from.  And if a fly begins buzzing about while I’m sorting my papers and starts to get on my nerves, distracting me from the task at hand, I may decide to temporarily roll up my stack of papers and use it as a fly swatter.  My understanding of all of these objects in terms of their possibilities allows me to interpret an object as a particular possibility that fits within the context of my immediate wants and needs.

I like to think of understanding and interpretation, whether in the traditional sense or in Heidegger’s terms, as ultimately based on our propensity to make predictions about the world and its causal structure.  As a proponent of the predictive processing framework of brain function, I see brains as prediction generators, organizing information in order to predict the causal structure of the world we interact with; and this is done at many different levels of abstraction, so we can think of ourselves as making use of many smaller-scale modules of understanding as well as various larger-scale assemblies of those individual modules.  In some cases the larger-scale assemblies constrain which smaller-scale modules can be used, and in other cases the smaller-scale modules are prioritized, which constrains or directs the shape of the larger-scale assemblies.

Another way to say this is that our brains make use of a number of lower-level and higher-level predictions (any of which may change over time based on new experiences or desires), and depending on the context we find ourselves in, one level may have more or less influence on another.  My lowest-level predictions may include things like what each “pixel” in my visual field is going to be from moment to moment; a slightly higher-level prediction may include what a specific coffee cup looks like, its shape, weight, etc.; higher still may be predictions of what coffee cups in general look like, their range of shapes, weights, etc.; and eventually we get to higher levels of prediction that may include what we expect an object can be used for.
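To make this hierarchical picture a bit more concrete, here’s a minimal toy sketch in Python of my own devising.  It isn’t anything from Barrett or Heidegger, and it’s only a caricature of real predictive processing models, so all the names, sizes, and numbers in it are arbitrary assumptions; the structural idea is just that each tier holds an expectation about the tier below it, and only the prediction errors, the “surprise”, are passed upward to revise the higher-tier expectations.

import numpy as np

rng = np.random.default_rng(0)

# Two higher tiers of expectation; the raw sensory input (size 16, think
# "pixels") plays the role of the lowest tier.
levels = [np.zeros(8), np.zeros(4)]

# Fixed random generative weights: each tier predicts the tier below it.
# (Scaled down so the error-minimization loop below stays stable.)
weights = [rng.normal(size=(16, 8)) / 4.0, rng.normal(size=(8, 4)) / 3.0]

def predictive_step(sensory_input, levels, weights, lr=0.1):
    """One pass of prediction-error minimization up the hierarchy."""
    errors = []
    below = sensory_input
    for i, W in enumerate(weights):
        prediction = W @ levels[i]       # this tier predicts the one below it
        error = below - prediction       # only the surprise propagates upward
        levels[i] += lr * (W.T @ error)  # nudge this tier to reduce its error
        errors.append(error)
        below = levels[i]
    return errors

# Feed in a "scene"; repeated passes shrink the prediction errors, i.e. the
# hierarchy settles on an interpretation of what it is experiencing.
scene = rng.normal(size=16)
for _ in range(200):
    errors = predictive_step(scene, levels, weights)
print("remaining low-level error:", float(np.linalg.norm(errors[0])))

The only point of the sketch is structural: the settled states of the higher tiers are the “interpretation”, and different concerns (a different scene, or different weights) would yield a different interpretation of the very same input.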

All of these predictions taken as a whole constitute our understanding of the world and the specific sets of predictions that come into play at any point in time, depending on our immediate concerns and where our attention is directed, constitute our interpretation of what we’re presently experiencing.  Barrett points to the fact that what we consider to be truth in the most primitive sense is this intuitive sense of understanding:

“Truth and Being are thus inseparable, given always together, in the simple sense that a world with things in it opens up around man the moment he exists.  Most of the time, however, man does not let himself see what really happens in seeing.”

This process of the world opening itself up to us, such that we have an immediate sense of familiarity with it, is a very natural and automated process and so unless we consciously pull ourselves out of it to try and externally reflect on this situation, we’re just going to take our understanding and interpretation for granted.  But regardless of whether we reflect on it or not, this process of understanding and interpretation is fundamental to how we exist in the world, and thus it is fundamental to our Being in the world.

Speech is the last trait of Dasein that Heidegger mentions, and it is intimately connected with the other two traits, mood and understanding.  So what exactly is he referring to by speech?

“Speech:  Language, for Heidegger, is not primarily a system of sounds or of marks on paper symbolizing those sounds.  Sounds and marks upon paper can become language only because man, insofar as he exists, stands within language.”

This is extremely reminiscent of Wittgenstein once again, since Wittgenstein views language in terms of how it permeates our day-to-day lives within a community or social group of some kind.  Language isn’t some discrete set of symbols with fixed meanings, but rather a much more dynamic, context-dependent communication tool.  We primarily use language to accomplish some specific goal or other in our day-to-day lives, and this is a fundamental aspect of how we exist as human beings.  Because language is attached to its use, the meaning of any word or utterance (if any) is derived from its use as well.  Sounds and marks on paper are meaningless unless we first ascribe a meaning to them based on how they’ve been used in the past and how they’re being used in the present.

And as soon as we pick up language during our childhood development, it becomes automatically associated with our thought processes: we begin to think in our native language, and both real perceptions and figments of our imagination are instantly attached to some linguistic label in our heads.  If I showed you a picture of an elephant, the word elephant would no doubt enter your stream of consciousness, even without my saying the word or asking you what animal you saw in the picture.  Whether our thinking linguistically comes about as a result of having learned a specific language, or whether it’s an innate feature of how our brains function, language permeates our thought as well as our interactions with others in the world.

While we tend to think of language as involving words, sounds, and so forth, Heidegger has an interesting take on what else falls under the umbrella of language:

“Two people are talking together.  They understand each other, and they fall silent-a long silence.  This silence is language; it may speak more eloquently than any words.  In their mood they are attuned to each other; they may even reach down into that understanding which, as we have seen above, lies below the level of articulation.  The three-mood, understanding, and speech (a speech here that is silence)-thus interweave and are one.”

I’m not sure if Heidegger is only referring to silence itself here or if he’s also including body language, which often continues even after we’ve stopped talking and which sometimes speaks better than verbal language ever could.  In silence, we often get a chance to simply read another person in a much more basic and instinctual way, ascertaining their mood, and allowing any language that may have recently transpired to be interpreted differently than if a long silence had never occurred.  Heidegger refers to what happens in silence as an attunement between fellow human beings:

“…Nor is this silence merely a gap in our chatter; it is, rather, the primordial attunement of one existent to another, out of which all language-as sounds, marks, and counters-comes.  It is only because man is capable of such silence that he is capable of authentic speech.  If he ceases to be rooted in that silence all his talk becomes chatter.”

We may, however, also want to consider silence when it is accompanied by action, since our actions speak louder than words, and we gain much of our information this way anyway, simply by watching another person do something in the world.  And language can only work within a context of mutual understanding, which necessarily involves action (at some point), so perhaps we could say that action at least partially constitutes the “silence” that Heidegger refers to.

Heidegger’s entire “Field Theory” of Being relies on context, and Barrett mentions this as well:

“…we might just as well call it a contextual theory of Being.  Being is the context in which all beings come to light-and this means those beings as well that are sounds or marks on paper…Men exist “within language” prior to their uttering sounds because they exist within a mutual context of understanding, which in the end is nothing but Being itself.”

I think that we could interpret this as saying that in order for us to be able to interact with one another and with the world, there’s a prerequisite context of meaning, of what matters to us, already in play for those interactions to be possible in the first place.  In other words, whether we’re talking to somebody or simply trying to cook our breakfast, all of our actions presuppose some background of understanding and a sense of what’s significant to us.

3. Death, Anxiety, Finitude

Heidegger views the concept of death as an important one especially as it relates to a proper understanding of the concept of Being, and he also sees the most common understanding of death as fundamentally misguided:

“The authentic meaning of death-“I am to die”-is not as an external and public fact within the world, but as an internal possibility of my own Being.  Nor is it a possibility like a point at the end of a road, which I will in time reach.  So long as I think in this way, I still hold death at a distance outside myself.  The point is that I may die at any moment, and therefore death is my possibility now…Hence, death is the most personal and intimate of possibilities, since it is what I must suffer for myself: nobody else can die for me.”

I definitely see the merit in viewing death as an imminent possibility in order to fully grasp its implications for how we should be living our lives.  But most people tend to consider death in very abstract terms, where it’s only thought about in the periphery as something that will happen “some day” in the future; and I think we owe this abstraction and partial repression of death to our own fear of it.  In many other cases, this fear of death manifests itself in supernatural beliefs that posit life after death, the resurrection of the dead, immortality, and other forms of magic; and importantly, all of these psychologically motivated tactics prevent one from actually accepting the truth about death as an integral part of our existence as human beings.  By denying the truth about death, one is denying the true value of the life one actually has.  On the other hand, by accepting death as an extremely personal and intimate attribute of ourselves, we can live more authentically.

We can also see that by viewing death in more personal terms, we are pulled out of the world of “Them”, the collective world of social norms and externalized identities, and better able to connect with our sense of being a Self:

“Only by taking my death into myself, according to Heidegger, does an authentic existence become possible for me.  Touched by this interior angel of death, I cease to be the impersonal and social One among many…and I am free to become myself.”

Thinking about death may make us uncomfortable, but if I know I’m going to die, and could die at any moment, then shouldn’t I prioritize my personal endeavors to reflect this radical contingency?  It’s hard to argue with that conclusion, but even so, it’s understandably easy to get lost in our mindless day-to-day routines and superficial concerns, and then we simply lose sight of what we ought to be doing with our limited time.  It’s important to break out of this pattern, or at least to strive to, and in doing so we can begin to center our lives around truly personal goals and projects:

“Though terrifying, the taking of death into ourselves is also liberating: It frees us from servitude to the petty cares that threaten to engulf our daily life and thereby opens us to the essential projects by which we can make our lives personally and significantly our own.  Heidegger calls this the condition of “freedom-toward-death” or “resoluteness.”

The fear of death is also unique and telling in the sense that it isn’t really directed at any object but rather it is directed at Nothingness itself.  We fear that our lives will come to an end and that we’ll one day cease to be.  Heidegger seems to refer to this disposition as anxiety rather than fear, since it isn’t really the fear of anything but rather the fear of nothing at all; we just treat this nothing as if it were an object:

“Anxiety is not fear, being afraid of this or that definite object, but the uncanny feeling of being afraid of nothing at all.  It is precisely Nothingness that makes itself present and felt as the object of our dread.”

Accordingly, the concept of Nothingness is integral to the concept of Being, since it’s interwoven in our existence:

“In Heidegger Nothingness is a presence within our own Being, always there, in the inner quaking that goes on beneath the calm surface of our preoccupation with things.  Anxiety before Nothingness has many modalities and guises: now trembling and creative, now panicky and destructive; but always it is as inseparable from ourselves as our own breathing because anxiety is our existence itself in its radical insecurity.  In anxiety we both are and are not, at one and the same time, and this is our dread.”

What’s interesting about this perspective is the apparent asymmetry between existence and non-existence, at least insofar as it relates to human beings.  Non-existence becomes a principal concern for us as a result of our kind of existence, but non-existence itself carries with it no concerns at all.  In other words, non-existence doesn’t refer to anything at all, whereas (human) existence refers to everything as well as to nothing (non-existence).  And it seems to me that anxiety and our relation to Nothingness depend entirely on our inherent capacity for imagination: we use imagination to simulate possibilities, and if this imagination is capable of rendering the possibility of our own existence coming to an end, then that possibility may forcefully reorganize the relative significance of all other products of our imagination.

As confusing as it may sound, death is the possibility of ending all possibilities for any extant being.  If imagining such a possibility were not enough to fundamentally change one’s perspective of their own life, then I think that combining it with the knowledge of its inevitability, that this possibility is also a certainty, ought to.  Another interesting feature of our existence is the fact that death isn’t the only possibility that relates to Nothingness, for anytime we imagine a possibility that has not yet been realized or even if we conceive of an actuality that has been realized, we are making reference to that which is not; we are referring to what we are not, to what some thing or other is not and ultimately to that which does not exist (at a particular time and place).  I only know how to identify myself, or any object in the world for that matter, by distinguishing it from what it is not.

“Man is finite because the “not”-negation-penetrates the very core of his existence.  And whence is this “not” derived?  From Being itself.  Man is finite because he lives and moves within a finite understanding of Being.”

As we can see, Barrett describes how within Heidegger’s philosophy negation is treated as something that we live and move within and this makes sense when we consider what identity itself is dependent on.  And the mention of finitude is interesting because it’s not simply a limitation in the sense of what we aren’t capable of (e.g. omnipotence, immortality, perfection, etc.), but rather that our mode of existence entails making discriminations between what is and what isn’t, between what is possible and what is not, between what is actual and what is not.

4. Time and Temporality; History

The last section on Heidegger concerns the nature of time itself, and it begins by pointing out the relation between negation and our experience of time:

“Our finitude discloses itself essentially in time.  In existing…we stand outside ourselves at once open to Being and in the open clearing of Being; and this happens temporally as well as spatially.  Man, Heidegger says, is a creature of distance: he is perpetually beyond himself, his existence at every moment opening out toward the future.  The future is the not-yet, and the past is the no-longer; and these two negatives-the not-yet and the no-longer-penetrate his existence.  They are his finitude in its temporal manifestation.”

I suppose we could call this projection towards the future the teleological aspect of our Being.  In order to do anything within our existence, we must have a goal or a number of them, and these goals refer to possible future states of existence that can only be confirmed or disconfirmed once our growing past subsumes them as actualities or lost opportunities.  We might even say that the present is somewhat of an illusion (though as far as I know Heidegger doesn’t make this claim) because in a sense we’re always living in the future and in the past, since the person we see ourselves as and the person we want to be are a manifestation of both; whereas the present itself seems to be nothing more than a fleeting moment that only references what’s in front or behind itself.

In addition to this, if we look at how our brain functions from a predictive coding perspective, we can see that it generates predictive models based on our past experiences, and the models it generates are pure attempts to predict the future; nowhere does the present come into play.  The present, as it is experienced by us, is nothing but a prediction of what is yet to come.  I think this is essential to take into account when considering how time is experienced by us and how it should be incorporated into a concept of Being.
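As a throwaway illustration of this point (again my own, and in no way a model of the brain; simple exponential smoothing stands in here for whatever the real predictive machinery does), consider a system whose “current estimate” is computed entirely from past samples, so that its “now” is always a forecast:

def perceived_now(past_samples, alpha=0.5):
    """Estimate the current sample using only past ones (exponential smoothing)."""
    estimate = past_samples[0]
    for sample in past_samples[1:]:
        # Blend each successive past sample into the running estimate.
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

history = [1.0, 1.2, 1.5, 1.9, 2.4]   # everything the system has observed so far
print(perceived_now(history))          # roughly 2.0: a forecast, not the present itself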

Heidegger’s concept of temporality is also tied to the concept of death that we explored earlier.  He sees death as necessary to put our temporal existence into a true human perspective:

“We really know time, says Heidegger, because we know we are going to die.  Without this passionate realization of our mortality, time would be simply a movement of the clock that we watch passively, calculating its advance-a movement devoid of human reasoning…Everything that makes up human existence has to be understood in the light of man’s temporality: of the not-yet, the no-longer, the here-and-now.”

It’s also interesting to consider what I’ve previously referred to as the illusion of persistent identity, an idea that’s been explored by a number of philosophers for centuries: we think of our past self as being the same person as our present or future self.  Now whether the present is actually an illusion or not is irrelevant to this point; the point is that we think of ourselves as always being there in our memories and at any moment of time.  The fact that we are all born into a society that gives us a discrete and fixed name adds to this illusion of our having an identity that is fixed as well.  But we are not the same person that we were ten years ago, let alone the same person we were as a five-year-old child.  We may share some memories with those previous selves, but even those change over time and are often altered and reconstructed upon recall.  Our values, our ontology, our language, our primary goals in life are all changing to varying degrees over time, and we mustn’t lose sight of this fact either.  I think this “dynamic identity” is a fundamental aspect of our Being.

We can help explain how we’re taken in by the illusion of a persistent identity by recalling, once again, that our identity is fundamentally projected toward the future.  Since that projection changes over time (as our goals change), and since it is a process that relies on a kind of psychological continuity, we might expect to simply focus on the projection itself rather than on what that projection used to be.  Heidegger stresses the importance of the future in our concept of Being:

“Heidegger’s theory of time is novel, in that, unlike earlier philosophers with their “nows,” he gives priority to the future tense.  The future, according to him, is primary because it is the region toward which man projects and in which he defines his own being.  “Man never is, but always is to be,” to alter slightly the famous line of Pope.”

But, since we’re always relying on past experiences in order to determine our future projection, to give it a frame of reference with which we can evaluate that projected future, we end up viewing our trajectory in terms of human history:

“All these things derive their significance from a more basic fact: namely, that man is the being who, however dimly and half-consciously, always understands, and must understand, his own being historically.”

And human history, in a collective sense, also plays a role, since our view of ourselves and of the world is unavoidably influenced by the cultural transmission of ideas stemming from our recent past all the way back to thousands of years ago.  A lot of that influence has emanated from the Greeks especially (as Barrett has mentioned throughout Irrational Man), and it’s had a substantial impact on our conceptualization of Being as well:

“By detaching the figure from the ground the object could be made to emerge into the daylight of human consciousness; but the sense of the ground, the environing background, could also be lost.  The figure comes into sharper focus, that is, but the ground recedes, becomes invisible, is forgotten.  The Greeks detached beings from the vast environing ground of Being.  This act of detachment was accompanied by a momentous shift in the meaning of truth for the Greeks, a shift which Heidegger pinpoints as taking place in a single passage in Plato’s Republic, the celebrated allegory of the cave.”

Through the advent of reason and forced abstraction, the Greeks effectively separated the object from the context it’s normally embedded within.  But if a true understanding of our Being in the world can only be known contextually, then we can see why Heidegger finds a crucial problem with this school of thought, which has propagated itself in Western philosophy ever since Plato.  Even the concept of truth itself underwent a dramatic change as a result of the Greeks discovering and developing the process of reason:

“The quality of unhiddenness had been considered the mark of truth; but with Plato in that passage truth came to be defined, rather, as the correctness of an intellectual judgment.”

And this concept of unhiddenness seems to be a variation of “subjective truth” or “subjective understanding”, although it shouldn’t be confused with modern subjectivism; in this case the unhiddenness would naturally precipitate from whatever coherency and meaning is immediately revealed to us through our conscious experience.  I think that the redefining of truth had less of an impact on philosophy than Heidegger may have thought; the problem wasn’t the redefining of truth per se but rather the fact that a contextual understanding and subjective experience itself lost the level of importance they once had.  And I’m sure Heidegger would agree that this perceived loss of importance for both subjectivity and for a holistic understanding of human existence has led to a major shift in our view of the world, and a view that has likely resulted in some of the psychological pathologies sprouting up in our age of modernity.

Heidegger thought that allowing Being to reveal itself to us was analogous to an artist letting the truth of their art reveal itself naturally:

“…the artist, as well as the spectator, must submit patiently and passively to the artistic process, that he must lie in wait for the image to produce itself; that he produces false notes as soon as he tries to force anything; that, in short, he must let the truth of his art happen to him?  All of these points are part of what Heidegger means by our letting Being be.  Letting it be, the artist lets it speak to him and through him; and so too the thinker must let it be thought.”

He seems to be saying that modern ways of thinking are too forceful, in the sense that we always prioritize the subject-object distinction in our way of life.  We’ve let the obsessive organization of our lives, and of the various beings within those lives, drown out our appreciation of, and ability to connect to, Being itself.  Another way to put this is that we’ve split ourselves off from the sense of self that has a strong connection to nature, and this has disrupted our ability to exist in a mode that’s more in tune with our evolutionary psychology, a kind of harmonious path of least resistance.  We’ve become masters at manipulating our environment while simultaneously losing our innate connection to that environment.  There’s certainly a lesson to be learned here, even if the problem is difficult to articulate.

•      •      •      •      •

This concludes William Barrett’s chapter on Martin Heidegger.  I’ve gotta say that I find Heidegger’s views fascinating, as they serve to illustrate a radically different way of viewing the world and human existence; and for those who take his philosophy seriously, they force a re-evaluation of much of the Western philosophical thought that we’ve been taking for granted, and which has constrained a lot of our thinking.  I think it’s both refreshing and worthwhile to explore some of these novel ideas, as they help us see life through an entirely new lens.  We need a shift in our frame of mind, a willingness to think outside the box, so we have a better chance of finding new solutions to many of the problems we face in today’s world.  In the next post in this series on William Barrett’s Irrational Man, I’ll be looking at the philosophy of Jean-Paul Sartre.

Irrational Man: An Analysis (Part 3, Chapter 7: Kierkegaard)

Part III – The Existentialists

In the last post in this series on William Barrett’s Irrational Man, we examined how existentialism was influenced by a number of poets and novelists including several of the Romantics and the two most famous Russian authors, namely Dostoyevsky and Tolstoy.  In this post, we’ll be entering part 3 of Barrett’s book, and taking a look at Kierkegaard specifically.

Chapter 7 – Kierkegaard

Søren Kierkegaard is often considered to be the first existentialist philosopher and so naturally Barrett begins his exploration of individual existentialists here.  Kierkegaard was a brilliant man with a broad range of interests; he was a poet as well as a philosopher, exploring theology and religion (Christianity in particular), morality and ethics, and various aspects of human psychology.  The fact that he was also a devout Christian can be seen throughout his writings, where he attempted to delineate his own philosophy of religion with a focus on the concepts of faith, doubt, and also his disdain for any organized forms of religion.  Perhaps the most influential core of his work is the focus on individuality, subjectivity, and the lifelong search to truly know oneself.  In many ways, Kierkegaard paved the way for modern existentialism, and he did so with a kind of poetic brilliance that made clever use of both irony and metaphor.

Barrett describes Kierkegaard’s arrival in human history as the onset of an ironic form of intelligence intent on undermining itself:

“Kierkegaard does not disparage intelligence; quite the contrary, he speaks of it with respect and even reverence.  But nonetheless, at a certain moment in history this intelligence had to be opposed, and opposed with all the resources and powers of a man of brilliant intelligence.”

Kierkegaard did in fact value science as a methodology and as an enterprise, and he also saw the importance of objective knowledge; but he strongly believed that the most important kind of knowledge or truth was that which was derived from subjectivity; from the individual and their own concrete existence, through their feelings, their freedom of choice, and their understanding of who they are and who they want to become as an individual.  Since he was also a man of faith, this meant that he had to work harder than most to manage his own intellect in order to prevent it from enveloping the religious sphere of his life.  He felt that he had to suppress the intellect at least enough to maintain his own faith, for he couldn’t imagine a life without it:

“His intellectual power, he knew, was also his cross.  Without faith, which the intelligence can never supply, he would have died inside his mind, a sickly and paralyzed Hamlet.”

Kierkegaard saw faith as of the utmost importance to his own period in history as well, since Western civilization had effectively become disconnected from Christianity, unbeknownst to the majority of those living in his time:

“The central fact for the nineteenth century, as Kierkegaard (and after him Nietzsche, from a diametrically opposite point of view) saw it, was that this civilization that had once been Christian was so no longer.  It had been a civilization that revolved around the figure of Christ, and was now, in Nietzsche’s image, like a planet detaching itself from its sun; and of this the civilization was not yet aware.”

Indeed, in Nietzsche’s The Gay Science, we hear of a madman roaming around one morning who makes such a pronouncement to a group of non-believers in a marketplace:

” ‘Where is God gone?’ he called out.  ‘I mean to tell you!  We have killed him, – you and I!  We are all his murderers…What did we do when we loosened this earth from its sun?…God is dead!  God remains dead!  And we have killed him!  How shall we console ourselves, the most murderous of all murderers?…’  Here the madman was silent and looked again at his hearers; they also were silent and looked at him in surprise.  At last he threw his lantern on the ground, so that it broke in pieces and was extinguished.  ‘I come too early,’ he then said ‘I am not yet at the right time.  This prodigious event is still on its way, and is traveling, – it has not yet reached men’s ears.’ “

Although Nietzsche will be looked at in more detail in part 8 of this post-series, it’s worth briefly mentioning what he was pointing out with this passage.  Nietzsche was highlighting the fact that at this point in our history, after the Enlightenment, we had made a number of scientific discoveries about our world and our place in it, and these had made the concept of God somewhat superfluous.  As a result of the drastic rise in secularization, the proportion of people who didn’t believe in God rose substantially; and because Christianity no longer had the theistic foundation it relied upon, all of the moral systems, traditions, and forms of ultimate meaning derived from Christianity had to be re-established, if not abandoned altogether.

But many people, including a number of atheists, didn’t fully appreciate this fact and simply took for granted many of the cultural constructs that arose from Christianity.  Nietzsche suspected that most people weren’t intellectually fit for the task of re-grounding their values and finding meaning in a Godless world, and so he feared that nihilism would begin to dominate Western civilization.  Kierkegaard had similar fears, but as a man who refused to shake his own belief in God and in Christianity, he felt it was his imperative to try to revive the Christian faith, which he saw in severe decline.

1.  The Man Himself

“The ultimate source of Kierkegaard’s power over us today lies neither in his own intelligence nor in his battle against the imperialism of intelligence-to use the formula with which we began-but in the religious and human passion of the man himself, from which the intelligence takes fire and acquires all its meaning.”

Aside from the fact that Kierkegaard’s own intelligence was primarily shaped and directed by his passion for love and for God, I tend to believe that intelligence is in some sense always subservient to the aims of one’s desires.  And if desires themselves are derivative of, or at least intimately connected to, feeling and passion, then a person’s intelligence is going to be heavily guided by subjectivity; not only in terms of the basic drives that attempt to lead us to a feeling of homeostasis but also in terms of the psychological contentment resulting from that which gives our lives meaning and purpose.

For Kierkegaard, the only way to truly discover the meaning of one’s own life, or to truly know oneself, is to endure the painful burden of choice, which eventually eliminates a number of possibilities and creates a number of actualities.  One must make certain significant choices in one’s life such that, once those choices are made, they cannot be unmade; and every time this is done, one is increasingly committing oneself to being (or to becoming) a very specific self.  And Kierkegaard thought that renewing one’s choices daily, in the sense of freely maintaining a commitment to them for the rest of one’s life (rather than simply forgetting that those choices were made and moving on to the next one), was the only way to give those choices any meaning, let alone any prolonged meaning.  Barrett mentions the relation of choice, possibility, and reality as it was experienced by Kierkegaard:

“The man who has chosen irrevocably, whose choice has once and for all sundered him from a certain possibility for himself and his life, is thereby thrown back on the reality of that self in all its mortality and finitude.  He is no longer a spectator of himself as a mere possibility; he is that self in its reality.”

As can be seen with Kierkegaard’s own life, some of these choices (such as breaking off his engagement with Regine Olsen) can be hard to live with, but the pain and suffering experienced in our lives is still ours; it is still a part of who we are as an individual and further affirms the reality of our choices, often adding an inner depth to our lives that we may otherwise never attain.

“The cosmic rationalism of Hegel would have told him his loss was not a real loss but only the appearance of loss, but this would have been an abominable insult to his suffering.”

I have to agree with Kierkegaard here that the felt experience one has is as real as anything ever could be.  To say otherwise, that is, to negate the reality of this or that felt experience is to deny the reality of any felt experience whatsoever; for they all precipitate from the same subjective currency of our own individual consciousness.  We can certainly distinguish between subjective reality and objective reality, for example, by evaluating which aspects of our experience can and cannot be verified through a third-party or through some kind of external instrumentation and measurement.  But this distinction only helps us in terms of fine-tuning our ability to make successful predictions about our world by better understanding its causal structure; it does not help us determine what is real and what is not real in the broadest sense of the term.  Reality is quite simply what we experience; nothing more and nothing less.

2. Socrates and Hegel; Existence and Reason

Barrett gives us an apt comparison between Kierkegaard and Socrates:

“As the ancient Socrates played the gadfly for his fellow Athenians stinging them into awareness of their own ignorance, so Kierkegaard would find his task, he told himself, in raising difficulties for the easy conscience of an age that was smug in the conviction of its own material progress and intellectual enlightenment.”

And both philosophers certainly made a lasting impact with their use of the Socratic method; Socrates having first practiced and popularized this method of teaching and argumentation, and Kierkegaard making heavy use of it in his first major published work, Either/Or, and in other works.

“He could teach only by example, and what Kierkegaard learned from the example of Socrates became fundamental for his own thinking: namely, that existence and a theory about existence are not one and the same, any more than a printed menu is as effective a form of nourishment as an actual meal.  More than that: the possession of a theory about existence may intoxicate the possessor to such a degree that he forgets the need of existence altogether.”

And this problem, of not being able to see the forest for the trees, plagues many people who become distracted by the details uncovered while trying to better understand the world.  But living a life of contentment and deeply understanding life in general are two very different things, and you can’t invest more in one without diverting time and effort from the other.  Having said that, there’s still some degree of overlap between these two goals; contentment isn’t likely to be maximally realized without some degree of in-depth understanding of the life you’re living and the experiences you’ve had.  As Socrates famously put it: “the unexamined life is not worth living.”  You just don’t want to sacrifice too many of the experiences themselves merely to formulate a theory about them.

How do reason, rationality, and thought relate to theories that address what is actually real?  Well, the belief in a rational cosmos, such as that held by Hegel, can severely restrict one’s ontology when it comes to the concept of existence itself:

“When Hegel says, “The Real is rational, and the rational is real,” we might at first think that only a German idealist with his head in the clouds, forgetful of our earthly existence, could so far forget all the discords, gaps, and imperfections in our ordinary experience.  But the belief in a completely rational cosmos lies behind the Western philosophic tradition; at the very dawn of this tradition Parmenides stated it in his famous verse, “It is the same thing that can be thought and that can be.”  What cannot be thought, Parmenides held, cannot be real.  If existence cannot be thought, but only lived, then reason has no other recourse than to leave existence out of its picture of reality.”

I think what Parmenides said would have been more accurate (if not correct) had he rephrased it just a little differently, replacing “thought” with “consciously experienced”: “It is the same thing that can be consciously experienced and that can be.”  Rephrasing it as such allows for a much more inclusive conception of what is considered real, since anything that is within the possibility of conscious experience is given an equal claim to being real in some way or another; which means that all the irrational thoughts or feelings, and the gaps and lack of coherency in some of our experiences, are all taken into account as a part of reality.  What else could we mean by saying that existence must be lived, other than the fact that existence must be consciously experienced?  If we mean to include the actions we take in the world, and thus the bodily behavior we enact, this is still included in our conscious experience and in the experiences of other conscious agents.  So once the entire experience of every conscious agent is taken into account, what else could there possibly be, aside from this, that we can truly say is necessary in order for existence to be lived?

It may be that existence and conscious experience are not identical, even if they’re always coincident with one another; but this is in part contingent on whether or not all matter and energy in the universe is conscious.  If consciousness is actually universal, then perhaps existence and some kind of experientiality or other (whether primitive or highly complex) are in fact identical with one another.  And if not, then it is still the existence of those entities which are conscious that should dominate our consideration, since that is the only kind of existence that can actually be lived at all.

If we reduce our window of consideration from all conscious experience down to merely the subset of reason or rationality within that experience, then of course there’s going to be a problem with structuring theories about the world within such a limitation; for anything that doesn’t fit within its purview simply disappears:

“As the French scientist and philosopher Emile Meyerson says, reason has only one means of accounting for what does not come from itself, and that is to reduce it to nothingness…The process is still going on today, in somewhat more subtle fashion, under the names of science and Positivism, and without invoking the blessing of Hegel at all.”

Although, in order to be charitable to science, we ought to consider the upsurge of interest and development in the scientific fields of psychology and cognitive science in the decades following the writing of Barrett’s book; for consciousness and the many aspects of our psyche and our experience have taken on a much more important role in terms of what we value in science and the kinds of phenomena we’re trying so hard to understand better.  If psychology and the cognitive and neurosciences encompass the entire goings-on of our brain and overall subjective experience, and if we are trying to account for all the information processed therein, then science should no longer be considered cut off from, or exclusive of, that which lies outside of reason, rationality, or logic.

We mustn’t confuse or conflate reason with science even if science includes the use of reason, for science can investigate the unreasonable; it can provide us with ways of making more and more successful predictions about the causal structure of our experience even as they relate to emotions, intuition, and altered or transcendent states of consciousness like those stemming from meditation, religious experiences or psycho-pharmacological substances.  And scientific fields like moral and positive psychology are also better informing us of what kinds of lifestyles, behaviors, character traits and virtues lead to maximal flourishing and to the most fulfilling lives.  So one could even say that science has been serving as a kind of bridge between reason and existence; between rationality and the other aspects of our psyche that make life worth living.

Going back to the relation between reason and existence, Barrett mentions the conceptual closure of the former to the latter as argued by Kant:

“Kant declared, in effect, that existence can never be conceived by reason-though the conclusions he drew from this fact were very different from Kierkegaard’s.  “Being”, says Kant, “is evidently not a real predicate, or concept of something that can be added to the concept of a thing.”  That is, if I think of a thing, and then think of that thing as existing, my second concept does not add any determinate characteristic to the first…So far as thinking is concerned, there is no definite note or characteristic by which, in a concept, I can represent existence as such.”

And yet, we somehow manage to distinguish between the world of imaginary objects and that of non-imaginary objects (at least most of the time); and we do this using the same physical means of perception in our brain.  I think it’s true to say that both imaginary and non-imaginary objects exist in some sense; for both exist physically as representations in our brains such that we can know them at all, even if they differ in terms of whether the representations are likely to be shared by others (i.e. how they map onto what we might call our external reality).

If I conceive of an apple sitting on a table in front of me, and then I conceive of an apple sitting on a table in front of me that you would also agree is in fact sitting on the table in front of me, then I’ve distinguished conceptually between an imaginary apple and one that exists in our external reality.  And since I can’t ever be certain whether or not I’m hallucinating that there’s an actual apple on the table in front of me (or any other aspect of my experienced existence), I must accept that the common thread of existence, in terms of what it really means for something to exist or not, is entirely grounded on its relation to (my own) conscious experience.  It is entirely grounded on our individual perception; on the way our own brains make predictions about the causes of our sensory input and so forth.

“If existence cannot be represented in a concept, he says (Kierkegaard), it is not because it is too general, remote, and tenuous a thing to be conceived of but rather because it is too dense, concrete, and rich.  I am; and this fact that I exist is so compelling and enveloping a reality that it cannot be reproduced thinly in any of my mental concepts, though it is clearly the life-and-death fact without which all my concepts would be void.”

I actually think Kierkegaard was closer to the mark than Kant was, for he claimed that it is not so much that reason reduces existence to nothingness, but rather that existence is so tangible, rich, and complex that reason can’t fully encompass it.  This makes sense insofar as reason operates through reduction and abstraction, dissecting a holistic experience into parts that relate to one another in a certain way in order to make sense of that experience.  If the holistic experience is needed to fully appreciate existence, then reason alone isn’t going to be up to the task.  But reason also seems to unify our experiences, and if this unification presupposes existence in order to make sense of that experience, then we can’t fully appreciate existence without reason either.

3. Aesthetic, Ethical, Religious

Kierkegaard lays out three primary stages of living in his philosophy: namely, the aesthetic, the ethical, and the religious.  While there are different ways to interpret this “stage theory”, the most common interpretation treats these stages like a set of concentric circles or spheres where the aesthetic is in the very center, the ethical contains the aesthetic, and the religious subsumes both the ethical and the aesthetic.  Thus, for Kierkegaard, the religious is the most important stage that, in effect, supersedes the others; even though the religious doesn’t eliminate the other spheres, since a religious person is still capable of being ethical or having aesthetic enjoyment, and since an ethical person is still capable of aesthetic enjoyment, etc.

In a previous post where I analyzed Kierkegaard’s Fear and Trembling, I summarized these three stages as follows:

“…The aesthetic life is that of sensuous or felt experience, infinite potentiality through imagination, hiddenness or privacy, and an overarching egotism focused on the individual.  The ethical life supersedes or transcends this aesthetic way of life by relating one to “the universal”, that is, to the common good of all people, to social contracts, and to the betterment of others over oneself.  The ethical life, according to Kierkegaard, also consists of public disclosure or transparency.  Finally, the religious life supersedes the ethical (and thus also supersedes the aesthetic) but shares some characteristics of both the aesthetic and the ethical.

The religious, like the aesthetic, operates on the level of the individual, but with the added component of the individual having a direct relation to God.  And just like the ethical, the religious appeals to a conception of good and evil behavior, but God is the arbiter in this way of life rather than human beings or their nature.  Thus the sphere of ethics that Abraham might normally commit himself to in other cases is thought to be superseded by the religious sphere, the sphere of faith.  Within this sphere of faith, Abraham assumes that anything that God commands is Abraham’s absolute duty to uphold, and he also has faith that this will lead to the best ends…”

The aesthete generally tends to live life in search of pleasure, always fleeing from boredom in an ever-increasing fit of desperation to find new moments of pleasure despite the futility of such an unsustainable goal.  But Kierkegaard also claims that the aesthete includes the intellectual who tries to stand outside of life, detached from it and viewing it only as a spectator rather than a participant, categorizing each experience as either interesting or boring, and nothing more.  And it is this speculative detachment from life, long an underpinning of Western thought, that Kierkegaard objected to; an objection that would be maintained within the rest of existential philosophy that was soon to come.

If the aesthetic is unsustainable, or if someone (such as Kierkegaard) has given up such a life of pleasure, then one is left with the other spheres of life.  For Kierkegaard, the ethical life, at least on its own, seemed insufficient for making up what was lost in the aesthetic:

“For a really passionate temperament that has renounced the life of pleasure, the consolations of the ethical are a warmed-over substitute at best.  Why burden ourselves with conscience and responsibility when we are going to die, and that will be the end of it?  Kierkegaard would have approved of the feeling behind Nietzsche’s saying, ‘God is dead, everything is permitted.’ ” 

One conclusion we might arrive at, after considering Kierkegaard’s attitude toward the ethical life, is that he may have made a mistake when he renounced the life of pleasure entirely.  Another conclusion we might draw is that Kierkegaard’s conception of the ethical is incomplete or misguided.  If he honestly asks, in the hypothetical absence of God or any option for a religious life, why we ought to burden ourselves with conscience and responsibility, this seems to betray a fundamental flaw in his moral and ethical reasoning.  Likewise for Nietzsche and Dostoyevsky, who are associated with similar sentiments; the line “If God does not exist, then everything is permissible” is in fact a paraphrase of Ivan Karamazov in Dostoyevsky’s The Brothers Karamazov, though it is often attributed to Nietzsche as well.

As I’ve argued elsewhere (here, and here), the best form any moral theory can take (such that it’s also sufficiently motivating to follow) is going to be centered around the individual, and it will be grounded on the hypothetical imperative that maximizes one’s personal satisfaction and life fulfillment.  If some behaviors serve toward best achieving this goal and other behaviors detract from it, as a result of our human psychology and sociology, and thus as a result of the finite range of conditions that we thrive within as human beings, then regardless of whether a God exists or not, everything is most certainly not permitted.  Nietzsche was right, however, when he claimed that the “death of God” (so to speak) would require a means of re-establishing a ground for our morals and values; but this doesn’t mean that all possible grounds for doing so have an equal claim to being true, nor that they will all be equally efficacious in achieving one’s primary moral objective.

Part of the problem with Kierkegaard’s moral theorizing is his adoption of Kant’s universal maxim for ethical duty: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”  The problem is that this maxim or “categorical imperative” (the way it’s generally interpreted at least) doesn’t take individual psychological and circumstantial idiosyncrasies into account.  One can certainly insert these idiosyncrasies into Kant’s formulation, but that’s not how Kant intended it to be used nor how Kierkegaard interpreted it.  And yet, Kierkegaard seems to smuggle in such exceptions anyway, by having incorporated it into his conception of the religious way of life:

“An ethical rule, he says, expresses itself as a universal: all men under such-and-such circumstances ought to do such and such.  But the religious personality may be called upon to do something that goes against the universal norm.”

And if something goes against a universal norm, and one feels that they ought to do it anyway (above all else), then they are implicitly denying that a complete theory of ethics involves exclusive universality (a categorical imperative); rather, it must take certain individual exceptions into account.  Kierkegaard seems to prioritize the individual throughout his philosophy, and yet he didn’t think that a theory of ethics could plausibly account for such prioritization.

“The validity of this break with the ethical is guaranteed, if it ever is, by only one principle, which is central to Kierkegaard’s existential philosophy as well as to his Christian faith-the principle, namely, that the individual is higher than the universal. (This means also that the individual is always of higher value than the collective).”

I completely agree with Kierkegaard that the individual is always of higher value than the collective; and this is illustrated by the failure of any utilitarian moral theory that neglects the psychological states of the individual actors and moral agents carrying out some plan to maximize happiness or utility for the greatest number of people.  If the imperative comes down to my having to do something such that I can no longer live with myself afterward, then the moral theory has failed miserably.  Instead, I should feel that I did the right thing in any moral dilemma (when thinking clearly and while maximally informed of the facts), even when every available option is less than optimal.  We have to be able to live with our own actions, and ultimately with the kind of person that those actions have made us become.  The concrete self that we alone have conscious access to takes priority over any abstract conception of other selves, or any kind of universality that references only a part of our being.

“Where then as an abstract rule it commands something that goes against my deepest self (but it has to be my deepest self, and herein the fear and trembling of the choice reside), then I feel compelled out of conscience-a religious conscience superior to the ethical-to transcend that rule.  I am compelled to make an exception because I myself am an exception; that is, a concrete being whose existence can never be completely subsumed under any universal or even system of universals.”

Although I agree with Kierkegaard here for the most part, in terms of giving the utmost importance to the individual self, I don’t think that a religious conscience is something one can distinguish from one’s conscience generally; rather, it would just be one’s conscience, albeit altered by a set of religious beliefs.  Those beliefs may change how one’s conscience operates, but they don’t change the fact that it is still one’s conscience nevertheless.

For example, suppose two people differ with respect to their belief in souls: one holds the religious belief that a fertilized egg has a soul, while the other believes that only beings with a capacity for consciousness or a personality have souls (or that there are no souls at all).  That difference in belief may affect their consciences differently if both parties were to, say, donate a fertilized egg to a group of stem-cell researchers.  The former may feel guilty afterward (knowing that the egg they believe to be inhabited by a soul may be destroyed), whereas the latter may feel genuinely good about themselves for aiding important life-saving medical research.  This is why one can only make a proper moral assessment when one takes seriously the distinction between what is truly factual about the world (what is supported by evidence) and what is merely a cherished religious or otherwise supernatural belief.  If one’s conscience is primarily operating on well-evidenced facts about the world (along with one’s intuition), then one’s conscience and what some may call a religious conscience should refer to the exact same thing.

Despite the fact that viable moral theories should be centered around the individual rather than the collective, this doesn’t mean that we shouldn’t try and come up with sets of universal moral rules that ought to be followed most of the time.  For example, a rule like “don’t steal from others” is a good rule to follow most of the time, and if everyone in a society strives to obey such a rule, that society will be better off overall as a result; but if my child is starving and nobody is willing to help in any way, then my only option may be to steal a loaf of bread in order to prevent the greater moral crime of letting a child starve.

This example should highlight a fundamental oversight regarding Kant’s categorical imperative as well: you can build in any number of exceptions within a universal law to make it account for personal circumstances and even psychological idiosyncrasies, thus eliminating the kind of universality that Kant sought to maintain.  For example, if I willed it to be a universal law that “you shouldn’t take your own life,” I could make this better by adding an exception to it so that the law becomes “you shouldn’t take your own life unless it is to save the life of another,” or even more individually tailored to “you shouldn’t take your own life unless not doing so will cause you to suffer immensely or inhibit your overall life satisfaction and life fulfillment.”  If someone has a unique life history or psychological predisposition whereby taking their own life is the only option available to them lest they suffer needlessly, then they ought to take their own life regardless of whatever categorical imperative they are striving to uphold.
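To make the structural point explicit, here is a rough formal gloss of my own (not anything Kant or Barrett write): a maxim willed as universal law has the shape of a universally quantified rule, and each exception we build in simply conjoins another condition onto its antecedent.  Taking the suicide example above, and using hypothetical exception predicates E₁, E₂, …, Eₙ:

\[
\forall p\,\big(\mathrm{Person}(p) \rightarrow \neg\,\mathrm{TakeOwnLife}(p)\big)
\;\;\leadsto\;\;
\forall p\,\big(\mathrm{Person}(p) \wedge \neg E_{1}(p) \wedge \neg E_{2}(p) \wedge \cdots \wedge \neg E_{n}(p) \rightarrow \neg\,\mathrm{TakeOwnLife}(p)\big)
\]

Each exception predicate (saving another’s life, preventing immense suffering, and so on) leaves the universal quantifier intact, so the rule remains “universal” in form; but as the number of exceptions grows to track individual circumstances and psychological idiosyncrasies, the rule’s content converges on a description of one particular person, and the kind of universality Kant sought to maintain evaporates.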

There is, however, still an important reason for adopting Kant’s universal maxim (at least generally speaking): universal laws like the Golden Rule provide people with an easy heuristic to quickly ascertain the likely moral status of any particular action or behavior.  If we try to tailor in all sorts of idiosyncratic exceptions (with respect to ourselves as well as others), the rules become much more complicated and harder to remember; instead, one should use the universal rules most of the time, and only when one sees a legitimate reason to question a rule should one consider whether an exception ought to be made.

Another important point regarding ethical behavior or moral dilemmas is the factor of uncertainty in our knowledge of a situation:

“But even the most ordinary people are required from time to time to make decisions crucial for their own lives, and in such crises they know something of the “suspension of the ethical” of which Kierkegaard writes.  For the choice in such human situations is almost never between a good and an evil, where both are plainly marked as such and the choice therefore made in all the certitude of reason; rather it is between rival goods, where one is bound to do some evil either way, and where the ultimate outcome and even-most of all-our own motives are unclear to us.  The terror of confronting oneself in such a situation is so great that most people panic and try to take cover under any universal rule that will apply, if only it will save them from the task of choosing themselves.”

And rather than making these tough decisions themselves, a lot of people would prefer for others to tell them what counts as right and wrong behavior, getting these answers from a religion, from one’s parents, from a community, and so on.  But this negates the intimate consideration of the individual, where each of these difficult choices leads them down a particular path in life and shapes who they become as a person.  The same fear drives a lot of people away from critical thinking: many would prefer to be told what’s true and false rather than think these things through for themselves, and so they gravitate toward institutions that claim to “have all the answers” (even if those who fear critical thinking wouldn’t explicitly admit this, since it is primarily a manifestation of the unconscious).  Kierkegaard highly valued these difficult moments of choice and thought they were fundamental to being a true self living an authentic life.

But despite the fact that universal ethical rules are convenient and fairly effective in most cases, they are still far from perfect; one will eventually find oneself in a situation where it simply isn’t clear which rule to use or what to do, and one will just have to make the decision one thinks will be easiest to live with:

“Life seems to have intended it this way, for no moral blueprint has ever been drawn up that covers all the situations for us beforehand so that we can be absolutely certain under which rule the situation comes.  Such is the concreteness of existence that a situation may come under several rules at once, forcing us to choose outside any rule, and from inside ourselves…Most people, of course, do not want to recognize that in certain crises they are being brought face to face with the religious center of their existence.”

Now I wouldn’t call this the religious center of one’s existence, but rather the moral center of one’s existence; the point is simply that we’re distinguishing between universal moral prescriptions (which Kierkegaard labels “the ethical”) and those that are non-universal or dynamic (which Kierkegaard labels “the religious”).  In any case, one can call this whatever one wishes, as long as one understands the distinction being made here, which is an important one.  And along with acknowledging this distinction between universal and individual moral consideration, it’s also important to engage with the world in a way where our individual emotional “palette” is faced head-on rather than denied or suppressed by society or its universal conventions.

Barrett mentions how the denial of our true emotions (brought about by modernity) has inhibited our connection to the transcendent, or at least, inhibited our appreciation or respect for the causal forces that we find to be greater than ourselves:

“Modern man is farther from the truth of his own emotions than the primitive.  When we banish the shudder of fear, the rising of the hair of the flesh in dread, or the shiver of awe, we shall have lost the emotion of the holy altogether.”

But beyond acknowledging our emotions, Kierkegaard has something to say about how we choose to cope with them, most especially that of anxiety and despair:

“We are all in despair, consciously or unconsciously, according to Kierkegaard, and every means we have of coping with this despair, short of religion, is either unsuccessful or demoniacal.”

And here is another place where I have to part ways with Kierkegaard, despite his brilliance in examining the human condition and the many complicated aspects of our psychology; for relying on religion (or more specifically, relying on religious belief) to cope with despair is but another distraction from the truth of our own existence.  It is an inauthentic way of living, since one is avoiding the way the world really is.  I think it’s far more effective and authentic for people to work on changing their attitude toward life and the circumstances they find themselves in, without sacrificing a reliable epistemology in the process.  We need to provide ourselves with avenues for emotional expression, work to increase our mindfulness and positivity, and constantly strive to become better versions of ourselves by finding things we’re passionate about and by living a philosophically examined life.  This doesn’t mean that we have to rid ourselves of the rituals, fellowship, and meditative benefits that religion offers; but rather that we should merely dispense with the supernatural component and the dogma and irrationality that are typically attached to and promoted by religion.

4. Subjective and Objective Truth

Kierkegaard ties his conception of the religious life to the meaning of truth itself, and he distinguishes this mode of living from the mere holding of religious beliefs.  While one can assimilate a number of religious beliefs just as one can with non-religious beliefs, for Kierkegaard, religion itself is something else entirely:

“If the religious level of existence is understood as a stage upon life’s way, then quite clearly the truth that religion is concerned with is not at all the same as the objective truth of a creed or belief.  Religion is not a system of intellectual propositions to which the believer assents because he knows it to be true, as a system of geometry is true; existentially, for the individual himself, religion means in the end simply to be religious.  In order to make clear what it means to be religious, Kierkegaard has to reopen the whole question of the meaning of truth.”

And his distinction between objective and subjective truth is paramount to understanding this difference.  One could say perhaps that by subjective truth he is referring to a truth that must be embodied and have an intimate relation to the individual:

“But the truth of religion is not at all like (objective truth): it is a truth that must penetrate my own personal existence, or it is nothing; and I must struggle to renew it in my life every day…Strictly speaking, subjective truth is not a truth that I have, but a truth that I am.”

The struggle for renewal goes back to Kierkegaard’s conception of how the meaning a person finds in their life is in part dependent on some kind of personal commitment; it relies on making certain choices that one remakes day after day, keeping these personal choices in focus so as to not lose sight of the path we’re carving out for ourselves.  And so it seems that he views subjective truth as intimately connected to the meaning we give our lives, and to the kind of person that we are now.

Perhaps another way we can look at this conception, especially as it differs from objective truth, is to examine the relation between language, logic, and conscious reasoning on the one hand, and intuition, emotion, and the unconscious mind on the other.  Objective truth is generally communicable, it makes explicit predictions about the causal structure of reality, and it involves a way of unifying our experiences into some coherent ensemble; but subjective truth involves felt experience, emotional attachment, and a more automated sense of familiarity and relation between ourselves and the world.  And both of these facets are important for our ability to navigate the world effectively while also feeling that we’re psychologically whole or complete.

5. The Attack Upon Christendom

Kierkegaard points out an important shift in modern society, one that Barrett mentioned early on in this book; namely, the move toward mass society, which has effectively eaten away at our individuality:

“The chief movement of modernity, Kierkegaard holds, is a drift toward mass society, which means the death of the individual as life becomes ever more collectivized and externalized.  The social thinking of the present age is determined, he says, by what might be called the Law of Large Numbers: it does not matter what quality each individual has, so long as we have enough individuals to add up to a large number-that is, to a crowd or mass.”

And false metrics of success, like economic growth and population growth, have definitely detracted from the quality of life each of us is capable of achieving.  Because of our inclinations as a social species, we are (perhaps unconsciously) drawn toward the potential survival benefits of joining progressively larger groups.  In terms of industrialization, we’ve used technology primarily to support more people on the globe and to increase the output of each individual worker (to benefit the wealthiest), rather than to substantially reduce the number of hours worked per week or to eliminate poverty outright.  This has got to be the biggest failure of the industrial revolution and of capitalism (when not regulated properly), and one that’s so often taken for granted.

Because greed has consumed the moral compass of those at the top of our sociopolitical hierarchy, our lives have been funneled into a military-industrial complex that will only release us from servitude when the rich eventually stop demanding more and more of the fruits of our labor.  And through the push of marketing and social pressure, we’re tricked into wanting to maintain society as it is; to continue buying consumable garbage that we don’t really need and to be complacent with such a lifestyle.  This massive externalization of our psyche has led to a kind of, as Barrett put it earlier, spiritual poverty.  And Kierkegaard was well aware of this psychological degradation brought on by modernity’s unchecked collectivization.

Both Kierkegaard and Nietzsche also saw a problem with how modernity had effectively killed God; though Kierkegaard and Nietzsche differed in their attitudes toward organized religion, Christianity in particular:

“The Grand Inquisitor, the Pope of Popes, relieves men of the burden of being Christian, but at the same time leaves them the peace of believing they are Christians…Nietzsche, the passionate and religious atheist, insisted on the necessity of a religious institution, the Church, to keep the sheep in peace, thus putting himself at the opposite extreme from Kierkegaard; Dostoevski in his story of the Grand Inquisitor may be said to embrace dialectically the two extremes of Kierkegaard and Nietzsche.  The truth lies in the eternal tension between Christ and the Grand Inquisitor.  Without Christ the institution of religion is empty and evil, but without the institution as a means of mitigating it the agony in the desert of selfhood is not viable for most men.”

Modernity had helped collectivize and institutionalize religion, thus diminishing the personal, individualistic dimension of spirituality which Kierkegaard saw as indispensable; but modernity also facilitated the “death of God,” making even organized religion increasingly difficult to adhere to once the underlying theistic foundation was destroyed for many.  Nietzsche realized the benefits of organized religion since so many people are unable to think critically for themselves, unable to find an effective moral framework that isn’t grounded on religion, and unable to find meaning or stability in their lives that isn’t grounded on belief in God or in some religion or other.  In short, most people aren’t able to deal with the burdens revealed by existentialism.

Because much of European culture and so many of its institutions had been built around Christianity, the religion became more collectivized and less personal; but this also made Christianity more vulnerable to being uprooted by new ideas that drew their power from the very same collective.  Thus, it was the mass externalization of religion that made religion that much more accessible to the externalization of reason, as exemplified by scientific progress and technology.  Reason and religion could no longer co-exist as they once had, because this externalization drastically reduced our ability to compartmentalize the two.  And that made it much harder to try to reestablish a more personal form of Christianity, as Kierkegaard had been longing to do.

Reason also led us to the realization of our own finitude, and once this view was taken more seriously, it created yet another hurdle for the masses to maintain their religiosity; for once death is seen as inevitable, and immortality accepted as an impossibility, one of the most important uses for God becomes null and void:

“The question of death is thus central to the whole of religious thought, is that to which everything else in the religious striving is an accessory: ‘If there is no immortality, what use is God?’ “

The fear of death is a powerful motivator for adopting any number of beliefs that might help manage the resulting cognitive dissonance, and the desire for eternal happiness makes death that much more frightening.  But once death is truly accepted, many of the other beliefs that were meant to provide comfort or consolation are no longer necessary or meaningful.  Kierkegaard didn’t think we could cope with the despair of an inevitable death without a religion that promised some way to overcome it and to transcend our life and existence in this world, and he thought Christianity was the best religion to accomplish this goal.

It is on this point in particular, his failure to properly deal with death, that I find Kierkegaard lacking as an existentialist; for it seems that in order to live an authentic life, a person must accept death, that is, they must accept the finitude of our individual human existence.  As Heidegger would later argue, buried deep within the idea of death as the possibility of impossibility is a basis for affirming one’s own life, through the acceptance of our mortality and the uncertainty that surrounds it.

Click here for the next post in this series, part 8 on Nietzsche’s philosophy.

Irrational Man: An Analysis (Part 2, Chapter 4: “The Sources of Existentialism in the Western Tradition”)

In the previous post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 3: The Testimony of Modern Art, where Barrett illustrates how existentialist thought is best exemplified in modern art.  Feelings of alienation, discontent, and meaninglessness pervade a number of modern works, and so, as is so often the case throughout history, we can use artistic expression as a window into our evolving psyche, to witness the prevailing views of the world at any point in time.

In this post, I’m going to explore Part II, Chapter 4: The Sources of Existentialism in the Western Tradition.  This chapter has a lot of content and is quite dense, and so naturally this fact is reflected in the length of this post.

Part II: “The Sources of Existentialism in the Western Tradition”

Ch. 4 – Hebraism and Hellenism

Barrett begins this chapter by pointing out two forces governing the historical trajectory of Western civilization and Western thought: Hebraism and Hellenism.  He quotes an excerpt from Matthew Arnold’s Culture and Anarchy, a book concerned with the contemporary situation in nineteenth-century England, where Arnold writes:

“We may regard this energy driving at practice, this paramount sense of the obligation of duty, self-control, and work, this earnestness in going manfully with the best light we have, as one force.  And we may regard the intelligence driving at those ideas which are, after all, the basis of right practice, the ardent sense for all the new and changing combinations of them which man’s development brings with it, the indomitable impulse to know and adjust them perfectly, as another force.  And these two forces we may regard as in some sense rivals–rivals not by the necessity of their own nature, but as exhibited in man and his history–and rivals dividing the empire of the world between them.  And to give these forces names from the two races of men who have supplied the most splendid manifestations of them, we may call them respectively the forces of Hebraism and Hellenism…and it ought to be, though it never is, evenly and happily balanced between them.”

And while we may have felt a stronger attraction to one force over the other at different points in our history, both forces have played an important role in how we’ve structured our individual lives and society at large.  What distinguishes these two forces ultimately comes down to the difference between doing and knowing; between the Hebrew’s concern for practice and right conduct, and the Greek’s concern for knowledge and right thinking.  I can’t help but notice this Hebraistic influence in one of the earliest expressions of existentialist thought when Soren Kierkegaard (in one of his earlier journals) had said: “What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act”.  Here we see that some set of moral virtues are what form the fundamental substance and meaning of life within Hebraism (and by extension, Kierkegaard’s philosophy), in contrast with the Hellenistic subordination of the moral virtues to those of the intellectual variety.

I for one am tempted to mention Aristotle here: a Greek philosopher who, despite all of his work on science, logic, and epistemology, formulated and extolled moral virtues, laying the foundation for most if not all modern systems of virtue ethics.  And sure enough, Arnold does mention Aristotle briefly, but he says that for Aristotle the moral virtues are “but the porch and access to the intellectual (virtues), and with these last is blessedness.”  So it’s still fair to say that the intellectual virtues were given priority over the moral virtues within Greek thought, even if the moral virtues were an important element, serving as a moral means to a combined moral-intellectual end.  We’re still left, then, with a distinction of what is prioritized, or what the ultimate teleology is, for Hebraism and Hellenism: moral man versus intellectual man.

One perception of Arnold’s that colors his overall thesis is a form of uneasiness that he sees as lurking within the Hebrew conception of man; an uneasiness stemming from a conception of man that has been infused with the idea of sin, which is simply not found in Greek philosophy.  Furthermore, this idea of sin that pervades the Hebraistic view of man is not limited to one’s moral or immoral actions; rather it penetrates into the core being of man.  As Barrett puts it:

“But the sinfulness that man experiences in the Bible…cannot be confined to a supposed compartment of the individual’s being that has to do with his moral acts.  This sinfulness pervades the whole being of man: it is indeed man’s being, insofar as in his feebleness and finiteness as a creature he stands naked in the presence of God.”

So we have a predominantly moral conception of man within Hebraism, but one that is amalgamated with an essential finitude, an acknowledgement of imperfection, and the expectation of our being morally flawed human beings.  When we compare this to the philosophers of Ancient Greece, who had a more idealistic conception of man, in which humans were believed to have the capacity to access and understand the universe in its entirety, we can see the groundwork that was laid for somewhat rivalrous but nevertheless important motivations and contributions to the cultural evolution of Western civilization: science and philosophy from the Greeks, and a conception of “the Law” entrenched in religion and religious practice from the Hebrews.

1. The Hebraic Man of Faith

Barrett begins here by explaining that the Law, though important for having bound the Jewish community together for centuries despite their many tribulations as a people, is not itself central to Hebraism; rather, it is the basis of the Law that lies at its center.  To see what this basis is, we are directed to reread the Book of Job in the Hebrew Bible:

“…reread it in a way that takes us beyond Arnold and into our own time, reread it with an historical sense of the primitive or primary mode of existence of the people who gave expression to this work.  For earlier man, the outcome of the Book of Job was not such a foregone conclusion as it is for us later readers, for whom centuries of familiarity and forgetfulness have dulled the violence of the confrontation between man and God that is central to the narrative.”

Rather than simply taking the commandments of his religion for granted and following them without pause, Job confronts his Creator face-to-face and demands justification; in some sense, this was the first time the door had been opened to theological critique and reflection.  The Greeks did something similar when they eventually began to apply reason and rationality to examine religion, stepping outside the confines of blindly following religious traditions and rituals.  But unlike the Greek, the Hebrew does not press this demand for justification through the use of reason, but rather through a direct encounter of the person as a whole (Job, with his violence, passion, and all the rest) with an unknowable and awe-inspiring God.  Job doesn’t solve his problem with any rational resolution, but rather by changing his entire character.  His relation to God involves a mutual confrontation of two beings in their entirety; not just a rational facet of each being looking for a reasonable explanation from one another, but each complete being facing the other, prepared to defend its entire character and identity.

Barrett mentions the Jewish philosopher Martin Buber here to help clarify things a little, by noting that this relation between Job and God is a relation between an I and a Thou (or a You).  Since Barrett doesn’t explain Buber’s work in much detail, I’ll briefly digress here to explain a few key points.  For those unfamiliar with Buber’s work, the I-Thou relation is a mode of living that is contrasted with another mode centered on the connection between an I and an It.  Both modes of living, the I-Thou and the I-It, are, according to Buber, the two ways that human beings can address existence; the two modes of living required for a complete and fulfilled human being.  The I-It mode encompasses the world of experience and sensation; treating entities as discrete objects to know about or to serve some use.  The I-Thou mode on the other hand encompasses the world of relations itself, where the entities involved are not separated by some discrete boundary, and where a living relationship is acknowledged to exist between the two; this mode of existence requires one to be an active participant rather than merely an objective observer.

It is Buber’s contention that modern life has entirely ignored the I-Thou relation, which has led to a feeling of unfulfillment and alienation from the world around us.   The first mode of existence, that of the I-It, involves our acquiring data from the world, analyzing and categorizing it, and then theorizing about it; and this leads to a subject-object separation, or an objectification of what is being experienced.  According to Buber, modern society almost exclusively uses this mode to engage with the world.  In contrast, with the I-Thou relation the I encounters the object or entity such that both are transformed by the relation; and there is a type of holism at play here where the You or Thou is not simply encountered as a sum of its parts but in its entirety.  To put it another way, it’s as if the You encountered were the entire universe, or that somehow the universe existed through this You.

Since this concept is a little nebulous, I think the best way to summarize Buber’s main philosophical point here is to say that we human beings find meaning in our lives through our relationships, and so we need to find ways of engaging the world such as to maximize our relationships with it; and this is not limited to forming relationships with fellow human beings, but with other animals, inanimate objects, etc., even if these relationships differ from one another in any number of ways.

I actually find some relevance between Buber’s “I and Thou” conception and Nietzsche’s idea of eternal recurrence: the idea that, given an infinite amount of time and a finite number of ways that matter and energy can be arranged, anything that has ever happened or ever will happen will recur an infinite number of times.  In The Gay Science, Nietzsche mentions how the concept of eternal recurrence was, to him, horrifying and paralyzing.  But he also concluded that the desire for an eternal return or recurrence would show the ultimate affirmation of one’s life:

“What, if some day or night a demon were to steal after you into your loneliest loneliness and say to you: ‘This life as you now live it and have lived it, you will have to live once more and innumerable times more’ … Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus?  Or have you once experienced a tremendous moment when you would have answered him: ‘You are a god and never have I heard anything more divine.’ ”
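As a brief aside, the combinatorial claim driving this thought experiment can be given a rough formal gloss; this is my own sketch of the standard recurrence argument, not anything Nietzsche or Barrett spell out.  Suppose the universe can occupy only finitely many distinct global states, collected in a set S, and let a function u assign one state to each moment of an infinite, discrete succession of times:

\[
u : \mathbb{N} \to S,\;\; |S| = N < \infty
\;\;\Longrightarrow\;\;
\exists\, s^{*} \in S \;:\; \bigl|\{\, t \in \mathbb{N} : u(t) = s^{*} \,\}\bigr| = \infty
\]

By the pigeonhole principle, at least one state must be visited infinitely often; and if the dynamics are deterministic (the next state is a fixed function of the current one), then the entire history following each visit to that state is identical, so everything after the first visit recurs endlessly.  A more careful version of this idea, for continuous physical systems, is the Poincaré recurrence theorem; whether its assumptions actually hold for our universe is, of course, another matter entirely.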

The reason I find this relevant to Buber’s conception is two-fold: first, the fact that the universe is causally connected makes it inseparable to some degree, such that each object or entity can be seen, in some sense, to represent the rest of the whole; and second, if one wishes for the eternal recurrence, then one has a non-resistant attitude toward the world, one that can learn to accept the things that are out of one’s control.  The universe effectively takes on the character of fate itself, and offers an opportunity to beings such as ourselves to have a kind of “faith in fate”; to have a relation of trust with the universe as it is, as it once was, and as it will be in the future.

Now the kind of faith I’m speaking of here isn’t a brand that contradicts reason or evidence, but rather is simply a form of optimism and acceptance that colors one’s expectations and overall experience.  And since we are a part of this universe too, our attitude towards it should in many ways reflect our overall relation to it; which brings us back to Buber, where any encounter I might have with the universe (or any “part” of it) is an encounter with a You, that is to say, it is an I-Thou relation.

This “faith in fate” concept I just alluded to is a good segue back to the relation between Job and his God within Hebraism, as it was a relation of never-ending faith.  But importantly, as Barrett points out, Job’s faith takes on many shapes, including anger, dismay, revolt, and confusion.  For Job says, “Though he slay me, yet will I trust in him…but I will maintain my own ways before him.”  So Job’s faith consists in maintaining a form of trust in his Creator, even as he retains his own identity, his dispositions, and his entire being while doing so.  And this trust ultimately forms the relation between the two.  Barrett describes the kind of faith at play here as more fundamental and primary than that which many modern-day religious proponents would lay claim to.

“Faith is trust before it is belief-belief in the articles, creeds, and tenets of a Church with which later religious history obscures this primary meaning of the word.  As trust, in the sense of the opening up of one being toward another, faith does not involve any philosophical problem about its position relative to faith and reason.  That problem comes up only later when faith has become, so to speak, propositional, when it has expressed itself in statements, creeds, systems.  Faith as a concrete mode of being of the human person precedes faith as the intellectual assent to a proposition, just as truth as a concrete mode of human being precedes the truth of any proposition.”

Although I see faith as belief as fundamentally flawed and dangerous, I can certainly respect the idea of faith as trust, and consequently I can respect the idea of faith as a concrete mode of being, where this mode of being is effectively an attitude of openness taken toward another.  And whereas I agree with Barrett’s claim that truth as a concrete mode of human being precedes the truth of any proposition, in the sense that a basis and desire for truth are needed prior to evaluating the truth value of any proposition, I don’t believe one can ever justifiably make the transition from faith as trust to faith as belief, for the simple reason that if one has good reason to trust another, then one doesn’t need faith to mediate any beliefs stemming from that trust.  And of course, what matters most here is describing and evaluating what the “Hebrew man of faith” consists of, rather than criticizing the concept of faith itself.

Another interesting historical development stemming from Hebraism pertains to the concept of faith as it evolved within Protestantism.  As Barrett tells us:

“Protestantism later sought to revive this face-to-face confrontation of man with his God, but could produce only a pallid replica of the simplicity, vigor, and wholeness of this original Biblical faith…Protestant man would never have dared confront God and demand an accounting of His ways.  That era in history had long since passed by the time we come to the Reformation.”

As an aside, it’s worth mentioning here that Christianity actually developed as a syncretism between Hebraism and Hellenism; the two very cultures under analysis in this chapter.  By combining Jewish elements (e.g. monotheism, the substitutionary atonement of sins through blood-magic, apocalyptic-messianic resurrection, interpretation of Jewish scriptures, etc.) with Hellenistic religious elements (e.g. dying-and-rising savior gods, virgin birth of a deity, fictive kinship, etc.), the cultural diffusion that occurred resulted in a religion that was basically a cosmopolitan personal salvation cult.

But eventually Christianity became a state religion (after the age of Constantine), resulting in a theocracy that heavily enforced a particular conception of God on the people.  Once this occurred, the religion was fully contained within a culture that made it very difficult to question these conceptions, or to confront anyone about them in order to seek justification for them.  And it may be that the intimidation propagated by the prevailing religious authorities became conflated with an attribute of God, where a fear of questioning any conception of God became integrated into theological belief about God.

Perhaps this was because the primary conceptions of God, once Christianity entered the Medieval Period, were externally imposed on everybody rather than discovered in a more personal and introspective way (even if such discovery is unavoidably initiated and reinforced by the external culture); the attributes of God were thus externalized, becoming more heavily influenced by the perceptibly unquestionable claims of those in power.  Either way, the historical contingencies surrounding the advent of Christianity involved sectarian battles, with the winning sect using its dogmatic authority to suppress the views (and the Gospels and scriptures) of the other sects.  And this may have inhibited people from ever questioning their God or demanding justification for what they interpreted to be God’s actions.

One final point in this section that I’d like to highlight is in regard to the concept of knowledge as it relates to Hebraism.  Barrett distinguishes this knowledge from that of the Greeks:

“We have to insist on a noetic content in Hebraism: Biblical man too had his knowledge, though it is not the intellectual knowledge of the Greek.  It is not the kind of knowledge that man can have through reason alone, or perhaps not through reason at all; he has it rather through body and blood, bones and bowels, through trust and anger and confusion and love and fear; through his passionate adhesion in faith to the Being whom he can never intellectually know.  This kind of knowledge a man has only through living, not reasoning, and perhaps in the end he cannot even say what it is he knows; yet it is knowledge all the same, and Hebraism at its source had this knowledge.”

I may not entirely agree with Barrett here, but any disagreement is only likely to be a quibble over semantics, relating to how he and I define noetic and how we each define knowledge.  The word noetic actually derives from the Greek adjective noētikos which means “intellectual”, and thus we are presumably considering the intellectual knowledge found in Hebraism.  Though this knowledge may not be derived from reason alone, I don’t think (as Barrett implies above) that it’s even possible to have any noetic content without at least some minimal use of reason.  It may be that the kind of intellectual content he’s alluding to is that which results from a kind of synthesis between reason and emotion or intuition, but it would seem that reason would still have to be involved, even if it isn’t as primary or dominant as in the intellectual knowledge of the Greek.

With regard to how Barrett and I are each defining knowledge, I must say that, like most other philosophers going all the way back to Plato, one must distinguish between knowledge and all other beliefs, because knowledge is merely a subset of one’s beliefs (otherwise any belief whatsoever would count as knowledge).  To distinguish knowledge as a subset of one’s beliefs, my definition of knowledge can be roughly stated as:

“Recognized patterns of causality that are stored in memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by having made successful predictions and/or successfully accomplished goals through the use of said recalled patterns.”

My conception of knowledge also takes what one might call unconscious knowledge into account (which may operate more automatically and less explicitly than conscious knowledge); as long as it is acquired information that allows you to accomplish goals and make successful predictions about the causal structure of your experience (whether internal feelings or other mental states, or perceptions of the external world), it counts as knowledge nevertheless.  Now there may be different degrees of reliability and usefulness of different kinds of knowledge (such as intuitive knowledge versus rational or scientific knowledge), but those distinctions don’t need to be parsed out here.
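
To make this more concrete, here’s a loose sketch of how one might operationalize that definition; it is purely my own toy illustration (the class, thresholds, and example are all invented for the sake of the sketch), where a recognized pattern only gets promoted to “knowledge” after its predictions consistently pan out:

```python
# Toy operationalization of the definition above (not a serious
# epistemology engine): a stored pattern of causality counts as
# knowledge only once its recalled predictions have consistently
# succeeded against reality.

class Pattern:
    def __init__(self, name):
        self.name = name
        self.successes = 0
        self.trials = 0

    def record(self, prediction_succeeded: bool):
        """Log one recalled use of the pattern and whether it worked."""
        self.trials += 1
        self.successes += int(prediction_succeeded)

    def counts_as_knowledge(self, threshold=0.9, min_trials=10) -> bool:
        """Does the pattern positively and consistently correlate with reality?"""
        if self.trials < min_trials:
            return False  # not yet validated by enough successful use
        return self.successes / self.trials >= threshold

sunrise = Pattern("the sun rises after night")
for _ in range(12):
    sunrise.record(True)  # each recalled prediction succeeds
print(sunrise.counts_as_knowledge())  # True: validated, so it qualifies
```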

What Barrett seems to be describing here as a Hebraistic form of knowledge is something deeply embodied in the human being, in the ways one lives one’s life that don’t involve, or aren’t dominated by, reason or conscious abstractions.  Instead, there seems to be a more organic process of viewing the world and interacting with it, one relying more heavily on intuition and emotionally-driven forms of expression.  But in order to give rise to beliefs that cohere with one another to some degree, that form some kind of overarching narrative, reason (I would argue) is still employed.  There’s still some logical structure to the web of beliefs found therein, even if reason and logic could be used more rigorously and explicitly to turn many of those beliefs on their head.

And when it comes to the Hebraistic conception of God, including the passionate adhesion in faith to a Being whom one can never intellectually know (as Barrett puts it), I think this can be better explained by the fact that we do have reason and rationality, unlike most other animals, along with cognitive biases such as hyperactive agency detection.  It seems to me that the Hebraistic God concept is more or less derived from an agglomeration of the unknown sources of one’s emotional experiences (especially those involving an experience of the transcendent) and the unpredictable attributes of life itself.  Agency is then ascribed to that ensemble of properties, and (to use Buber’s terms) a relationship is established with that perceived agency; in this case, the agent is simply referred to as God (Yahweh).

But inferring the existence of such an ensemble would seem to require a process of abstraction: moving from particular emotional and unpredictable elements and instances of one’s experience to the conclusion of some universal source for all of them.  Perhaps if this process is entirely unconscious we can say that reason wasn’t involved in it at all, but I suspect that the ascription of agency to something on par with the universe itself, with its large conjunction of properties varying over space and time and its unknowable character, is more likely mediated by a combination of cognitive biases, intuition, emotion, and some degree of rational conscious inference.  But Barrett’s point still stands that the noetic content in Hebraism isn’t dominated by reason as in the case of the Greeks.

2. Greek Reason

Even though existential philosophy is largely a rebellion against the Platonic ideas that have permeated Western thought, Barrett reminds us that there is still some existential aspect to Plato’s works.  Perhaps this isn’t surprising once one realizes that Plato actually began his adult life aspiring to be a dramatic poet.  He eventually abandoned this dream, undergoing a sort of conversion, and dedicated the rest of his life to a search for wisdom inspired by Socrates.  And even though he waged a lifelong war against the poets, he maintained a piece of his poetic character in his writings, up to the very end when he wrote his great creation myth, the Timaeus.

By far, Plato’s biggest contribution to Western thought was his differentiating rational consciousness from the rest of our human psyche.  Prior to this move, rationality was in some sense subsumed under the same umbrella as emotion and intuition.  And this may be one of the biggest changes to how we think, and one that is so often taken for granted.  Barrett describes the position we’re in quite well:

“We are so used today to taking our rational consciousness for granted, in the ways of our daily life we are so immersed in its operations, that it is hard at first for us to imagine how momentous was this historical happening among the Greeks.  Steeped as our age is in the ideas of evolution, we have not yet become accustomed to the idea that consciousness itself is something that has evolved through long centuries and that even today, with us, is still evolving.  Only in this century, through modern psychology, have we learned how precarious a hold consciousness may exert upon life, and we are more acutely aware therefore what a precious deal of history, and of effort, was required for its elaboration, and what creative leaps were necessary at certain times to extend it beyond its habitual territory.”

Barrett’s mention of evolution is important here, because I think we ought to distinguish between the two forms of evolution that have affected how we think and experience the world.  On the one hand, we have our biological evolution to consider, where our brains have undergone dramatic changes in terms of the level of consciousness we actually experience and the degree of causal complexity that the brain can model; and on the other hand, we have our cultural evolution to consider, where our brains have undergone a number of changes in terms of the kinds of “programs” we run on them, how those programs are run and prioritized, and how we process and store information in various forms of external hardware.

In regard to biological evolution: long before our ancestors evolved into humans, the brain had nothing more than a protoself representation of the body (to use the neuroscientist Antonio Damasio’s terminology); that is, nothing more than an unconscious state serving as a basic map in the brain, tracking the overall state of the body as a single entity in order to accomplish homeostasis.  Beyond this protoself there evolved a more sophisticated level of core consciousness, where the organism became conscious of the feelings and emotions associated with the body’s internal states, and conscious that its feelings and thoughts were its own, further enhancing the sense of self, though still limited to the here-and-now of the present moment.  Finally, beyond this layer of self there evolved an extended consciousness: a form of consciousness requiring far more memory, but one which brought the organism into an awareness of the past and future, forming an autobiographical self with a perceptual simulator (imagination) that in some sense transcends space and time.

Once humans developed language, we began to undergo our second form of evolution, namely cultural evolution.  After cultural evolution took off, human civilization produced a plethora of new words, concepts, skills, interests, and ways of looking at and structuring the world.  This evolutionary step was vastly accelerated by written language, the scientific method, and eventually the invention of computers.  But in the time leading up to the scientific revolution, the industrial revolution, and finally the advent of computers and the information age, it was arguably Plato’s conceptualization of rational consciousness that paved the way to those technological and epistemological feats.

It was Plato’s analysis of the human psyche, and his effectively distilling the process of reason and rationality from the rest of our thought processes, that allowed us to manipulate it, to enhance it, and to explore the true possibilities of this markedly human cognitive capacity.  Aristotle and many others since have built upon Plato’s monumental work, developing formal logic, computation, and other means of abstract analysis and information processing.  With the explicit use of reason, we’ve been able to overcome many of our cognitive biases (reason serving as a kind of “software patch” to our cognitive bugs) in order to discover many true facts about the world, unlocking a wealth of knowledge that had previously been inaccessible to us.  And it’s important to recognize that Plato’s impact on the history of philosophy highlights, more than anything else, the overall psychic evolution of our species.

Despite all the benefits that culture has brought us, there has been one inherent problem with the cultural evolution we’ve undergone: a large desynchronization between our cultural and biological evolution.  That is to say, our culture has evolved far, far faster than our biology ever could, and thus our biology hasn’t kept up with, or adapted us to, the cultural environment we’ve been creating.  And I believe this is a large source of our existential woes; for we have become a “fish out of water” (so to speak) where modern civilization and the way we’ve structured our day-to-day lives is incredibly artificial, filled with a plethora of supernormal stimuli and other factors that aren’t as compatible with our natural psychology.  It makes perfect sense then that many people living in the modern world have had feelings of alienation, loneliness, meaninglessness, anxiety, and disorientation.  And in my opinion, there’s no good or pragmatic way to fix this aside from engineering our genes such that our biology is able to catch up to our ever-changing cultural environment.

It’s also important to recognize that Plato’s idea of the universal, explicated in his theory of Forms, was one of the main impetuses for contemporary existential philosophy; not because existentialism endorsed it, but because the idea that a universal such as “humanity” was somehow more real than any actual individual person fueled a revolt against such notions.  And it wasn’t until Kierkegaard and Nietzsche appeared on the scene in the nineteenth century that we saw an explicit attempt to reverse this Platonic idea, with the individual finally given precedence over the universal, and a philosophy of existence given priority over one of essence.  But one thing the existentialists did retain from Plato was his conception of philosophizing as a personal means of salvation and transformation, and as a concrete way of living (though this was, according to Plato, best exemplified by his teacher Socrates).

As for the modern existentialists, it was Kierkegaard who eventually brought the figure of Socrates back to life, long after Plato’s later portrayal of Socrates, which had effectively dehumanized him:

“The figure of Socrates as a living human presence dominates all the earlier dialogues because, for the young Plato, Socrates the man was the very incarnation of philosophy as a concrete way of life, a personal calling and search.  It is in this sense too that Kierkegaard, more than two thousand years later, was to revive the figure of Socrates - the thinker who lived his thought and was not merely a professor in an academy - as his precursor in existential thinking…In the earlier, so-called “Socratic”, dialogues the personality of Socrates is rendered in vivid and dramatic strokes; gradually, however, he becomes merely a name, a mouthpiece for Plato’s increasingly systematic views…”

By the time we get to Plato’s student, Aristotle, philosophy had become a purely theoretical undertaking, effectively replacing the subjective qualities of a self-examined life with a far less visceral objective analysis.  Indeed, by this point in time, as Barrett puts it: “…the ghost of the existential Socrates had at last been put to rest.”

As in all cases throughout history, we must take the good with the bad.  We very likely wouldn’t have the sciences today had it not been for Plato detaching reason from the poetic, religious, and mythological elements of culture and thought, thus giving reason its own identity for the first time in history.  In any case, we need to understand Greek rationalism as best we can so that we can understand the motivations of those who later objected to it, especially within modern existentialism.

When we look back at Aristotle’s Nicomachean Ethics, for example, we find a fairly balanced perspective on human nature and the many motivations that drive our behavior.  But, in evaluating all the possible goods that humans can aim for, in order to answer the ethical question of what one ought to do above all else, Aristotle claimed that the highest life one could attain was the life of pure reason, the life of the philosopher and the theoretical scientist.  Aristotle thought that reason was the highest part of our personality, treating one’s capacity for reason as the center of one’s real self and the core of one’s identity.  It is this stark form of rationalism that diffused through Western philosophy until bumping heads with modern existentialist thought.

Aristotle’s rationalism even permeated the Christianity of the Medieval period, where it maintained an admittedly uneasy relationship between faith and reason as the center of the human personality.  And the quest for a complete view of the cosmos further propagated the valuing of reason as the highest human function.  The inherent problems with this view didn’t surface until later thinkers began to see human existence, and the potential of our reason, as having a finite character with limitations.  Only then was the grandiose dream of having a complete knowledge of the universe and of our existence finally shattered.  Once this goal was seen as unattainable, we were left alone on an endless road of knowledge with no prospect of any satisfying conclusion.

We can certainly appreciate the value of theoretical knowledge, and even develop a passion for discovering it, but we mustn’t lose sight of the embodied, subjective qualities of our human nature; nor can we successfully argue any longer that the latter can be dismissed in pursuit of some absolute state of knowledge or being.  That goal is not within our reach, and so any argument relying on its possibility is nothing more than an exercise in futility.

So now we must ask ourselves a very important question:

“If man can no longer hold before his mind’s eye the prospect of the Great Chain of Being, a cosmos rationally ordered and accessible from top to bottom to reason, what goal can philosophers set themselves that can measure up to the greatness of that old Greek ideal of the bios theoretikos, the theoretical life, which has fashioned the destiny of Western man for millennia?”

I would argue that the most important goal for philosophy has been, and always will be, the quest to discover as many moral facts as possible, such that we can attain eudaimonia as Aristotle advocated.  But rather than holding a life of reason as the greatest good to aim for, we should simply aim for maximal life fulfillment and overall satisfaction with one’s life, without presupposing what will accomplish that.  We need to use science and subjective experience to inform us of what makes us happiest and most fulfilled (taking advantage of the psychological sciences), rather than making assumptions about what does this best and simply arguing from the armchair.

And because our happiness and life fulfillment are necessarily evaluated through subjectivity, we mustn’t make the same mistake as positivism and simply discard our subjective experience.  Rather, we should approach morality with the recognition that we are each an embodied subject with emotions, intuitions, and feelings that motivate our desires and our behavior.  But we should also ensure that we employ reason and rationality in our moral analysis, so that our irrational predispositions don’t lead us farther away from any objective moral truths waiting to be discovered.  I’m sympathetic to a quote of Kierkegaard’s that I mentioned at the beginning of this post:

“What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act.”

I agree with Kierkegaard here, in that moral imperatives are the most important topic in philosophy, and should be the most important driving force in one’s life, rather than simply a drive for knowledge for its own sake.  But of course, in order to get clear about what one must do, one first has to know a number of facts pertaining to how any moral imperatives are best accomplished, and what those moral imperatives ought to be (as well as which are most fundamental and basic).  I think the most basic axiomatic moral imperative within any moral system that is sufficiently motivating to follow is going to be that which maximizes one’s life fulfillment, that which maximizes one’s chance of achieving eudaimonia.  I can’t imagine any greater goal for humanity than that.

Here is the link for part 5 of this post series.

It’s Time For Some Philosophical Investigations

Ludwig Wittgenstein’s Philosophical Investigations is a nice piece of work in which he attempts to explain his views on language and the consequences of those views for various subjects like logic, semantics, cognition, and psychology.  I’ve mentioned some of his views very briefly in a couple of earlier posts, but I wanted to delve into his work in a little more depth here and comment on what strikes me as most interesting.  Lately, I’ve been looking back at some of the books I’ve read from various philosophers, wanting to revisit them so I can explore them in more detail and share how they connect to some of my own thoughts.  Alright…let’s begin.

Language, Meaning, & Their Probabilistic Attributes

He opens his Philosophical Investigations with a quote from St. Augustine’s Confessions that describes how a person learns a language.  St. Augustine believed that this process involved simply learning the names of objects (for example, by someone else pointing to the objects so named) and then stringing them together into sentences.  Wittgenstein points out that this is true to some trivial degree, but that it overlooks a much more fundamental relationship between language and the world.  For Wittgenstein, the meaning of a word cannot simply be attached to an object the way a name can.  The meaning of a word or concept has much more of a fuzzy boundary, as it depends on a breadth of contexts and associations with other concepts.  He analogizes this plurality in the meanings of words to the resemblance between members of a family: while there may be some resemblance between different uses of a word or concept, we aren’t able to formulate a strict definition that fully describes this resemblance.

One problem, then, especially within philosophy, is that many people assume the meaning of a word or concept is fixed with sharp boundaries (just like the fixed structure of words themselves).  Wittgenstein wants to disabuse people of this false notion (much as Nietzsche tried to do before him) so that they can stop misusing language, as he believed this misuse was the cause of many (if not all) of the major problems that had cropped up in philosophy over the centuries, particularly in metaphysics.  Since meaning is actually somewhat fluid and can’t be accounted for by any fixed structure, Wittgenstein thinks that any meaning we can attach to these words is ultimately going to be determined by how those words are used.  And since he ties meaning to use, and since this use occurs in our social forms of life, it has an inextricably external character.  Thus, the only way to determine whether someone else has a particular understanding of a word or concept is through their behavior in response to, or in association with, the use of the word(s) in question.  This is especially important in the case of ambiguous sentences, which Wittgenstein explores to some degree.

Probabilistic Shared Understanding

Some of what Wittgenstein is trying to point out here are what I like to refer to as the inherently probabilistic attributes of language.  And it seems to me to be probabilistic for a few different reasons, beyond what Wittgenstein seems to address.  First, there is no guarantee that what one person means by a word or concept exactly matches the meaning from another person’s point of view; but at the very least there is almost always going to be some amount of semantic overlap (possibly 100% in some cases) between the two individuals’ intended meanings, and so there is going to be some probability that the speaker and the listener do in fact share a complete understanding.  It seems reasonable to argue that simpler concepts will have a higher probability of complete semantic overlap, whereas more complex concepts are more likely to leave a gap in that shared understanding.  And I think this is true even if we can’t actually calculate what any of these probabilities are.
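
To make this talk of semantic overlap a little more tangible, here’s a toy sketch; it is a deliberate caricature (real concepts obviously aren’t finite feature sets, and the particular features below are invented), but it shows how a simple concept can yield near-total overlap while a complex one leaves a gap:

```python
# Caricature of semantic overlap: reduce each person's concept of a
# word to a set of associated features, and use the overlap between
# the two sets as a rough proxy for the probability of a complete
# shared understanding.

def semantic_overlap(features_a: set, features_b: set) -> float:
    """Jaccard overlap between two feature sets, in [0, 1]."""
    if not features_a and not features_b:
        return 1.0
    return len(features_a & features_b) / len(features_a | features_b)

# A simple concept: the associations line up exactly.
four_a = {"quantity", "successor of three", "even"}
four_b = {"quantity", "successor of three", "even"}

# A complex concept: plenty of idiosyncratic associations.
love_a = {"affection", "commitment", "romance", "sacrifice"}
love_b = {"affection", "passion", "attachment", "jealousy"}

print(semantic_overlap(four_a, four_b))  # 1.0  -> complete overlap
print(semantic_overlap(love_a, love_b))  # ~0.14 -> a large gap in understanding
```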

Now my use of the word meaning here differs from Wittgenstein’s, because I am referring to something that is not exclusively shared by all parties involved; I am pointing to something internal (a subjective understanding of a word) rather than external (the shared use of a word).  But I think this move is necessary if we are to capture all of the attributes that people explicitly or implicitly refer to with a concept like meaning.  It seems better to compromise with Wittgenstein’s thinking and refer to the meaning of a word as a form of understanding that is intimately connected with its use, but which involves elements that are not exclusively external.

We can justify this version of meaning through an example.  If I help teach you how to ride a bike and explain that this activity is called biking or to bike, then we can use Wittgenstein’s conception of meaning and it will likely account for our shared understanding; I have no qualms about that, and I’d agree with Wittgenstein that this is perhaps the most important function of language.  But it may be the case that you come away with the understanding that biking is an activity that can only happen on Tuesdays, because that happened to be the day I taught you how to ride.  Though I never intended for you to understand biking in this way, there was no immediate way for me to infer that you had this misunderstanding on the day I taught you.  I could only learn of it if you explicitly explained your understanding of the term in enough detail, or if I asked you additional questions, such as whether you’d like to bike on a Wednesday (for example), with you answering “I don’t know how to do that as that doesn’t make any sense to me.”  Wittgenstein doesn’t account for this gap in understanding in his conception of meaning, and I think this makes for a far less useful conception.

Now I think that Wittgenstein is still right in the sense that the only way to determine someone else’s understanding (or lack thereof) of a word is through their behavior, but due to chance as well as where our attention is directed at any given moment, we may never see the right kinds of behavior to rule out any number of possible misunderstandings, and so we’re apt to just assume that these misunderstandings don’t exist because language hides them to varying degrees.  But they can and in some cases do exist, and this is why I prefer a conception of meaning that takes these misunderstandings into account.  So I think it’s useful to see language as probabilistic in the sense that there is some probability of a complete shared understanding underlying the use of a word, and thus there is a converse probability of some degree of misunderstanding.

Language & Meaning as Probabilistic Associations Between Causal Relations

A second way in which language is probabilistic stems from the fact that the unique meanings associated with any particular use of a word or concept, as understood by an individual, are derived from probabilistic associations between various inferred causal relations.  And I believe that this is the underlying cause of most of the problems that Wittgenstein was trying to address in this book.  He may not have been thinking about the problem in this way, but it can account for the fuzzy boundary problem associated with trying to define the meaning of words, since this probabilistic structure underlies our experiences, our understanding of the world, and our use of language, such that none of it can be represented by static, sharp definitions.  When a person is learning a concept, like redness, they undergo a number of observations, infer what is common to all of those experiences, and then extract a particular subset of what is common and associate it with a word like red or redness (as opposed to another commonality like objectness, or roundness, or what-have-you).  But in most cases, separate experiences of redness are going to be different instantiations of redness with different hues, textures, shapes, etc., which means that redness gets associated with a large range of different qualia.

If you come across a new quale that seems to more closely match previous experiences associated with redness rather than orangeness (for example), I would argue that this is because the brain has assigned a higher probability to that quale being an instance of redness as opposed to, say, orangeness.  And the brain may very well test the hypothesis of the quale matching the concept of redness against the concept of orangeness; depending on your previous experiences of both, and the present context of the experience, your brain may assign a higher probability to orangeness instead.  Perhaps if a red-orange colored object is mixed in with a bunch of unambiguously orange-colored objects, it will be perceived as a shade of orange (to match the rest of the set), but if the case were reversed and it were mixed in with a bunch of unambiguously red-colored objects, it will be perceived as a shade of red instead.
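
This hypothesis-testing framing can be captured with a back-of-the-envelope Bayesian calculation.  The numbers below are invented purely for illustration: the ambiguous hue fits both categories reasonably well, and the surrounding objects shift the prior over the two color categories:

```python
# Two competing hypotheses ("red" vs "orange") for the same ambiguous
# sensory input, with context setting the prior. All numbers invented.

def posterior_red(likelihood_red, likelihood_orange, prior_red):
    """P(red | input) via Bayes' rule over the two hypotheses."""
    prior_orange = 1.0 - prior_red
    evidence = likelihood_red * prior_red + likelihood_orange * prior_orange
    return likelihood_red * prior_red / evidence

# An ambiguous red-orange hue: it fits both categories fairly well.
p_input_given_red = 0.5
p_input_given_orange = 0.6

# Context 1: the object sits among unambiguously red objects.
print(posterior_red(p_input_given_red, p_input_given_orange, prior_red=0.8))
# ~0.77 -> perceived as a shade of red

# Context 2: the same object sits among unambiguously orange objects.
print(posterior_red(p_input_given_red, p_input_given_orange, prior_red=0.2))
# ~0.17 -> perceived as a shade of orange
```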

Since our perception of the world depends on context, the meanings we assign to words or concepts also depend on context; not only in the sense of choosing a different use of a word (as Wittgenstein argues) in some language game or other, but also by perceiving the same incoming sensory information as conceptually different qualia (as in the aforementioned case of a red-orange colored object).  In that case, we weren’t intentionally using red or orange in a different way, but rather were assigning one word or the other to the exact same sensory information (with respect to the red-orange object), depending on what else was happening in the scene surrounding that subset of sensory information.  To me, this highlights how meaning can be fluid in multiple ways: some of that fluidity stems from our conscious intentions, and some of it from unintentional forces at play involving our prior expectations within some context, which directly modify our perceived experience.

This can also be seen through Wittgenstein’s example of what he calls a duckrabbit, an ambiguous image that can be perceived as a duck or a rabbit.  I’ve taken the liberty of inserting this image here along with a copy of it which has been rotated in order to more strongly invoke the perception of a rabbit.  The first image no doubt looks more like a duck and the second image, more like a rabbit.

Now Wittgenstein says that when one is looking at the duckrabbit and sees a rabbit, they aren’t interpreting the picture as a rabbit but are simply reporting what they see.  But in the case where a person sees a duck first and then later sees a rabbit, Wittgenstein isn’t sure what to make of it.  He claims to be sure, however, that it can’t be the case that the external world stays the same while an internal cognitive change takes place.  Wittgenstein was incorrect on this point, because the external world doesn’t change (in any relevant sense) whether we see the duck or the rabbit; and he never demonstrates why two different perceptions would require a change in the external world.  The fact of the matter is, you can stare at this static picture and ask yourself to see a duck or to see a rabbit, and it will affect your perception accordingly.  This is partially accomplished by mentally rotating the image in your imagination and seeing whether that changes how well it matches one conception or the other; since it matches both conceptions to a high degree, you can easily perceive it either way.  Your brain is simply processing competing hypotheses to account for the incoming sensory information, and the top-down predictions of rabbitness or duckness (which you’ve acquired over past experiences) actually change the way you perceive it, with no change required in the external world (despite Wittgenstein’s assertion to the contrary).

To give yet another illustration of the probabilistic nature of language, imagine the head of a bald man and ask yourself: if you were to add one hair at a time to this bald man’s head, at what point does he lose the property of baldness?  If hairs were slowly added and you were to simply say “Stop!  Now he’s no longer bald!” at some particular time, there’s no doubt in my mind that if this procedure were repeated (even if the hairs were added in exactly the same order), you would say “Stop!  Now he’s no longer bald!” at a different point in the transition.  Similarly, if you were looking at a light that was changing color from red to orange, and were asked to say when the color had changed to orange, you would pick a point in the transition that falls within some margin of error, but it wouldn’t be an exact, repeatable point (there’s a little simulation of this variability at the end of this section).  We could run this thought experiment with all sorts of concepts attached to words, like cat and dog; for example, use a computer graphics program to seamlessly morph a picture of a cat into a picture of a dog and ask at what point the cat “turned into” a dog.  The answer is going to be based on a probability of coincident features that you detect, which can vary over time.  Here’s a series of pictures showing a chimpanzee morphing into Bill Clinton to better illustrate this point:

At what point do we stop seeing a picture of a chimpanzee and start seeing a picture of something else?  When do we first see Bill Clinton?  What if I expanded this series of 15 images into a series of 1,000 images, so that the transition happened even more gradually?  It would be highly unlikely for you to pick the exact same point in the transition twice in a row if the images weren’t numbered or arranged in a grid.  We can analogize this phenomenon with an ongoing problem in science known as the species problem: the inherent difficulty of defining exactly what a species is, which is necessary if one wants to determine if and when one species evolves into another.  This problem arises because the evolutionary processes giving rise to new species are relatively slow and continuous, whereas sorting those organisms into sharply defined categories eliminates that generational continuity and replaces it with discrete steps.

And we can see this effect in the series of images above, where each picture could represent some large number of generations in an evolutionary timeline, such that each picture/organism looks roughly like the “parent” or “child” of the picture/organism adjacent to it.  Despite this continuity, if we look at the first picture and the last one, they look like pictures of distinct species.  So if we want to categorize the first and last pictures as distinct species, we create a problem when trying to account for every picture/organism lying within that transition.  Similarly, words take on an appearance of strict categorization (of meaning) when in actuality any underlying meaning attached to them is probabilistic and dynamic.  And as Wittgenstein pointed out, this makes it more appropriate to consider meaning as use, so that the probabilistic and dynamic attributes of meaning aren’t lost.

Now you may think you can get around this problem of fluidity or fuzzy boundaries with concepts that are simpler and more abstract, like the concept of a particular quantity (say, a quantity of four objects) or other concepts in mathematics.  But in order to learn such a concept in the first place, like quantity, and then associate particular instances of it with a word, like four, one had to be presented with a number of experiences and infer what was common to all of them (as was the case with redness mentioned earlier).  And this inference (I would argue) involves a probabilistic process as well; it’s just that the resulting probability of our inferring particular experiences as an instance of four objects is incredibly high, and therefore repeatable and relatively unambiguous.  That kind of inference is likely to be sustained no matter what the context, and it is likely to be shared by two individuals with 100% semantic overlap (i.e. it’s almost certain that what I mean by four is exactly what you mean by four, even though this is almost certainly not the case for a concept like love or consciousness).  This makes mathematical concepts qualitatively different from other concepts (especially those that are more complex or that more closely map onto reality), but it doesn’t negate their having a probabilistic attribute or foundation.
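
Coming back to the bald-man thought experiment, the variability I described is easy to simulate.  In the toy below (with an entirely made-up soft boundary around 10,000 hairs), the judgment of “bald” is a smooth probability rather than a sharp cutoff, so the reported stopping point differs on every run:

```python
# Toy sorites simulation: the judgment "bald" has a soft probabilistic
# boundary, so repeating the procedure yields a different "Stop!" point
# each time. The boundary and width are made-up numbers.

import math
import random

def p_bald(hairs, boundary=10_000, width=2_000):
    """Probability of judging the head 'bald' at this hair count."""
    return 1.0 / (1.0 + math.exp((hairs - boundary) / width))

def run_trial(step=100):
    """Add hairs until this trial's judgment flips to 'not bald'."""
    hairs = 0
    while random.random() < p_bald(hairs):
        hairs += step
    return hairs

print([run_trial() for _ in range(5)])
# e.g. five different hair counts -- no exact, repeatable transition point
```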

Looking at the Big Picture

Though this discussion of language and meaning is not an exhaustive analysis of Wittgenstein’s Philosophical Investigations, it represents an analysis of the main theme present throughout.  His main point was to shed light on the disparity between how we often think of language and how we actually use it.  When we stray from the way language is actually used in our everyday lives, in one form of social life or another, and instead misuse it (as in philosophy), this creates all sorts of problems and unwarranted conclusions.  He also wants his readers to realize that the ultimate goal of philosophy should not be to construct metaphysical theories and deep explanations underlying everyday phenomena, since these are often born of unwarranted generalizations and other assumptions stemming from how our grammar is structured.  Instead, we ought to subdue these temptations to generalize and to be dogmatic, and use philosophy as a kind of therapeutic tool to keep our abstract thinking in check and to better understand ourselves and the world we live in.

Although I disagree with some of Wittgenstein’s claims about cognition (in terms of how intimately it is connected to the external world) and take some issue with his arguably less useful conception of meaning, he makes a lot of sense overall.  Wittgenstein was clearly hitting upon a real difference between the way actual causal relations in our experience are structured and how those relations are represented in language.  Personally, I think that work within philosophy is moving in the right direction if the contributions made therein lead us to make more successful predictions about the causal structure of the world.  And I believe this to be so even if this progress includes generalizations that may not be exactly right.  As long as we can structure our thinking to make more successful predictions, then we’re moving forward as far as I’m concerned.  In any case, I liked the book overall and thought that the interlocutory style gave the reader a nice break from the typical form seen in philosophical argumentation.  I highly recommend it!

Looking Beyond Good & Evil

Nietzsche’s Beyond Good and Evil serves as a thorough overview of his philosophy and is definitely one of his better works (although I think Thus Spoke Zarathustra is much more fun to read).  It’s definitely worth reading (if you haven’t already) to put a fresh perspective on many widely held philosophical assumptions that are often taken for granted.  While I’m not interested in all of the claims he makes (and he makes many, in nearly 300 aphorisms spread over nine chapters), there are at least several ideas that I think are worth analyzing.

One theme presented throughout the book is Nietzsche’s disdain for classical conceptions of truth and for any form of absolutism or dogmatism.  With regard to truth, or his study of the nature of truth (what we could call his alethiology), he subscribes to a view he coined perspectivism.  For Nietzsche, there are no absolute truths but only different perspectives on reality and our understanding of it.  So he insists that we shouldn’t get stuck in the mud of dogmatism, and should instead try to view what is or isn’t true with an open mind and from as many points of view as possible.

Nietzsche’s view here is fueled in part by his belief that the universe is in a state of constant change, as well as his belief in the fixity of language.  Since the universe is constantly changing, language generally fails to capture this dynamic essence.  And since philosophy is inherently connected to the use of language, false inferences and dogmatic conclusions will often manifest from it.  Wittgenstein belonged to a similar school of thought (likely building off of Nietzsche’s contributions), claiming that most of the problems in philosophy had to do with the limitations of language, and therefore that those philosophical problems could only be solved (if at all) through an in-depth evaluation of the properties of language and how they relate to its use.

This school of thought certainly has merit, given that language has a relatively fixed syntactic structure yet a dynamic, probabilistic semantic structure.  We need language to communicate our thoughts to one another, and this requires some kind of consistency or stability in its syntactic structure.  But the meaning behind the words we use is established through use, through context, and ultimately through associations between probabilistic conceptual structures and relatively stable or fixed visual and audible symbols (written or spoken words).  Since the semantic structure of language has fuzzy boundaries, yet is attached to relatively fixed words and grammar, it produces the illusion of a reality that is essentially unchanging.  And this results in the kinds of philosophical problems and dogmatic thinking that Nietzsche warns us of.

It’s hard to disagree with Nietzsche’s view that dogmatism is to be avoided at all costs, and that absolute truths are generally specious at best (and dangerous at worst); philosophy owes a lot to Nietzsche for pointing out the need to reject this kind of thinking.  His rejection of absolutism and dogmatism is made especially clear in his views on the common conceptions of God and morality.  He points out how much these concepts have changed over the centuries (the meaning of good, for example, has undergone a complete reversal throughout its history), despite the fact that the proponents of any particular view of God or morality believe that these concepts have never changed and will never change.

Nietzsche believes that all change is ultimately driven by a will to power, where this will is a sort of instinct for autonomy that also consists of a desire to impose one’s will onto others.  So the meanings of these concepts (such as God or morality) and countless others have only changed because they’ve been re-appropriated by different or conflicting wills to power.  As such, he thinks that the meaning and interpretation of a concept illustrate the attributes of the particular will making use of that concept, rather than some absolute truth about reality.  I think this idea makes a lot of sense if we regard the will to power as encompassing not only the desires belonging to any individual (most especially their desire for autonomy), but also the inferences they’ve made about reality, its apparent causal relations, etc., which provide the content for those desires.  So any will to power is effectively colored by the world view held by the individual, and this world view, or set of inferred causal relations, includes one’s inferences pertaining to language and any meaning ascribed to the words and concepts one uses.

Even more interesting to me is the distinction Nietzsche makes between an unrefined and a refined version of the will to power.  The unrefined form takes the desire for autonomy and directs it outward (perhaps more instinctually) in order to dominate the wills of others.  The refined version, on the other hand, takes this desire for autonomy and directs it inward, toward oneself, manifesting as a kind of self-directed cruelty that causes a person to constantly struggle to make themselves stronger and more independent, and to develop a deeper character and perhaps even a deeper understanding of themselves and their view of the world.  Both manifestations of the will to power seek to maximize one’s power over as much as possible, but Nietzsche believed the refined version to be superior and ultimately a more effective form of power.

We can better illustrate this view by considering a person who tries to dominate others to gain power in part because they lack the ability to gain power over their own autonomy, and then compare this to a person who gains control over their own desires and autonomy and therefore doesn’t need to compensate for any inadequacy by dominating others.  A person who feels a need to dominate others is in effect dependent on those subordinates (and dependence implies a certain lack of power), but a person who increases their power over themselves gains more independence and thus a form of freedom that is otherwise not possible.

I like this internal/external distinction that Nietzsche makes, but I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies for fulfilling one’s desire for maximal autonomy, with the added caveat that this autonomy serves to fulfill a desire for maximal causal power, harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try to change the world in certain ways to fulfill this desire (including through the domination of other wills to power); on the other, we can try to change ourselves and our view of the world (up to and including changing our desires, if we find them misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), thereby regaining a form of power over our experience and understanding of the world.

I’ve argued elsewhere that our view of the world, as well as our actions and desires, can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could say that our will to power depends on our predictions of the world and its many apparent causal relations.  We could then equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires).  Looking at autonomy and the will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if one subjugates the other’s will to power to one’s own.  And one can also increase the success of one’s predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing one’s desires and actions as well.
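
To sketch what I mean by tying autonomy to predictive success, here’s a cartoon of the predictive-processing idea; it is just a simple error-driven update rule with made-up numbers, not the actual machinery of the theory:

```python
# Cartoon of prediction-error minimization: an agent keeps a running
# prediction of some quantity in the world, compares it with what it
# observes, and nudges the prediction in proportion to the error.
# Lower average error = greater "predictive success" in the sense above.

def update(prediction, observation, learning_rate=0.3):
    error = observation - prediction
    return prediction + learning_rate * error, abs(error)

world = [5.0, 5.2, 4.9, 5.1, 5.0, 8.0, 8.1, 7.9]  # the world shifts mid-stream
prediction = 0.0
for observation in world:
    prediction, err = update(prediction, observation)
    print(f"predicted {prediction:5.2f}  error {err:4.2f}")

# Errors shrink as the model adapts, spike when the world changes, then
# shrink again -- being able to revise one's predictions (rather than
# staying locked into them) is what overcoming dogmatism looks like here.
```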

Another thing to consider about a desire for autonomy is that it is better formulated if it includes whatever is required for its own sustainability.  Dominating other wills to power will often serve to promote a sustainable autonomy for the dominator, because those other wills are then less likely to become dominators themselves and reverse the direction of dominance, and this preserves the autonomy of the dominating will to power.  This shows how this particular external expression of a will to power could be naturally selected for (under certain circumstances at least), which Nietzsche himself even argued (though in an anti-Darwinian form, since the unit of selection here is not genes but behaviors).  This type of behavioral selection would explain its prevalence in the animal kingdom, including in a number of primate species aside from human beings.  I do think, however, that we’ve found many ways of overcoming the need or impulse to dominate, and that this has a lot to do with having adopted social contract theory, since in my view it provides a way of maximizing the average will to power for all parties involved.

Coming back to Nietzsche’s take on language, truth, and dogmatism, we can also see that an increasingly potent will to power is more easily achievable if it is able to formulate and test new tentative predictions about the world, rather than being locked into some fixed set of predictions (which is dogmatism at its core).  Being able to adapt one’s predictions is equivalent to considering and adopting a new point of view, a capability which Nietzsche described as inherent in any free spirit.  It also amounts to being able to more easily free ourselves from the shackles of language, just as Nietzsche advocated, since new points of view affect the meaning that we ascribe to words and concepts.  I would add that new points of view can also increase our chances of making more successful predictions that constitute our understanding of the world (and ourselves), because we can test them against our previous world view and see whether this leads to more or less error, better parsimony, and so on.

Nietzsche’s hope was that one day all philosophy would be flexible enough to overcome its dogmatic prejudices, its claims of absolute truths, including those revolving around morality and concepts like good and evil.  He sees these concepts as nothing more than superficial expressions of one particular will to power or another, and thus he wants philosophy to eventually move itself beyond good and evil.  Personally, I am a proponent of an egoistic goal theory of morality, which grounds all morality on what maximizes the satisfaction and life fulfillment of the individual (which includes cultivating virtues such as compassion, honesty, and reasonableness), and so I believe that good and evil, when properly defined, are more than simply superficial expressions.

But I agree with Nietzsche in part, because I think these concepts have been poorly defined and misused such that they appear to have no inherent meaning.  And I also agree with Nietzsche in that I reject moral absolutism and its associated dogma (as found in religion most especially), because I believe morality to be dynamic in various ways, contingent on the specific situations we find ourselves in and the ever-changing idiosyncrasies of our human psychology.  Human beings often find themselves in completely novel situations and cultural evolution is making this happen more and more frequently.  And our species is also changing because of biological evolution as well.  So even though I agree that moral facts exist (with an objective foundation), and that a subset of these facts are likely to be universal due to the overlap between our biology and psychology, I do not believe that any of these moral facts are absolute because there are far too many dynamic variables for an absolute morality to work even in principle.  Nietzsche was definitely on the right track here.

Putting this all together, I’d like to think that our will to power has an underlying impetus, namely a drive for maximal satisfaction and life fulfillment.  If this is true then our drive for maximal autonomy and control over our experience and understanding of the world serves to fulfill what we believe will maximize this overall satisfaction and fulfillment.  However, when people are irrational, dogmatic, and/or are not well-informed on the relevant facts (and this happens a lot!), this is more likely to lead to an unrefined will to power that is less conducive to achieving that goal, where dominating the wills of others and any number of other immoral behaviors overtakes the character of the individual.  Our best chance of finding a fulfilling path in life (while remaining intellectually honest) is going to require an open mind, a willingness to criticize our own beliefs and assumptions, and a concerted effort to try and overcome our own limitations.  Nietzsche’s philosophy (much of it at least) serves as a powerful reminder of this admirable goal.

Atheism, Morality, and Various Thoughts of the Day…

I’m sick of anti-intellectuals and the rest assuming that all atheists are moral nihilists, moral relativists, post/modernists, proponents of scientism, etc.  ‘Dat ain’t the case.  Some of us respect philosophy and understand full well that even science requires an epistemological, metaphysical, and ethical foundation in order to work at all and to ground all of its methodologies.  Some atheists are even keen on some form of panpsychism (like Chalmers’ or Strawson’s views).

Some of us even subscribe to a naturalistic worldview that holds onto meaning despite the logical impossibility of libertarian free will (hint: it has to do with living a moral life, which means living a fulfilling life and maximizing one’s satisfaction through a rational assessment of all the available information — which entails BAYESIAN reasoning — including the information pertaining to one’s own subjective experience of fulfillment and sustainable happiness).  Some of us atheists/philosophical naturalists/what-have-you are moral realists as well, and therefore reject relativism, believing that objective moral facts DO in fact exist (and that science can therefore find them), even if many of those facts are entailed within a situational ethical framework.  Some of us believe that at least some moral facts are universal, but this shouldn’t be confused with moral absolutism, since both are merely independent subsets of realism.  I find absolutism to be intellectually and morally repugnant and epistemologically unjustifiable.
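
And since I invoked Bayesian reasoning above, here’s the basic update it refers to, with invented numbers (nothing more than the textbook formula):

```python
# Bayes' theorem over a hypothesis H and evidence E: revise confidence
# in H after weighing how likely E is under H versus under not-H.

def bayes(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

# Start agnostic, then weigh a piece of evidence that is three times
# likelier if the hypothesis is true than if it is false.
print(bayes(prior_h=0.5, p_e_given_h=0.6, p_e_given_not_h=0.2))
# 0.75 -> belief strengthened, but nowhere near certainty
```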

Also, a note for any theists out there: when comparing arguments for and against the existence of a God or gods (and the “Divine Command Theory” that accompanies said belief), keep in mind that an atheist need only hold a minimalist position on the issue (soft atheism), and therefore the entire burden of proof lies on the theist to support their extraordinary claim(s) with an extraordinary amount of evidentiary weight.  While I’m willing to justify a personal belief in hard atheism (the claim that “God does not exist”), the soft atheist need only point out that they lack a belief in God because no known proponent of theism has yet met the burden of proof for the extraordinary claim that “God does exist”.  As such, any justified moral theory of what one ought to do (above all else), including but certainly not limited to who one votes for, how we treat one another, and what fundamental rights we should have, must be grounded on claims of fact that have met their burden of proof.  Theism has not done this, and the theist can’t simply say “Prove God doesn’t exist”, since this would amount to demanding proof of a null hypothesis, which is not possible even in principle (a null hypothesis can only be rejected, not proven).  So rather than trying to unjustifiably shift the burden of proof onto the atheist, the theist must satisfy the burden of proof for their positive claim about the existence of a god(s).

A more general goal needed to save our a$$es from self-destruction is for more people to dabble in philosophy.  I argue that it should even become a core part of educational curricula (especially education on minimizing logical fallacies and cognitive biases, and education on moral psychology) to give us the best chance of living a life that is at least partially examined, through internal rational reflection and discourse with those willing to engage with us; to give us the best chance of surviving the existential crisis that humanity (and the many other species sharing this planet with us) is in.  We need more people to be encouraged to justify what they think they ought to do above all else.