Irrational Man: An Analysis (Part 4, Chapter 11: The Place of the Furies)

In my last post in this series on William Barrett’s Irrational Man, I examined some of the work of existential philosopher Jean-Paul Sartre, which concluded part 3 of Barrett’s book.  The final chapter, Ch. 11: The Place of the Furies, will be briefly examined here, and this will conclude my eleven-part post series.  I’ve enjoyed this very much, but it’s time to move on to new areas of interest, so let’s begin.

1. The Crystal Palace Unmanned

“The fact is that a good dose of intellectualism-genuine intellectualism-would be a very helpful thing in American life.  But the essence of the existential protest is that rationalism can pervade a whole civilization, to the point where the individuals in that civilization do less and less thinking, and perhaps wind up doing none at all.  It can bring this about by dictating the fundamental ways and routines by which life itself moves.  Technology is one material incarnation of rationalism, since it derives from science; bureaucracy is another, since it aims at the rational control and ordering of social life; and the two-technology and bureaucracy-have come more and more to rule our lives.”

Regarding the importance and need for more intellectualism in our society, I think this can be better described as the need for more critical thinking skills: the need for people to be able to discern fact from fiction, to recognize their own cognitive biases, and to respect the adage that extraordinary claims require extraordinary evidence.  At the same time, in order to appreciate the existentialist’s concerns, we ought to recognize that there are aspects of human psychology, including certain psychological needs, that are inherently irrational, with respect to how we structure our lives, how we express our emotions and creativity, how we maintain a balance in our psyche, and so on.  But since technology is not only a material incarnation of rationalism but also an outlet for our creativity, there has to be a compromise here: we shouldn’t abandon technology, but simply keep it in check so that it doesn’t impede our psychological health and our ability to live a fulfilling life.

“But it is not so much rationalism as abstractness that is the existentialists’ target; and the abstractness of life in this technological and bureaucratic age is now indeed something to reckon with.  The last gigantic step forward in the spread of technologism has been the development of mass art and mass media of communication: the machine no longer fabricates only material products; it also makes minds. (stereotypes, etc.).”

Sure enough, we’re living in a world where many of our occupations are but one of many layers of abstraction constituting our modern “machine of civilization”.  And the military-industrial complex that has taken over the modern world has certainly gone beyond the mass production of physical goods to be consumed by the populace, and now includes the mass production and viral dissemination of memes as well.  Ideas can spread like viruses, and in our current globally interconnected world (which Barrett hadn’t yet seen to the same degree when writing this book), the spread of these ideas is much faster and more influential on culture than ever before.  The degree of indoctrination, and the perpetuated cycles of co-dependence between citizens and the corporatocratic, sociopolitical forces ruling our lives from above, have made our way of life and our thinking much more collective and less personal than at any other time in human history.
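
To make the virality analogy a bit more concrete, here is a minimal, purely illustrative sketch (my own toy model, nothing from Barrett) of well-mixed contagion dynamics; it shows how higher connectivity dramatically accelerates how fast an idea saturates a population.  All parameter values here are arbitrary assumptions:

```python
def meme_spread(k: float, p: float, steps: int, seed: float = 1e-4) -> float:
    """Fraction of a population holding an idea after `steps` rounds,
    where each adopter exposes k contacts per round and each exposure
    converts a susceptible person with probability p (logistic growth)."""
    adopters = seed  # a single initial adopter, as a population fraction
    for _ in range(steps):
        adopters = min(1.0, adopters + adopters * k * p * (1 - adopters))
    return adopters

# A sparsely connected world vs. a globally interconnected one:
print(meme_spread(k=2, p=0.1, steps=30))   # ~0.02: the idea barely spreads
print(meme_spread(k=50, p=0.1, steps=30))  # ~1.0: near-total saturation
```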

“Kierkegaard condemned the abstractness of his time, calling it an Age of Reflection, but what he seems chiefly to have had in mind was the abstractness of the professorial intellectual, seeing not real life but the reflection of it in his own mind.”

Aside from the increasingly abstract nature of modern living, then, there’s also the abstractness that pervades our thinking about life, which detracts from our ability to actually experience life.  Kierkegaard had a legitimate point here: theory cannot be a complete substitute for practice, nor thought for action.  We certainly don’t want the reverse either, since action without sufficient forethought leads to foolishness and bad consequences.

I think the main point here is that we don’t want to miss out on living life by thinking about it too much.  While living a philosophically examined life is beneficial, it remains an open question exactly what balance is best for any particular individual to live the most fulfilling life.  In the meantime we ought to simply recognize that there is the potential for an imbalance, and to try our best to avoid it.

“To be rational is not the same as to be reasonable.  In my time I have heard the most hair-raising and crazy things from very rational men, advanced in a perfectly rational way; no insight or feelings had been used to check the reasoning at any point.”

If you ignore our biologically grounded psychological traits, or ignore the fact that there’s a finite range of sociological conditions for achieving human psychological health and well-being, then you can’t develop any theory that’s supposed to apply to humans and expect it to be reasonable or tenable.  I would argue that ignoring this subjective part of ourselves when making theories that are supposed to guide our behavior is irrational, at least within the greater context of aiming to have not only logically valid arguments but logically sound arguments as well.  But if we’re going to exclude the necessity for logical soundness from our conception of rationality, then the point is well taken.  Rationality is a key asset in responsible decision making, but it should be used in a way that relies on, or at least seeks to rely on, true premises, taking our psychology and sociology into account.

“The incident (making hydrogen bombs without questioning why) makes us suspect that, despite the increase in the rational ordering of life in modern times, men have not become the least bit more reasonable in the human sense of the word.  A perfect rationality might not even be incompatible with psychosis; it might, in fact, even lead to the latter.”

Again, I disagree with Barrett’s use or conception of rationality here, but semantics aside, his main point still stands.  I think that we’ve been primarily using our intelligence as a species to continue to amplify our own power and maximize our ability to manipulate the environment in any way we see fit.  But as our technological capacity advances, our cultural evolution is getting increasingly out of sync with our biological evolution, and we haven’t come anywhere close to sufficiently taking our psychological needs and limitations into account as we continue to engineer the world of tomorrow.

What we need is rationality that relies on true or at least probable premises, as this combination should actually lead a person to what is reasonable or likely to be reasonable.  I have no doubt that without a virtue such as reasonableness, rationality can become dangerous and destructive, but this is the case with every tool we use, whether rationality or any other capacity, mental or material; tools can always be harmful when misused.

“If, as the Existentialists hold, an authentic life is not handed to us on a platter but involves our own act of self-determination (self-finitization) within our time and place, then we have got to know and face up to that time, both in its (unique) threats and its promises.”

And rationality at the individual level should be employed to help reach this goal of self-finitization, so that we can balance the benefits of the collective with the freedom of each individual that makes up that collective.

“I for one am personally convinced that man will not take his next great step forward until he has drained to the lees the bitter cup of his own powerlessness.”

And when a certain kind of change is perceived as something to be avoided, the only way to go through with it is by perceiving the status quo as something that needs to be avoided even more, so that a foray out of our comfort zone is perceived as an actual improvement to our way of life.  But as long as we see ourselves as already all-powerful and masters over our domain, we won’t take any major leap in a new direction of self and collective improvement.  We need to come to terms with our current position in the modern world so that we can truly see what needs repair.  Whether or not we succumb to one or both of the two major existential threats facing our species, climate change and nuclear war, is contingent on whether or not we set our eyes on a new prize.

“Sartre recounts a conversation he had with an American while visiting in this country.  The American insisted that all international problems could be solved if men would just get together and be rational; Sartre disagreed and after a while discussion between them became impossible.  “I believe in the existence of evil,” says Sartre, “and he does not.” What the American has not yet become aware of is the shadow that surrounds all human Enlightenment.”

Once again, if rationality is accompanied by true premises that take our psychology into account, then international problems could be solved (or many of them at least), but Sartre is also right insofar as there are bad ideas that exist, and people who have built their lives around them.  It’s not enough to have people thinking logically, nor is some kind of rational idealism up to the task of dealing with human emotion, cognitive biases, psychopathy, and the other complications in human behavior that exist.

The crux of the matter is that some ideas hurt us and other ideas help us, with our perception of these effects being the main arbiter driving our conception of what is good and evil; but there’s also a disparity between what people think is good or bad for them and what is actually good or bad for them.  I think that one could very plausibly argue that if people really knew what was good or bad for them, then applying rationality (with true or plausible premises) would likely work to solve a number of issues plaguing the world at large.

“…echoing the Enlightenment’s optimistic assumption that, since man is a rational animal, the only obstacles to his fulfillment must be objective and social ones.”

And here’s an assumption that’s certainly difficult to ground, since it rests on a false premise: namely, that humans are inherently rational.  We are unique in the animal kingdom in the sense that we are the only animal (or one of only a few) with the capacity for rational thought, foresight, and the complex level of organization made possible by their use.  I also think that the obstacles to fulfillment are objective, since they can be described as facts pertaining to our psychology, sociology, and biology, even if our psychology (for example) is instantiated in a subjective way.  In other words, our subjectivity and conscious experiences are grounded on, or describable in, objective terms relating to how our particular brains function, how humans as a social species interact with one another, and so on.  But fulfillment can never be reached, let alone maximized, without taking our psychological traits and idiosyncrasies into account, for these are the ultimate constraints on what can make us happy, satisfied, fulfilled, and so on.

“Behind the problem of politics, in the present age, lies the problem of man, and this is what makes all thinking about contemporary problems so thorny and difficult…anyone who wishes to meddle in politics today had better come to some prior conclusions as to what man is and what, in the end, human life is all about…The speeches of our politicians show no recognition of this; and yet in the hands of these men, on both sides of the Atlantic, lies the catastrophic power of atomic energy.”

And as of 2018, we’ve seen the Doomsday Clock reach two minutes to midnight, having inched 30 seconds closer to our own destruction since 2017.  The dominance hierarchy being led and reinforced by the corporatocratic plutocracy is locked into a narrow form of tunnel vision, hell-bent on maintaining if not exacerbating the wealth and power disparities that plague our country and the world as a whole, despite the fact that this is not sustainable in the long run, nor best for the fulfillment of even those promoting it.

We the people do share a common goal of trying to live a good, fulfilling life; to have our basic needs met, and to have no fewer rights than anybody else in society.  You’d hardly know that this common ground exists between us when looking at the state of our political sphere, which is likely as polarized now in the U.S. (if not more so) as it was even during the Civil War.  Clearly, we have a lot of work to do to reevaluate what our goals and priorities ought to be, and we need a realistic and informed view of what it means to be human before any of these goals can be realized.

“Existentialism is the counter-Enlightenment come at last to philosophic expression; and it demonstrates beyond anything else that the ideology of the Enlightenment is thin, abstract, and therefore dangerous.”

Yes, but the Enlightenment has also been one of the main driving forces leading us out of theocracy and scientific illiteracy, and toward an appreciation of reason and evidence (something the U.S., at least, is in short supply of these days); thus it has been crucial in giving us the means for raising our standard of living and solving many of our problems.  While the technological advancements derived from the Enlightenment have also contributed to many of our problems, the current existential threats we face, including climate change and nuclear war, are more likely to be solved by new technologies than by an abolition of technology or of the Enlightenment brand of thinking that led to technological progress.  We simply need to better inform our technological goals with the actual needs and constraints of human beings, our psychology, and so on.

“The finitude of man, as established by Heidegger, is perhaps the death blow to the ideology of the Enlightenment, for to recognize this finitude is to acknowledge that man will always exist in untruth as well as truth.  Utopians who still look forward to a future when all shadows will be dispersed and mankind will dwell in a resplendent Crystal Palace will find this recognition disheartening.  But on second thought, it may not be such a bad thing to free ourselves once and for all from the worship of the idol of progress; for utopianism-whether the brand of Marx or Nietzsche-by locating the meaning of man in the future leaves human beings here and now, as well as all mankind up to this point, without their own meaning.  If man is to be given meaning, the Existentialists have shown us, it must be here and now; and to think this insight through is to recast the whole tradition of Western thought.”

And we ought to take our cue from Heidegger, at the very least, and admit that we are finite, that our knowledge is limited, and that it always will be.  We will not be able to solve every problem, and we would do ourselves and the world a lot of good by admitting our own limitations.  But to avoid being overly cynical and simply damning progress altogether, we need to look for new ways of solving our existential problems.  Part of the solution that I see for humanity moving forward is a combination of advancements in a few different fields.

By making better use of genetic engineering, we’ll one day have the ability to change ourselves in remarkable ways in order to become better adapted to our current world.  We will be able to re-sync our biological evolution with our cultural evolution so we no longer feel uprooted, like a fish out of water.  Continuing research in neuroscience will allow us to learn more about how our brains function and how to optimize that functioning.  Finally, the strides we make in computing and artificial intelligence should allow us to vastly improve our simulation power and arm us with greater intelligence for solving all the problems that we face.

Overall, I don’t see progress as the enemy; rather, we have an alignment problem between our current measures of progress and what will actually lead to maximally fulfilling lives.

“The realization that all human truth must not only shine against an enveloping darkness, but that such truth is even shot through with its own darkness may be depressing, and not only to utopians…But it has the virtue of restoring to man his sense of the primal mystery surrounding all things, a sense of mystery from which the glittering world of his technology estranges him, but without which he is not truly human.”

And if we actually use technology to change who we are as human beings, by altering the course of natural selection and our ongoing evolution (which is bound to happen with or without our influence, for better or worse), then it’s difficult to say what the future really holds for us.  There are cultural and technological forces that are leading to transhumanism, and this may mean that one day “human beings” (or whatever name is chosen for the new species that replaces us) will be inherently rational, or whatever else we’d like our future species to be.  We’ve stumbled upon the power to change our very nature, and so it’s far too naive, simplistic, unimaginative, and short-sighted to say that humans will “always” or “never” be one way or another.  Even if such a claim were true of us, it wouldn’t negate the fact that one day modern humans will be replaced by a superseding species with different qualities from those we have now.

2. The Furies

“…Existentialism, as we have seen, seeks to bring the whole man-the concrete individual in the whole context of his everyday life, and in his total mystery and questionableness-into philosophy.”

This is certainly an admirable goal, one that combats simplistic abstractions of what it means to be human.  We shouldn’t expect to be able to abstract a certain capacity of human beings (such as rationality), consider it in isolation (with no consideration of context), formulate a theory around that capacity, and then expect to get a result that is applicable to human beings as they actually exist in the world.  Furthermore, all of the uncertainties and complexities in our human experience, no matter how difficult they may be to define or describe, should be given their due consideration in any comprehensive philosophical system.

“In modern philosophy particularly (philosophy since Descartes), man has figured almost exclusively as an epistemological subject-as an intellect that registers sense-data, makes propositions, reasons, and seeks the certainty of intellectual knowledge, but not as the man underneath all this, who is born, suffers, and dies…But the whole man is not whole without such unpleasant things as death, anxiety, guilt, fear and trembling, and despair, even though journalists and the populace have shown what they think of these things by labeling any philosophy that looks at such aspects of human life as “gloomy” or “merely a mood of despair.”  We are still so rooted in the Enlightenment-or uprooted in it-that these unpleasant aspects of life are like Furies for us: hostile forces from which we would escape (the easiest way is to deny that the Furies exist).”

My take on all of this is simply that multiple descriptions of human existence are needed to account for all of our experiences, thoughts, values, and behavior.  And it is what we value given our particular subjectivity that needs to be primary in these descriptions, and primary with respect to how we choose to engineer the world we live in.  Who we are as a species is a complex mixture of good and bad, lightness and darkness, and stability and chaos; and we shouldn’t deny any of these attributes nor repress them simply because they make us uncomfortable.  Instead, we would do much better to face who we are head on, and then try to make our lives better while taking our limitations into account.

“We are the children of an enlightenment, one which we would like to preserve; but we can do so only by making a pact with the old goddesses.  The centuries-long evolution of human reason is one of man’s greatest triumphs, but it is still in process, still incomplete, still to be.  Contrary to the rationalist tradition, we now know that it is not his reason that makes man man, but rather that reason is a consequence of that which really makes him man.  For it is man’s existence as a self-transcending self that has forged and formed reason as one of its projects.”

This is well said, although I’d prefer to couch it in different terms: it is ultimately our capacity to imagine that makes us human and able to transcend our present selves by pointing toward a future self and a future world.  We do this in part by updating our models of the world, simulating new worlds, and trying to better understand the world we live in by engaging with it, re-shaping it, and finding ways of better predicting its causal structure.  Reason and rationality have been integral to updating our models of the world, but they’ve also been hijacked to some degree by a kind of supernormal stimulus, reinforced by technology and by our living in a world that is entirely alien to our evolutionary niche, and they have largely dominated our lives in a way that overlooks our individualism, emotions, and feelings.

Abandoning reason and rationality is certainly not the answer to this existential problem; rather, we just need to ensure that they’re being applied in a way that aligns with our moral imperatives and that they’re ultimately grounded on our subjective human psychology.

Irrational Man: An Analysis (Part 2, Chapter 5: “Christian Sources”)

In the previous post in this series on William Barrett’s Irrational Man, we explored some of the Hebraistic and Hellenistic contributions to the formation and onset of existentialism within Western thought.  In this post, I’m going to look at chapter 5 of Barrett’s book, which takes a look at some of the more specifically Christian sources of existentialism in the Western tradition.

1. Faith and Reason

We should expect there to be some significant overlap between Christianity and Hebraism, since the former began as a dissenting sect of the latter.  One concept worth looking at is the dichotomy of faith versus reason, where in Christianity the man of faith is held far above the man of reason.  Even though faith is heavily prized within Hebraism as well, there are still some differences between the two belief systems as they relate to faith, with these differences stemming from a number of historical contingencies:

“Ancient Biblical man knew the uncertainties and waverings of faith as a matter of personal experience, but he did not yet know the full conflict of faith with reason because reason itself did not come into historical existence until later, with the Greeks.  Christian faith is therefore more intense than that of the Old Testament, and at the same time paradoxical: it is not only faith beyond reason but, if need be, against reason.”

Once again we are brought back to Plato, and his concept of rational consciousness and the explicit use of reason; a concept that had never before been articulated or developed.  Christianity no longer had reason hiding under the surface or homogeneously mixed-in with the rest of our mental traits; rather, it was now an explicit and specific way of thinking and processing information that was well known and that couldn’t simply be ignored.  Not only did the Christian have to hold faith above this now-identified capacity of thinking logically, but they also had to explicitly reject reason if it contradicted any belief that was grounded on faith.

Barrett also mentions here the problem of trying to describe the concept of faith in a language of reason, and how the opposition between the two is particularly relevant today:

“Faith can no more be described to a thoroughly rational mind than the idea of colors can be conveyed to a blind man.  Fortunately, we are able to recognize it when we see it in others, as in St. Paul, a case where faith had taken over the whole personality.  Thus vital and indescribable, faith partakes of the mystery of life itself.  The opposition between faith and reason is that between the vital and the rational-and stated in these terms, the opposition is a crucial problem today.”

I disagree with Barrett to some degree here, as I don’t think it’s possible to recognize a quality in others (let alone a quality that others can recognize as well) that is entirely incommunicable.  On the contrary, if you and I can both recognize some quality, then we should be able to describe it, even if only in some rudimentary way.  Now this doesn’t mean that I think a description in words can do justice to every conception imaginable, but rather that, as human beings, we have an inherent ability to communicate fairly well that which is involved in common modes of living and in our shared experiences; even very complex concepts that have somewhat fuzzy boundaries.

It may be that a thoroughly rational mind will not appreciate faith in any way, nor think it of any use, nor have had any personal experience with it; but that doesn’t mean such a mind can’t understand the concept, or what a mode of living dominated by faith would look like.  I also don’t think it’s fair to say that the opposition between faith and reason is that between the vital and the rational, even though this may be a real view for some.  If by vital Barrett is alluding to what is essential to our psyche, then I don’t think he can be talking about faith, at least not in the sense of faith as belief without good reason.  But I’m willing to concede his point if instead by vital he is simply referring to an attitude that is spirited, vibrant, energetic, and thus involving some lively form of emotional expression.

He delves a little deeper into the distinction between faith and reason when he says:

“From the point of view of reason, any faith, including the faith in reason itself, is paradoxical, since faith and reason are fundamentally different functions of the human psyche.”

It seems that Barrett is guilty of a bit of equivocation here between two uses of the word faith.  I’d actually go so far as to say that the two uses of the word are best described in the way that Barrett himself alluded to in the last chapter: faith as trust and faith as belief.  On the one hand, we have faith as belief, where some belief or set of beliefs is maintained without good reason; and on the other hand, we have faith as trust, where, in the case of reason, our trust in reason is grounded on the fact that its use leads to successful predictions about the causal structure of our experience.  So there isn’t really any paradox at all when it comes to “faith” in reason, because such a faith need not involve adopting some belief without good reason; rather there is, at most, a trust in reason based on the demonstrable and replicable success it has provided us in the past.

Barrett is right, however, when he says that faith and reason are fundamentally different functions of the human psyche.  But this doesn’t mean that they are isolated from one another: for example, if one believes that having faith in God will help them to secure some afterlife in heaven, and if one also desires an afterlife in heaven, then the use of reason can actually be employed to reinforce a faith in God (as it did for me, back when I was a Christian).  This doesn’t mean that faith in God is reasonable, but rather that when certain beliefs are isolated or compartmentalized in the right way (even in the case of imagining hypothetical scenarios), reason can be used to process a logical relation between them, even if those beliefs are themselves derived from illogical cognitive biases.  To see that this is true, one need only realize that an argument can be logically valid even if its premises are false, and the argument therefore unsound.
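
Since validity is a purely formal property, it can even be checked mechanically, which makes the distinction vivid.  Here is a minimal sketch (my own illustration, nothing from Barrett; the function name and the propositional encoding are just assumptions for the example) that brute-forces a truth table: the faith-in-God argument above has a valid form, while its soundness is a separate question no truth table can settle:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument form is valid iff every truth-assignment that makes
    all the premises true also makes the conclusion true."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample assignment
    return True

# "If one has faith (F), one secures heaven (H); one has faith;
# therefore one secures heaven" -- modus ponens over F and H:
premises = [lambda e: (not e["F"]) or e["H"],  # F -> H
            lambda e: e["F"]]                  # F
conclusion = lambda e: e["H"]
print(is_valid(premises, conclusion, ["F", "H"]))  # True: the form is valid

# Soundness, i.e. whether F -> H and F are actually true of the world,
# is an empirical matter that logic alone cannot decide.
```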

To be as charitable as possible with the concept of faith (at least, one that is more broadly construed), I will make one more point that I think is relevant here: having faith as a form of positivity or optimism in one’s outlook on life, given the social and personal circumstances one finds oneself in, is perfectly rational and reasonable.  It is well known that people tend to be happier and have more fulfilling lives if they steer clear of pessimism and simply try to stay as positive as possible.  One primary reason for this has to do with what I have previously called socio-psychological feedback loops (what I would also call an evidence-based form of karma).  Thus, one can have an empirically demonstrable good reason to hold an attitude that is reminiscent of faith, yet without having to sacrifice an epistemology that is reliable and consistently tracks reality.

When it comes to the motivating factors that lead people to faith, existentialist thought can shed some light on what some of these factors may be.  Regarding Paul the Apostle, for example, whom Barrett mentioned earlier, Barrett says:

“The central fact for his faith is that Jesus did actually rise from the dead, and so that death itself is conquered-which is what in the end man most ardently longs for.”

The finite nature of man, not only with regard to the intellect but also with respect to our lives in general, is something that existentialism both recognizes and treats as an integral starting point to build from philosophically.  And I think it should come as no surprise that the fear of death in particular is likely to be a motivating factor for most if not all supernatural beliefs found within any number of religions, including the belief in resurrection from the dead (as in the case of St. Paul’s belief in Jesus), the belief in spirits, souls, or other disembodied minds, and any belief in an afterlife (including reincarnation).

We can see that faith is, largely anyway, a means of reducing the cognitive dissonance that results from existential elements of our experience; it is a means of countering or rejecting a number of facts surrounding the contingencies and uncertainties of our existence.  From this fact, we can then see that by examining what faith is compensating for, one can begin to extract what the existentialists eventually elaborated on.  In other words, within existentialism, it is fully accepted that we can’t escape death (for example), and that we have to face death head-on (among other things) in order to live authentically; therefore having faith in overcoming death in any way is simply a denial of at least one burden willingly taken on by the existentialist.

Within early Christianity, we can also see some parallels between an existentialist like Kierkegaard and an early Christian author like Tertullian.  Barrett gives us an excerpt from Tertullian’s De Carne Christi:

“The Son of God was crucified; I am unashamed of it because men must needs be ashamed of it.  And the Son of God died; it is by all means to be believed, because it is absurd.  And He was buried and rose again; the fact is certain because it is impossible.”

This sounds a lot like Kierkegaard, despite the sixteen centuries separating these two figures, and despite the fact that Tertullian was writing when Christianity was first expanding and at its most aggressive, while Kierkegaard was writing nearer the end of Christianity’s cultural dominance, when it had already drastically receded under the pressure of secularization and modernity.  In this quote of Tertullian’s, we can see a kind of submission to paradox, where it is because a particular proposition is irrational or unreasonable that it becomes an article of faith.  It’s not enough to say that faith is simply used to arrive at a belief that is unreasonable or irrational; rather, its attribute of violating reason is actually taken as a reason to use it and to believe that the claims of faith are in fact true.  So rather than merely seeing this as an example of irrationality, this actually makes for a good example of the kind of anti-rational attitudes that contributed to the existentialist revolt against some dominant forms of positivism; a revolt that didn’t really begin to precipitate until Kierkegaard’s time.

Although anti-rationalism is often associated with existentialism, I think one would be making an error to conflate the two or to assume that anti-rationalism is integral or necessary to existentialism; though it should be said that this is contingent on how exactly one defines both anti-rationalism (and by extension rationalism) and existentialism.  The two terms have a variety of definitions (as does positivism, with Barrett giving us one particularly narrow definition back in chapter one).  If by rationalism one is referring to the very narrow view that rationality is the only means of acquiring knowledge, or the view that the only thing that matters in human thought or in human life is rationality, or even the view that humans are fundamentally or primarily rational beings, then I think it is fair to say that this is largely incompatible with existentialism, since the latter is often defined as a philosophy positing that humans are not primarily rational, and that subjective meaning (rather than rationality) plays a much more primary role in our lives.

However, if rationalism is simply taken as the view that reason and rational thought are integral components (even if not the only components) for discovering much of the knowledge that exists about ourselves and the universe at large, then it is by all means perfectly compatible with most existentialist schools of thought.  But in order to remain compatible, some priority needs to be given to subjective experience in terms of where meaning is derived from, how our individual identity is established, and how we are to direct our lives and inform our system of ethics.

The importance of subjective experience, which became a primary assumption motivating existentialist thought, was also appreciated by early Christian thinkers such as St. Augustine.  In fact, Barrett goes so far as to say that the interiorization of experience and the primary focus on the individual over the collective was unknown to the Greeks and didn’t come about until Christianity had begun to take hold in Western culture:

“Where Plato and Aristotle had asked the question, What is man?, St. Augustine (in the Confessions) asks, Who am I?-and this shift is decisive.  The first question presupposed a world of objects, a fixed natural and zoological order, in which man was included; and when man’s precise place in that order had been found, the specifically differentiating characteristic of reason was added.  Augustine’s question, on the other hand, stems from an altogether different, more obscure and vital center within the questioner himself: from an acutely personal sense of dereliction and loss, rather than from the detachment with which reason surveys the world of objects in order to locate its bearer, man, zoologically within it.”

So rather than looking at man as some kind of well-defined abstraction within a categorical hierarchy of animals and objects, and one bestowed with a unique faculty of rational consciousness, man is looked at from the inside, from the perspective of, and identified as, an embodied individual with an experience that is deeply rooted and inherently emotional.  And in Augustine’s question, we also see an implication that man can’t be defined in a way that the Greeks attempted to do because in asking the question, Who am I?, man became separated from the kind of zoological order that all other animals adhered to.  It is the unique sense of self and the (often foreboding) awareness of this self then, as opposed to the faculty of reason, that Augustine highlighted during his moment of reflection.

Though Augustine was cognizant of the importance of exploring the nature of our personal, human existence (albeit for him, with a focus on religious experience in particular), he also explored aspects of human existence on the cosmic scale.  But when he went down this cosmic road of inquiry, he left the nature of personal lived existence by the wayside and instead went down a Neo-Platonist path of theodicy.  Augustine was after all a formal theologian, and one who tried to come to grips with the problem of evil in the world.  But he employed Greek metaphysics for this task, and tried to use logic mixed with irrational faith-based claims about the benevolence of God, in order to show that God’s world wasn’t evil at all.  Augustine did this by eliminating evil from his conception of existence such that all evil was simply a lack of being and hence a form of non-being and therefore non-existent.  This was supposed to be our consolation: ignore any apparent evil in the world because we’re told that it doesn’t really exist; all in the name of trying to justify the cosmos as good and thus to maintain a view that God is good.

This is such a shame in the end, for in Augustine’s attempt at theodicy, he simply downplayed and ignored the abominable and precarious aspects of our existence, which are as real a part of our experience as anything could be.  Perhaps Augustine was motivated to use rationalism here in order to feel comfort and safety in a world where many feel alienated and alone.  But as Barrett says:

“…reason cannot give that security; if it could, faith would be neither necessary nor so difficult.  In the age-old struggle between the rational and the vital, the modern revolt against theodicy (or, equally, the modern recognition of its impossibility) is on the side of the vital…”

And so once again we see that a primary motivation for faith is that many cannot get the consolation they long for through the use of reason.  They can’t psychologically deal with the way the world really is, and so they either use faith to believe the world is some other way, or they use reason along with some number of faith-based premises in evaluating our apparent place in the world.  St. Augustine actually thought that he could harmonize faith and reason (or what Barrett referred to as the vital and the rational), and this set the stage for the next thousand years of Christian thought.  Once faith claims became more explicit in the Church’s various articles of dogma, one was left to exercise rationality as one saw fit, as long as it remained within the confines of that faith and dogma.  And this of course led many medieval thinkers to mistake faith for reason (in many cases at least), reinforced by the fact that their use of reason was operating under faith-based premises.

The problematic relation between the vital and the rational didn’t disappear even after many a philosopher assumed the two forces were in complete harmony and agreement; instead the clash resurfaced in a debate between Voluntarism and Intellectualism, with our attention now turned toward St. Thomas Aquinas.  As an intellectualist, St. Thomas argued that the intellect in man is prior to the will because the intellect determines the will, since we can only desire what we know.  Duns Scotus, on the other hand, a voluntarist, responded that the will determines what ideas the intellect turns toward, and thus the will ends up determining what the intellect comes to know.

As an aside, I think that Scotus was more likely correct here, because while the intellect may inform the will (and thus help to direct it), there is something far more fundamental and unconscious at play that drives our engagement with the world as organisms evolved to survive.  For example, we have a curiosity for both novelty (seeking new information) and familiarity (a maximal understanding of incoming information), and basic biologically-driven desires for satisfaction and homeostasis.  These drives simply don’t depend on the intellect even though they can make use of it, and they will continue to operate even if the intellect does not.  I also disagree with St. Thomas’ claim that one can only desire that which one already knows, for one can desire knowledge for its own sake without knowing exactly what knowledge is yet to come (again from an independent drive, which we could call the will), and one can also desire that the world change from the way one currently knows it to be to some other, as yet unspecified, way (e.g. to desire a world without a specific kind of suffering, even though one may not know exactly what that would be like, having never lived such a life before).

The crux of the debate between the primacy of the will versus the intellect can perhaps be re-framed accordingly:

“…not in terms of whether will is to be given primacy over the intellect, or the intellect over the will-these functions being after all but abstract fragments of the total man-but rather in terms of the primacy of the thinker over his thoughts, of the concrete and total man himself who is doing the thinking…the fact remains that Voluntarism has always been, in intention at least, an effort to go beyond the thought to the concrete existence of the thinker who is thinking that thought.”

Here again I see the concept of an embodied and evolved organism and a reflective self at play, where the contents of consciousness cannot be taken on their own, nor as primary, but rather they require a conscious agent to become manifest, let alone to have any causal power in the world.

2. Existence vs. Essence

There is a deeper root to the problem underlying the debate between St. Thomas and Scotus and it has to do with the relation between essence and existence; a relation that Sartre (among others) emphasized as critical to existentialism.  Whereas the essence of a thing is simply what the thing is (its essential properties for example), the existence of a thing simply refers to the brute fact that the thing is.  A very common theme found throughout Sartre’s writings is the basic contention that existence precedes essence (though, for him, this only applies to the case of man).  Barrett gives a very simplified explanation of such a thesis:

“In the case of man, its meaning is not difficult to grasp.  Man exists and makes himself to be what he is; his individual essence or nature comes to be out of his existence; and in this sense it is proper to say that existence precedes essence.  Man does not have a fixed essence that is handed to him ready-made; rather, he makes his own nature out of his freedom and the historical conditions in which he is placed.”

I think this concept also makes sense when we put it into the historical context going back to ancient Greece.  Whereas the Greeks tried to place humanity within a zoological order and categorize us by some essential qualities, the existentialists who later came on the scene rejected such an idea almost as a kind of category error.  It is simply not possible to classify human beings as you could inanimate objects or other animals, which lack the behavioral complexity and the will to define themselves that we have.

Barrett also explains that the essence versus existence relation breaks down into two different but related questions:

1) Does existence have primacy over essence, or the reverse?

2) In actual existing things is there a real distinction between the two?  Or are they merely different points of view that the mind takes toward the same existing thing?

In an attempt at answering these questions, I think it’s reasonable to say that essence has more to do with potentiality (a logically possible set of abstractions or properties) and existence has more to do with actuality (a logically and physically possible past or present state of being).  I also think that existence is necessary in order for any essence to be real in any way, and not because an actual object needs to exist in order to instantiate a physical instance of some specific essence, but because any essence is merely an abstraction created by a conscious agent who inferred it in the first place.  Quite simply, there is no Platonic realm for the essence to “exist” within and so it needs an existing consciousness to produce it.  So I would say that existence has primacy over essence since an existent is needed for an essence to be conceived, let alone instantiated at all.

As for the second question, I think there is a real distinction between the two and that they are different points of view that the mind takes toward the same existing thing.  I don’t see this as an either/or dichotomy.  The mind can certainly acknowledge the existence of some object, animal, or person, without giving much thought to what its essence is (aside from its being distinct enough to consider it a thing), but it can never acknowledge the essence of an actual object, animal, or person without also acknowledging its existence.  There’s also something relevant to be said about how our minds differentiate between an actual perception and an imagined one, even between an actual life and an imagined one, or an actual person and an imagined one; something that we do all the time.

On the other side of this issue, within the debate between essentialism and existentialism, are the questions of whether or not there are any essential qualities of human beings that make them human, and if so, whether or not any of these essential qualities are fixed for human beings.  Most people are familiar with the idea of human nature, whereby there are some biologically-grounded innate predispositions that affect our behavior in some ways.  We are evolved animals after all, and the only way for evolution to work is to have genes providing different phenotypic effects on the body and behavior of a population of organisms.  Since about half of our own genes are used to code for our brain’s development, its basic structure, and functionality, it should seem obvious to anyone that we’re bound to have some set of innate behavioral tendencies as well as some biological limitations on the space of possibilities for our behavior.

Thus, any set of innate behavioral tendencies that are shared by healthy human beings would constitute our human nature.  And furthermore, this behavioral overlap would be a fixed attribute of human nature insofar as the genes coding for that innate behavioral overlap don’t evolve beyond a certain degree.  Human nature is simply an undeniable fact about us grounded in our biology.  If our biology changes enough, then we will eventually cease to be human, or if we still call ourselves “human” at that point in time then we will cease to have the same nature since we will no longer be the same species.  But as long as that biological evolution remains under a certain threshold, we will have some degree of fixity in our human nature.

But unlike most other organisms on earth, human beings have an incredibly complex brain as well as a number of culturally inherited cognitive programs that provide us with a seemingly infinite number of possible behaviors and subsequent life trajectories.  And I think this fact has fueled some of the disagreement between the existentialists and the essentialists.  The essentialists (or many of them at least) have maintained that we have a particular human nature, which I think is certainly true.  But, as per the existentialists, we must also acknowledge just how vast our space of possibilities really is and not let our human nature be the ultimate arbiter of how we define ourselves.  We have far more freedom to choose a particular life project and way of life than one would be led to believe given some preconceived notion of what is essentially human.

3. The Case of Pascal

Science brought about a dramatic change to Western society, effectively carrying us out of the Middle Ages within a century; but this change also created the very environment that made modern Existentialism possible.  Cosmology, for example, led the 17th-century mathematician and theologian Blaise Pascal to say that “The silence of these infinite spaces (outer space) frightens me.”  In this statement he was expressing the general reaction of humanity to the world that science had discovered; a world that made man feel homeless and insignificant.

As a religious man, Pascal was also seeking to reconcile this world discovered by science with his own faith.  This was a difficult task, but he found some direction in looking at the wretchedness of the human condition:

“In Pascal’s universe one has to search much more desperately to find any signposts that would lead the mind in the direction of faith.  And where Pascal finds such a signpost, significantly enough, is in the radically miserable condition of man himself.  How is it that this creature who shows everywhere, in comparison with other animals and with nature itself, such evident marks of grandeur and power is at the same time so feeble and miserable?  We can only conclude, Pascal says, that man is rather like a ruined or disinherited nobleman cast out from the kingdom which ought to have been his (his fundamental premise is the image of man as a disinherited being).”

Personally, I see this as indicative of our being an evolved species, albeit one with an incredibly rich level of consciousness and intelligence.  In some sense our minds’ inherent capacities have a potential far beyond what most humans ever come close to actualizing, and this may create a feeling of our having lost something that we deserve; some possible yet unrealized world that would make more sense for us to have, given the level of power and potential we possess.  And perhaps this lends itself to a feeling of angst or a lack of fulfillment as well.

As Pascal himself said:

“The natural misfortune of our mortal and feeble condition is so wretched that when we consider it closely, nothing can console us.” 

Furthermore, Barrett reminds us that Pascal pointed to the human tendency to escape from this close consideration of our condition through the two “sovereign anodynes” of habit and diversion:

“Both habit and diversion, so long as they work, conceal from man “his nothingness, his forlornness, his inadequacy, his impotence and his emptiness.”  Religion is the only possible cure for this desperate malady that is nothing other than our ordinary mortal existence itself.”

Barrett describes our state of cultivating various habits and distractions as a kind of concealment of the ever-narrowing space of possibilities in our lives, as each day passes us by with some set of hopes and dreams forever lost in the past.  I think it would be fair to say, however, that the psychological benefit we gain from at least many of our habits and distractions warrants our having them in the first place.  As an analogy, if one day my doctor informs me that I’m going to die from a terminal illness in a few months (thus drastically narrowing the future space of possibilities in my life), is there really a problem with my trying to keep my mind off such a morbid fate, so that I can make the most of the time I have left?  I certainly wouldn’t advocate burying one’s head in the sand, having irrational expectations, or living in denial; so I think my remaining time will be used best if I don’t completely forget that my timeline is coming to an end, because then I can more adequately appreciate that remaining time.

Moreover, I don’t agree that religion is the only possible cure for dealing with our “ordinary mortal existence”.  I think that psychological and moral development are the ultimate answers here, where a critical look at oneself and an ongoing attempt to become a better version of oneself in order to maximize one’s life fulfillment is key.  The world is however it happens to be, but that doesn’t change the fact that certain attitudes and behaviors toward that world can make our experience of it, as it truly is, better or worse.  Falling back on religion to console us is, to use Pascal’s own words, nothing but another habit and form of distraction, one where we sacrifice knowing and facing the truth of our condition for a kind of ignorant bliss that relies on false beliefs about the world.  And I don’t think we can live authentically if we don’t face the condition we’re in as a species, and more importantly as individuals, by trying to believe as many true things and as few false things as possible.

When it comes to the human mind and how we deal with or process information about our everyday lives or aspects of our overall condition, it can be a rather messy process involving very complicated concepts, many of which have fuzzy boundaries and so can’t be dealt with in the same way as analyses relying strictly on logic.  In some cases of analyzing our lives, we don’t know exactly what the premises could even be, let alone how to arrange them within a logical structure to arrive at some kind of certain conclusion.  And this differs a lot from other areas of inquiry like, say, mathematics.  As Pascal delved deeper into the study of our human condition, he realized this difference and posited an interesting distinction in terms of what the mind can comprehend in these different contexts: the mathematical mind and the intuitive mind.  Within a field of study such as mathematics, it is our mind’s ability to logically comprehend various distinct and well-defined ideas that we make use of, whereas in other, more concrete domains of our experience, we rely quite heavily on intuition.

Not only is logic not very useful in the more concrete and complex domains of our lives, but in many cases it may actually get in the way of seeing things clearly since our intuition, though less reliable than logic, is more up to the task of dealing with complexities that haven’t yet been parsed out into well-defined ideas and concepts.  Barrett describes this a little differently than I would when he says:

“What Pascal had really seen, then, in order to have arrived at this distinction was this: that man himself is a creature of contradictions and ambivalences such as pure logic can never grasp…By delimiting a sphere of intuition over against that of logic, Pascal had of course set limits to human reason. “

In the case of logic, one is certainly limited in its use when the concepts involved are probabilistic (rather than being well-defined or “black and white”) and there are significant limitations when our attitudes toward those concepts vary contextually.  We are after all very complex creatures with a number of competing goals and with a prioritization of those goals that varies over time.  And of course we have a number of value judgments and emotional dispositions that motivate our actions in often unpredictable ways.  So it seems perfectly reasonable to me that one should acknowledge the limits we have in our use of logic and reason.

Barrett makes another interesting claim about the limits of reason when he says:

“Three centuries before Heidegger showed, through a learned and laborious exegesis, that Kant’s doctrine of the limitations of human reason really rests on the finitude of our human existence, Pascal clearly saw that the feebleness of our reason is part and parcel of the feebleness of our human condition generally.  Above all, reason does not get at the heart of religious experience.”

And I think the last sentence is most important here: the limitations of reason include an inability to examine all that matters and all that is meaningful within religious experience.  Even though I am a kind of champion for reason, in the sense that I find it to be in short supply when it matters most, and in the sense that I often advertise the dire need for critical thinking skills and for more education to combat our cognitive biases, I have never assumed that reason could accomplish the arduous task of dissecting religious experience in some reductionist way while retaining or accounting for everything of value within it.  I have, however, used reason to analyze various religious beliefs and claims about the world in order to discover their consequences for, or incompatibilities with, a number of other beliefs.

Theologians do something similar in their attempts to prove the existence of God, by beginning with some set of presuppositions (about God or the supernatural), and then applying reason to those faith-based premises to see what would result if they were in fact true.  And Pascal saw this mental exercise as futile and misguided as well because it misses the entire point of religion:

“In any case, God as the object of a rigorous demonstration, even supposing such a demonstration were forthcoming, would have nothing to do with the living needs of religion.”

I admire the fact that Pascal was honest here about what really matters most when it comes to religion and religious beliefs.  It doesn’t matter whether or not God exists, or whether any particular religious claim is actually true or false; rather, it is the living need for religion that ultimately motivates one to adhere to it.  Thus, theology doesn’t really matter to him, because those who want to be convinced by the arguments for God’s existence will be convinced simply because they desire the conclusion, insofar as they believe it will give them consolation from the human condition they find themselves in.

To add something to the bleak picture of just how contingent our existence is, Barrett mentions a personal story that Pascal shared about his having had a brush with death:

“While he was driving by the Seine one day, his carriage suddenly swerved, the door was flung open, and Pascal almost catapulted down the embankment to his death.  The arbitrariness and suddenness of this near accident became for him another lightning flash of revelation.  Thereafter he saw Nothingness as a possibility that lurked, so to speak, beneath our feet, a gulf and an abyss into which we might tumble at any moment.  No other writer has expressed more powerfully than Pascal the radical contingency that lies at the heart of human existence-a contingency that may at any moment hurl us all unsuspecting into non-being…The idea of Nothingness or Nothing had up to this time played no role at all in Western philosophy…Nothingness had suddenly and drastically revealed itself to him.”

And here it is: the concept of Nothingness, which truly grounds the bulk of the philosophical constructs found within existentialism.  To add further to this, Pascal also saw that one’s inevitable death isn’t the whole picture of our contingency, but only a part of it; it is our having been born at a particular time and place, and within a certain culture, that vastly reduces the number of possible life trajectories one can take, and thus vastly reduces our freedom in establishing our own identity.  Our life begins with contingency and ends with it also.

Pascal also acknowledged our existence as lying in the middle of the universe, in terms of being between the infinitesimal and microscopic on one side and the seemingly infinite cosmological scale on the other.  Within this middle position, he saw man as being “an All in relation to Nothingness, a Nothingness in relation to the All.”  I think it’s also worth pointing out that it is our evolutionary history, and the “middle-position” we evolved within, that causes the drastic misalignment between the world we were in some sense meant to understand and the micro and cosmic scales we’ve discovered in the last several hundred years.

Aside from any feelings of alienation and homelessness that these different scales have precipitated, we simply don’t have an intuition that evolved to understand existence at the infinitesimal quantum level or at the cosmic relativistic level, which is why these discoveries in physics are difficult to understand even for experts in the field.  We should expect these discoveries to be vastly perplexing and counter-intuitive, for our consciousness evolved to deal with life at a very finite scale; yet another example of our finitude as human beings.

It was this very accumulation of information about our world through these kinds of discoveries, including major advancements made in mathematics, that also promoted the assumption that human nature could be perfected through the universal application of reason.

I’ll finish the post on this chapter with one final quote from Barrett that I found insightful:

“Poets are witnesses to Being before the philosophers are able to bring it into thought.  And what these particular poets were struggling to reveal, in this case, were the very conditions of Being that are ours historically today.  They were sounding, in poetic terms, the premonitory chords of our own era.”

It is the voice of the poets that we first hear lamenting the Enlightenment and what reason seems to have done, or at least what it had begun to do.  They were some of the most important precursors to the later existentialists and philosophers who would soon begin to process (in far more detail and with far more precision) the full effects of modernity on the human psyche.

Irrational Man: An Analysis (Part 1, Chapter 2: “The Encounter with Nothingness”)

In the first post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 1: The Advent of Existentialism, where Barrett gives a brief description of what he believes existentialism to be, and the environment it evolved within.  In this post, I want to explore Part I, Chapter 2: The Encounter with Nothingness.

Ch. 2 – The Encounter With Nothingness

Barrett talks about the critical need for self-analysis, despite the fact that many feel this task has already been accomplished and that we’ve carried out this analysis exhaustively.  But Barrett sees this as contemporary society’s running away from the facts of our own ignorance.  Modern humankind, it seems to Barrett, has even more need to question its identity, for we seem to understand ourselves even less than when we first began to question who we are as a species.

1. The Decline of Religion

Ever since the end of the Middle Ages, religion as a whole has been on the decline.  This decrease in religiosity (particularly in Western civilization) became most prominent during the Enlightenment.  As science began to take off, the mechanistic structure and qualities of the universe (i.e. its laws of nature) began to reveal themselves in more and more detail.  This in turn led to the replacement of a large number of superstitious and supernatural religious beliefs about the causes of various phenomena with scientific explanations that could be empirically tested and verified.  Throughout this process, as theological explanations were increasingly replaced with naturalistic ones, the presumed role of God and the Church began to evaporate.  Thus, at the purely intellectual level, we underwent a significant change in how we viewed the world and subsequently how we viewed the nature of human beings and our place in the world.

But, as Barrett points out:

“The waning of religion is a much more concrete and complex fact than a mere change in conscious outlook; it penetrates the deepest strata of man’s total psychic life…Religion to medieval man was not so much a theological system as a solid psychological matrix surrounding the individual’s life from birth to death, sanctifying and enclosing all its ordinary and extraordinary occasions in sacrament and ritual.”

We can see here how the role of religion has changed to some degree from medieval times to the present day.  Rather than simply being a set of beliefs that identifies a person with a particular group and that has soteriological, metaphysical, and ethical significance to the believer (as is more often the case in modern times), it used to be a complete system, a total solution for how one was to live one’s life.  And it also provided a means of psychological stability and coherence by supplying a ready-made narrative of the essence of man; a sense of familiarity and a pre-defined purpose and structure that didn’t have to be constructed from scratch by the individual.

While the loss of the Church involved losing an entire system of dogmatic teachings, symbols, and various rites and sacraments, the most important loss according to Barrett was the loss of a concrete connection to a transcendent realm of being.  We were now set free such that we had to grapple with the world on our own, with all its precariousness, and to deal head-on with the brute facts of our own existence.

What I find most interesting in this chapter is when Barrett says:

“The rationalism of the medieval philosophers was contained by the mysteries of faith and dogma, which were altogether beyond the grasp of human reason, but were nevertheless powerfully real and meaningful to man as symbols that kept the vital circuit open between reason and emotion, between the rational and non-rational in the human psyche.”

And herein lies the crux of the matter; for Barrett believes that religion’s greatest function historically was its serving as a bridge between the rational and non-rational elements of our psychology, and also its serving as a barrier that limited the effective reach and power of our rationality over the rest of our psyches and our view of the world.  I would go even further to suggest that it may have allowed our emotional expression to more harmoniously co-exist and work with our reason instead of primarily being at odds with it.

I agree with Barrett’s point here in that religion often promulgates ideas and practices that appeal to many of our emotional dispositions and intuitions, thus allowing people to express certain emotional states and to maintain comforting intuitions that might otherwise be hindered or subjugated by reason and rationality.  And it has also provided a path for reason to connect to the unreasonable to some degree, minimizing the need to compartmentalize rationality from the emotional or irrational influences on a person’s belief systems.  By granting people an opportunity to combine reason and emotion, where reason could be used to make some sense of emotion and give it some kind of validation without having to be rejected completely, religion has historically been effective in helping people avoid the discomfort of rejecting beliefs they know to be reasonable while also avoiding the discomfort of inadequate emotional and non-rational expression.

Once religion began to go by the wayside, due in large part to the accumulated knowledge acquired through reason and scientific progress, it became increasingly difficult to square the evidence and arguments that were becoming more widely known with many of the claims that religion and the Church had been propagating for centuries.  Along with this growing loss of credibility came an increased need to compartmentalize reason and rationality from emotionally and irrationally-derived beliefs and experiences.  This difficulty led to a decline in religiosity for many, accompanied by the loss of the emotional and non-rational expression that religion had once offered the masses.  Once reason and rationality expanded beyond a certain threshold, they effectively popped the religious bubble that had previously contained them, causing many to begin to feel homeless, out of place, and in many ways incomplete in the new world they now found themselves living in.

2. The Rational Ordering of Society

Historically at least, the organization of our lives was a relatively organic process, with a kind of self-guiding, pragmatic, and intuitive structure and evolution.  But once we approached the modern age, as Barrett points out, we saw a drastic shift toward an increasingly rational form of organization, where efficiency and a sort of technical precision began to dominate the overall direction of society and the lives of each individual.  The rise of capitalism was a part of this cultural evolutionary process (as was, I would argue, the Industrial Revolution), and it only further enhanced the power and influence of reason and rationality over our day-to-day lives.

The collectivization and distribution of labor involved in the mass production of commodities and various products had taken us entirely out of our agrarian and hunter-gatherer roots.  We no longer lived off of the land so to speak, and were no longer ensconced within the kinds of natural scenery while performing the types of day-to-day tasks that our species had adapted to over its long-term evolutionary history.  And with this collectivization, we also lost a large component of our individuality; a component that is fairly important in human psychology.

Barrett comments on how we’ve accepted modern society as normal, relatively unaware of our ancestral roots and our previous way of life:

“We are so used to the fact that we forget it or fail to perceive that the man of the present day lives on a level of abstraction altogether beyond the man of the past.”

And he goes on to talk about how our ratcheting forward in technological progress and mechanistic societal re-structuring is what gives us our incredible power over our environment, but at the cost of feeling rootless and lacking any concrete sense of feeling, at a time when it’s needed more than ever.

Perhaps a more interesting point he makes is with respect to how our increased mechanization and collectivization have changed a fundamental part of how our psyche and identity operate:

“Not only can the material wants of the masses be satisfied to a degree greater than ever before, but technology is fertile enough to generate new wants that it can also satisfy…All of this makes for an extraordinary externalization of life in our time.”

And it is this externalization of our identity and psychology, manifested in ever-growing and ever-changing sets of material objects and information flow, that is interesting to ponder over.  It reminds me somewhat of Richard Dawkins’ concept of an extended phenotype, where the effects of an organism’s genes aren’t merely limited to the organism’s body, but rather they extend into how the organism structures its environment. While this term is technically limited to behaviors that have a direct bearing on the organism’s survival, I prefer to think of this extended phenotype as encompassing everything the organism creates and the totality of its behaviors.

The reason I mention this concept is because I think it makes for a useful analogy here.  In the earlier evolution of organisms on earth, the genes’ effects (the resulting phenotypes) were primarily manifested as the particular body and bodily behavior of the organism.  As organisms became more complex (particularly those that evolved brains and nervous systems), that phenotype began to extend itself into the organism’s ability to make external structures out of raw materials found in its environment, and more and more genetic resources were allotted to the organism’s brain, which made this capacity for environmental manipulation possible.  As this change occurred, the previous boundaries that defined the organism vanished, as the external constructions effectively became an extension of the organism’s body.  But with this new capacity came a loss of intimacy, in the sense that the organism wasn’t connected to these external structures in the same way it was connected to its own feelings and internal bodily states; and these external structures also lacked the privacy and hidden qualities inherent in an organism’s thoughts, feelings, and overall subjective experience.

Likewise, as we’ve evolved culturally, eventually gaining the ability to construct and mass-produce a plethora of new material goods, we began to dedicate a larger proportion of our attention to these different external objects, wanting more and more of them well past what we needed for survival.  And we began to invest or incorporate more of ourselves, including our knowledge and information, in these externalities, forcing us to compensate by investing less and less in our internal, private states and locally stored knowledge.  Now, it would be impractical if not impossible for an organism to perform increasingly complex behaviors and to continuously increase its ability to manipulate its own environment without this kind of trade-off occurring in terms of its identity and how it distributes its limited psychological resources.

And herein lies the source of our seemingly fractured psyche: the natural selection of curiosity, knowledge accumulation, and behavioral complexity for survival purposes has been co-opted for just about any purpose imaginable, since the hardware and schema required for these capacities transcend their evolutionary purpose, along with the finite boundaries, the guiding constraints, and the essential structure of the lives we once had in our evolutionary past.  We’ve now moved beyond what used to be a kind of essential quality and highly predictable trajectory of our lives and of our species, and into the unknown; from the realm of the finite and the familiar to the seemingly infinite realm of the unknown.

A big part of this externalization has manifested itself in the new ways we acquire, store, and share information, such as with the advent of mass media.  As Barrett puts it:

“…journalism enables people to deal with life more and more at second hand.  Information usually consists of half-truths, and “knowledgability” becomes a substitute for real knowledge.  Moreover, popular journalism has by now extended its operations into what were previously considered the strongholds of culture-religion, art, philosophy…It becomes more and more difficult to distinguish the secondhand from the real thing, until most people end by forgetting there is such a distinction.”

I think this ties well into what Barrett mentioned previously when he talked about how modern civilization is built on increasing levels of abstraction.  The very information we’re absorbing, in order to make sense of and deal with a large aspect of our contemporary world, is second hand at best.  The information we rely on has become increasingly abstracted, manipulated, reinterpreted, and distorted.  The origin of so much of this information is now at least one level away from our immediate experience, giving it a quality that is disconnected, less important, and far less real than it otherwise would be.  But we often forget that there’s any epistemic difference between our first-hand lived experience and the information that arises from our mass media.

To add to Barrett’s previous description of existentialism as a reaction against positivism, he also mentions Karl Jaspers’ view of existentialism, which Jaspers described as:

“…a struggle to awaken in the individual the possibilities of an authentic and genuine life, in the face of the great modern drift toward a standardized mass society.”

Though I concur with Jaspers’ claim that modernity has involved a shift toward a standardized mass society in a number of ways, I also think it has provided the means for many more ways of being unique, many more possible talents and interests for one to explore, and many more kinds of goals to choose from for one’s life project(s).  The collectivization and distribution of labor, and the technology that has precipitated from them, have allowed many to avoid spending all day hunting and gathering food, making or cleaning their clothing, and performing other tasks that had previously consumed most of one’s time.

Now many people (in the industrialized world at least) have the ability to accumulate enough free time to explore many other types of experiences, including reading and writing, exploring aspects of our existence with highly-focused introspective effort (as in philosophy), creating or enjoying vast quantities of music and other forms of art, listening to and telling stories, playing any number of games, and thousands of other activities.  And even though some of these activities have been around for millennia, many of them have not (or there was little time for them), and of those that have been around the longest, there were still far fewer choices than what we have on offer today.  So we mustn’t forget that many people develop a life-long passion for at least some of these experiences that would never have been made possible without our modern society.

The issue, I think, lies in the balance or imbalance between standardization and collectivization on the one hand (such that we reap the benefits of more free time and more recreational choices), and opportunities for individualistic expression on the other.  And of the opportunities that exist for individualistic expression, there is still the need to track the psychological consequences that result from them, so that we can pick more fulfilling options; choices that better allow us to keep open that channel between reason and emotion, and between the rational and the non-rational/irrational, which religion once provided, as Barrett mentioned earlier.

We also have to accept the fact that the findings of science have largely dehumanized nature, or at least inhibited its anthropomorphization, instead showing us that the universe is indifferent to us and to our goals; that humans and life in general are more of an aberration than anything else within a vast cosmos that is inhospitable to life.  Only after acknowledging the situation we’re in can we fully appreciate the consequences that modernity has had on upending the comforting structure that religion once gave to humans throughout their lives.  As Barrett tells us:

“Science stripped nature of its human forms and presented man with a universe that was neutral, alien, in its vastness and force, to his human purposes.  Religion, before this phase set in, had been a structure that encompassed man’s life, providing him with a system of images and symbols by which he could express his own aspirations toward psychic wholeness.  With the loss of this containing framework man became not only a dispossessed but a fragmentary being.”

Although we can’t unlearn the knowledge that has caused religion to decline, including that which bears on questions of our ultimate meaning and purpose, we can certainly find new ways of filling the psychological void felt by many as a result of this decline.  The modern world has many potential opportunities for psychologically fulfilling projects in life, and these opportunities need to be more thoroughly explored.  But existentialist thought rightly reminds us that the fruits of our rational and enlightened philosophy have been less capable of providing as satisfying an answer to the question “What are human beings?” as religion once did.  Along with the fruits of the Enlightenment came a lack of consolation, and a number of painful truths pertaining to the temporal and contingent nature of our existence, previously thought to possess both an eternal and necessary character.  Overall, this cultural change and accumulation of knowledge effectively forced humanity out of its comfort zone.  Barrett described the situation quite well when he said:

“In the end, he [modern man] sees each man as solitary and unsheltered before his own death.  Admittedly, these are painful truths, but the most basic things are always learned with pain, since our inertia and complacent love of comfort prevent us from learning them until they are forced upon us.”

Modern humanity, then, has become alienated from God, from nature, and from the complex social machinery that produces the goods and services that we both want and need.  Even worse, however, is the alienation from one’s own self that has occurred as humans have found themselves living in a society that expects each of them to perform some specific function in life (most often not of their choosing), leading society to effectively identify each person with this function, forgetting or ignoring the real person buried underneath.

3. Science and Finitude

In this last section of chapter two, Barrett discusses some of the ultimate limitations in our use of reason and in the scope of knowledge we can obtain from our scientific and mathematical methods of inquiry.  Reason itself is described as the product of a creature “whose psychic roots still extend downward into the primeval soil,” and is thus the creation of an animal that is still intimately connected to an irrational foundation; an animal still possessing a set of instincts arising from its evolutionary origins.

We see this core presence of irrationality in our day-to-day lives whenever we have trouble employing reason, such as when it conflicts with our emotions and intuitions.  And Barrett brings up the optimism of the confirmed rationalist, who believes they may still be able to one day overcome all of these obstacles of irrationality by simply employing reason in a more clever way than before.  Clearly Barrett doesn’t share this optimism, and he tries to support his pessimism by pointing out a few limitations of reason as suggested within the work of Immanuel Kant, modern physics, and mathematics.

In his Critique of Pure Reason, Kant lays out a substantial number of claims and concepts relating to metaphysics and epistemology, where he discusses the limitations of both reason and the senses.  Among other things, he claims that our reason is limited by certain a priori intuitional forms or predispositions (about space and time, for example) that allow us to cognize any thing at all.  And so if there are any “things in themselves”, that is, real objects underlying whatever appears to us in the external world, where these objects have qualities and attributes that are independent of our experience, then we can never know anything of substance about these underlying features.  We can never see an object or understand it in any way without incorporating a large number of intuitional forms and assumptions in order to create that experience at all; we can never see the world without seeing it through the limited lens of our minds and our body’s sensory machinery.

For Kant, this also means that we may often use reason erroneously to make unjustified claims of knowledge pertaining to the transcendent, particularly within metaphysics and theology.  When reason is applied to ideas that can’t be confirmed through sensory experience, or that lie outside our realm of possible experience (such as the idea that cause and effect laws govern every interaction in the universe, something we can never know through experience), it leads to knowledge claims that it can’t justify.  Another limitation of reason, according to Kant, is that it operates through an a priori assumption of unification in our experience, and so the categories and concepts that we infer to exist based on reason are limited by this underlying unifying principle.  Science has added to the rigidity and specificity of this unification by going beyond what we’ve unified through unmodified experience, that is, experience without the use of instruments like telescopes and microscopes (e.g. seeing the sun “rise” and “set” every day, where the use of these instruments has given us more data showing that the earth rotates on an axis rather than the sun revolving around the earth).  Nevertheless, unification is the ultimate goal of reason, whether applied with or without a strict scientific method.

Then within physics, we find another limitation in terms of our possible understanding of the world.  Heisenberg’s Uncertainty Principle showed us that we are unable to know complementary parameters pertaining to a particle with an arbitrarily high level of precision.  We eventually hit a limit where, for example, if we know the position of a particle (such as an electron) with a high degree of accuracy at some particular time, then we’re unable to know the momentum of that particle with the same accuracy.  The more we know about one complementary parameter, the less we know about the other.  Barrett describes this discovery in physics as showing that nature may be irrational and chaotic at its most fundamental level and then he says:

“What is remarkable is that here, at the very farthest reaches of precise experimentation, in the most rigorous of the natural sciences, the ordinary and banal fact of our human limitations emerges.”

This is indeed interesting because for a long while many rationalists and positivists held that our potential knowledge was unlimited (or finite, yet complete) and that science was the means to gain this open-ended or complete knowledge of the universe.  Then quantum physics delivered a major blow to this assumption, showing us instead that our knowledge is inherently limited based on how the universe is causally structured.  However, it’s worth pointing out that this is not a human limitation per se, but a limitation of any conscious system acquiring knowledge in our universe.  It wouldn’t matter if humans had different brains, better technology, or used artificial intelligence to perform the computations or measurements.  There is simply a limit on what can ever be measured; a limit on what can ever be known about the state of any system that is being studied.
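For reference, the quantitative statement of the principle (a standard textbook formulation, not Barrett’s) bounds the product of the uncertainties in position and momentum:

$$ \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2} $$

where $\hbar$ is the reduced Planck constant.  No improvement in instrumentation or computation can push both uncertainties below this bound, which is why the limitation belongs to any knowledge-acquiring system and not to human observers in particular.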

Gödel’s Incompleteness Theorems likewise showed that there are inherent limitations in any sufficiently rich formulation of number theory: no matter how large or seemingly complete its set of axioms is, if the formulation is consistent, then there will be statements that can’t be proven or disproven within it.  Likewise, the consistency of any such formulation of number theory can’t be proven by the formulation itself; rather, it depends on assumptions that lie outside of that formulation.  However, once again, contrary to what Barrett claims, I don’t think this should be taken as a human limitation of reason, but rather a limitation of mathematics generally.
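Stated a bit more formally (a standard modern phrasing, not Barrett’s own): for any consistent, effectively axiomatizable theory $T$ containing enough arithmetic, there is a sentence $G_T$ such that

$$ T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T $$

and, by the second theorem, $T \nvdash \mathrm{Con}(T)$, where $\mathrm{Con}(T)$ is the arithmetized statement of $T$’s own consistency.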

Regardless, I concede Barrett’s overarching point that, in all of these cases (Kant, physics, and mathematics), we keep running into scenarios where we are unable to do or know something that we may have previously thought we could do or know, at least in principle.  And so these discoveries did run counter to the beliefs of many who thought that humans were inherently unstoppable in these endeavors of knowledge accumulation and in our ability to create technology capable of solving any problem whatsoever.  We can’t do this.  We are finite creatures with finite brains, but perhaps more importantly, our capacities are also limited by what knowledge is theoretically available to any conscious system trying to better understand the universe.  The universe is inherently unknowable in at least some respects, which means it is unpredictable in at least some respects.  And unpredictability doesn’t sit well with humans intent on eliminating uncertainty, eliminating the unknown, and trying to have a coherent narrative to explain everything encountered throughout one’s existence.

In the next post in this series, I’ll explore Irrational Man, Part 1, Chapter 3: The Testimony of Modern Art.

It’s Time For Some Philosophical Investigations

Ludwig Wittgenstein’s Philosophical Investigations is a nice piece of work where he attempts to explain his views on language and the consequences of this view on various subjects like logic, semantics, cognition, and psychology.  I’ve mentioned some of his views very briefly in a couple of earlier posts, but I wanted to delve into his work in a little more depth here and comment on what strikes me as most interesting.  Lately, I’ve been looking back at some of the books I’ve read from various philosophers and have been wanting to revisit them so I can explore them in more detail and share how they connect to some of my own thoughts.  Alright…let’s begin.

Language, Meaning, & Their Probabilistic Attributes

He opens his Philosophical Investigations with a quote from St. Augustine’s Confessions that describes how a person learns a language.  St. Augustine believed that this process involved simply learning the names of objects (for example, by someone else pointing to the objects that are so named) and then stringing them together into sentences, and Wittgenstein points out that this is true to some trivial degree but that it overlooks a much more fundamental relationship between language and the world.  For Wittgenstein, the meaning of a word cannot simply be attached to an object the way a name can.  The meaning of a word or concept has much more of a fuzzy boundary, as it depends on a breadth of context or associations with other concepts.  He analogizes this plurality in the meanings of words with the resemblance between members of a family.  While there may be some resemblance between different uses of a word or concept, we aren’t able to formulate a strict definition to fully describe this resemblance.

One problem then, especially within philosophy, is that many people assume that the meaning of a word or concept is fixed with sharp boundaries (just like the fixed structure of words themselves).  Wittgenstein wants to disabuse people of this false notion (much as Nietzsche tried to do before him) so that they can stop misusing language, as he believed that this misuse was the cause of many (if not all) of the major problems that had cropped up in philosophy over the centuries, particularly in metaphysics.  Since meaning is actually somewhat fluid and can’t be accounted for by any fixed structure, Wittgenstein thinks that any meaning we can attach to these words is ultimately going to be determined by how those words are used.  Since he ties meaning with use, and since this use is something occurring in our social forms of life, it has an inextricably external character.  Thus, the only way to determine whether someone else has a particular understanding of a word or concept is through their behavior, in response to or in association with the use of the word(s) in question.  This is especially important in the case of ambiguous sentences, which Wittgenstein explores to some degree.

Probabilistic Shared Understanding

Some of what Wittgenstein is trying to point out here are what I like to refer to as the inherently probabilistic attributes of language.  And it seems to me to be probabilistic for a few different reasons, beyond what Wittgenstein seems to address.  First, there is no guarantee that what one person means by a word or concept exactly matches the meaning from another person’s point of view.  But at the very least there is almost always going to be some amount of semantic overlap (possibly 100% in some cases) between the two individuals’ intended meanings, and so there is going to be some probability that the speaker and the listener do in fact share a complete understanding.  It seems reasonable to argue that simpler concepts will have a higher probability of complete semantic overlap, whereas more complex concepts are more likely to leave a gap in that shared understanding.  And I think this is true even if we can’t actually calculate what any of these probabilities are.
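To make this more concrete, here’s a toy sketch in Python (my own illustration, not anything Wittgenstein proposed): each person’s grasp of a concept is modeled as a mapping from features to association strengths, and the overlap between two such mappings stands in for the probability of a shared understanding.  The feature names, weights, and the `semantic_overlap` helper are all invented for the example:

```python
def semantic_overlap(concept_a: dict[str, float], concept_b: dict[str, float]) -> float:
    """Weighted-Jaccard overlap (0..1) between two feature->strength mappings."""
    features = set(concept_a) | set(concept_b)
    shared = sum(min(concept_a.get(f, 0.0), concept_b.get(f, 0.0)) for f in features)
    total = sum(max(concept_a.get(f, 0.0), concept_b.get(f, 0.0)) for f in features)
    return shared / total if total else 0.0

# A simple concept can overlap completely between two speakers...
alice_four = {"count-of-4": 1.0}
bob_four = {"count-of-4": 1.0}

# ...while a complex one almost guarantees some gap in shared understanding.
alice_love = {"attachment": 0.9, "desire": 0.6, "sacrifice": 0.4}
bob_love = {"attachment": 0.8, "desire": 0.9, "duty": 0.5}

print(semantic_overlap(alice_four, bob_four))  # 1.0
print(semantic_overlap(alice_love, bob_love))  # ~0.52
```

The point isn’t that the brain literally computes a Jaccard overlap; it’s just one way to picture why simpler concepts leave little room for divergence while more complex ones rarely match completely.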

Now my use of the word meaning here differs from Wittgenstein’s because I am referring to something that is not exclusively shared by all parties involved and I am pointing to something that is internal (a subjective understanding of a word) rather than external (the shared use of a word).  But I think this move is necessary if we are to capture all of the attributes that people explicitly or implicitly refer to with a concept like meaning.  It seems better to compromise with Wittgenstein’s thinking and refer to the meaning of a word as a form of understanding that is intimately connected with its use, but which involves elements that are not exclusively external.

We can justify this version of meaning through an example.  If I help teach you how to ride a bike and explain that this activity is called biking or to bike, then we can use Wittgenstein’s conception of meaning and it will likely account for our shared understanding; I have no qualms about that, and I’d agree with Wittgenstein that this is perhaps the most important function of language.  But it may be the case that you come away with an understanding that biking is an activity that can only happen on Tuesdays, because that happened to be the day I helped teach you how to ride a bike.  Though I never intended for you to understand biking in this way, there was no immediate way for me to infer that you had this misunderstanding on the day I was teaching you.  I could only learn of it if you explicitly explained your understanding of the term in enough detail, or if I asked you additional questions, like whether you’d like to bike on a Wednesday (for example), with you answering “I don’t know how to do that; that doesn’t make any sense to me.”  Wittgenstein doesn’t account for this gap in understanding in his conception of meaning, and I think this makes for a far less useful conception.

Now I think that Wittgenstein is still right in the sense that the only way to determine someone else’s understanding (or lack thereof) of a word is through their behavior, but due to chance as well as where our attention is directed at any given moment, we may never see the right kinds of behavior to rule out any number of possible misunderstandings, and so we’re apt to just assume that these misunderstandings don’t exist because language hides them to varying degrees.  But they can and in some cases do exist, and this is why I prefer a conception of meaning that takes these misunderstandings into account.  So I think it’s useful to see language as probabilistic in the sense that there is some probability of a complete shared understanding underlying the use of a word, and thus there is a converse probability of some degree of misunderstanding.

Language & Meaning as Probabilistic Associations Between Causal Relations

A second way language is probabilistic is that the unique meanings associated with any particular use of a word or concept, as understood by an individual, are derived from probabilistic associations between various inferred causal relations.  And I believe this is the underlying cause of most of the problems that Wittgenstein was trying to address in this book.  He may not have been thinking about the problem in this way, but it can account for the fuzzy boundary problem associated with the task of trying to define the meaning of words, since this probabilistic structure underlies our experiences, our understanding of the world, and our use of language such that it can’t be represented by static, sharp definitions.  When a person is learning a concept, like redness, they accumulate a number of observations and infer what is common to all of those experiences, and then they extract a particular subset of what is common and associate it with a word like red or redness (as opposed to another commonality like objectness, or roundness, or what-have-you).  But in most cases, separate experiences of redness are going to be different instantiations of redness with different hues, textures, shapes, etc., which means that redness gets associated with a large range of different qualia.

If you come across a new quale that seems to more closely match previous experiences associated with redness rather than orangeness (for example), I would argue that this is because the brain has assigned a higher probability to that quale being an instance of redness as opposed to, say, orangeness.  And the brain may very well test a hypothesis of the quale matching the concept of redness versus the concept of orangeness; depending on your previous experiences of both, and the present context of the experience, your brain may assign a higher probability to orangeness instead.  Perhaps if a red-orange colored object is mixed in with a bunch of unambiguously orange-colored objects, it will be perceived as a shade of orange (to match it with the rest of the set), but if the case were reversed and it were mixed in with a bunch of unambiguously red-colored objects, it will be perceived as a shade of red instead.
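Here’s a toy Python sketch of that idea (my own illustration; the hue values, category means, spreads, and priors are all invented for the example), where the surrounding scene shifts the prior over “red” versus “orange” and thereby flips the perceived category of the very same hue:

```python
import math

def gaussian(x: float, mean: float, std: float) -> float:
    """Gaussian likelihood density."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def perceived_category(hue: float, prior_red: float) -> str:
    """Compare posteriors for 'red' vs 'orange' given a hue (0 = red, 30 = orange)."""
    posterior_red = gaussian(hue, mean=0.0, std=10.0) * prior_red
    posterior_orange = gaussian(hue, mean=30.0, std=10.0) * (1.0 - prior_red)
    return "red" if posterior_red > posterior_orange else "orange"

ambiguous_hue = 15.0  # exactly between the two category means

# Surrounded by unambiguously red objects, context raises the prior on red...
print(perceived_category(ambiguous_hue, prior_red=0.7))  # red
# ...surrounded by orange objects, the very same hue reads as orange.
print(perceived_category(ambiguous_hue, prior_red=0.3))  # orange
```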

Since our perception of the world depends on context, the meanings we assign to words or concepts also depend on context, not only in the sense of choosing a different use of a word (as Wittgenstein argues) in some language game or other, but also by perceiving the same incoming sensory information as conceptually different qualia (as in the aforementioned case of a red-orange colored object).  In that case, we weren’t intentionally using red or orange in a different way, but rather were assigning one word or the other to the exact same sensory information (with respect to the red-orange object), depending on what else was happening in the scene surrounding that subset of sensory information.  To me, this highlights how meaning can be fluid in multiple ways, some of that fluidity stemming from our conscious intentions and some of it from unintentional forces at play, involving our prior expectations within some context, which directly modify our perceived experience.

This can also be seen through Wittgenstein’s example of what he calls a duckrabbit, an ambiguous image that can be perceived as a duck or a rabbit.  I’ve taken the liberty of inserting this image here along with a copy of it which has been rotated in order to more strongly invoke the perception of a rabbit.  The first image no doubt looks more like a duck and the second image, more like a rabbit.

Now Wittgenstein says that when one is looking at the duckrabbit and sees a rabbit, they aren’t interpreting the picture as a rabbit but are simply reporting what they see.  But in the case where a person sees a duck first and then later sees a rabbit, Wittgenstein isn’t sure what to make of this.  However, he claims to be sure that whatever it is, it can’t be the case that the external world stays the same while an internal cognitive change takes place.  Wittgenstein was incorrect on this point, because the external world doesn’t change (in any relevant sense) whether we’re seeing the duck or seeing the rabbit.  Furthermore, he never demonstrates why two different perceptions would require a change in the external world.  The fact of the matter is, you can stare at this static picture and ask yourself to see a duck or to see a rabbit and it will affect your perception accordingly.  This is partially accomplished by mentally rotating the image in your imagination and seeing whether that changes how well it matches one conception or the other, and since it matches both conceptions to a high degree, you can easily perceive it one way or the other.  Your brain is simply processing competing hypotheses to account for the incoming sensory information, and the top-down predictions of rabbitness or duckness (which you’ve acquired over past experiences) actually change the way you perceive it, with no change required in the external world (despite Wittgenstein’s assertion to the contrary).

To give yet another illustration of the probabilistic nature of language, just imagine the head of a bald man and ask yourself: if you were to add one hair at a time to this bald man’s head, at what point does he lose the property of baldness?  If hairs were slowly added at random, and you could simply say “Stop!  Now he’s no longer bald!” at some particular time, there’s no doubt in my mind that if this procedure were repeated (even if the hairs were added non-randomly), you would say “Stop!  Now he’s no longer bald!” at a different point in the transition.  Similarly, if you were looking at a light that was changing color from red to orange and were asked to say when the color had changed to orange, you would pick a point in the transition within some margin of error, but it wouldn’t be an exact, repeatable point.  We could do this thought experiment with all sorts of concepts attached to words, like cat and dog; for example, use a computer graphics program to seamlessly morph a picture of a cat into a picture of a dog and ask at what point the cat “turned into” a dog.  It’s going to be based on a probability of coincident features that you detect, which can vary over time.  Here’s a series of pictures showing a chimpanzee morphing into Bill Clinton to better illustrate this point:

At what point do we stop seeing a picture of a chimpanzee and start seeing a picture of something else?  When do we first see Bill Clinton?  What if I expanded this series of 15 images into a series of 1000 images so that this transition happened even more gradually?  You would be highly unlikely to pick the exact same point in the transition twice in a row if the images weren’t numbered or arranged in a grid.  We can analogize this phenomenon with an ongoing problem in science, known as the species problem.  This problem can be described as the inherent difficulty of defining exactly what a species is, which is necessary if one wants to determine if and when one species evolves into another.  This problem occurs because the evolutionary processes giving rise to new species are relatively slow and continuous, whereas sorting those organisms into sharply defined categories eliminates that generational continuity and replaces it with discrete steps.

And we can see this effect in the series of images above, where each picture could represent some large number of generations in an evolutionary timeline, where each picture/organism looks roughly like the “parent” or “child” of the picture/organism that is adjacent to it.  Despite this continuity, if we look at the first picture and the last one, they look like pictures of distinct species.  So if we want to categorize the first and last picture as distinct species, then we create a problem when trying to account for every picture/organism that lies in between that transition.  Similarly words take on an appearance of strict categorization (of meaning) when in actuality, any underlying meaning attached is probabilistic and dynamic.  And as Wittgenstein pointed out, this makes it more appropriate to consider meaning as use so that the probabilistic and dynamic attributes of meaning aren’t lost.
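A tiny simulation can illustrate the non-repeatability of these judgments (again my own sketch, with a noisy decision criterion standing in for whatever variability underlies our category judgments; the numbers are invented):

```python
import random

def report_switch_point(steps: int = 1000, boundary: float = 0.5, noise: float = 0.05) -> int:
    """Walk a 0-to-1 continuum (red to orange, bald to not-bald) and report
    the first step judged to have crossed the category boundary."""
    threshold = random.gauss(boundary, noise)  # the criterion drifts from trial to trial
    for step in range(steps):
        if step / steps > threshold:
            return step
    return steps

random.seed(1)
# Three passes over the very same transition yield three different answers.
print([report_switch_point() for _ in range(3)])
```

Each run traverses an identical transition, yet the reported switch point wanders, just as our “Stop!  Now he’s no longer bald!” does.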

Now you may think you can get around this problem of fluidity or fuzzy boundaries with concepts that are simpler and more abstract, like the concept of a particular quantity (say, a quantity of four objects) or other concepts in mathematics.  But in order to learn these concepts in the first place, like quantity, and then associate particular instances of them with a word, like four, one had to be presented with a number of experiences and infer what was common to all of them (as was the case with redness mentioned earlier).  And this inference (I would argue) involves a probabilistic process as well; it’s just that the resulting probability of our inferring particular experiences as an instance of four objects is incredibly high and therefore repeatable and relatively unambiguous.  Therefore that kind of inference is likely to be sustained no matter what the context, and it is likely to be shared by two individuals with 100% semantic overlap (i.e. it’s almost certain that what I mean by four is exactly what you mean by four, even though this is almost certainly not the case for a concept like love or consciousness).  This makes mathematical concepts qualitatively different from other concepts (especially those that are more complex or that more closely map onto reality), but it doesn’t negate their having a probabilistic attribute or foundation.

Looking at the Big Picture

Though this discussion of language and meaning is not an exhaustive analysis of Wittgenstein’s Philosophical Investigations, it represents an analysis of the main theme present throughout.  His main point was to shed light on the disparity between how we often think of language and how we actually use it.  When we stray from the way language is actually used in our everyday lives, in one form of social life or another, and instead misuse it (as in philosophy), all sorts of problems and unwarranted conclusions arise.  He also wants his readers to realize that the ultimate goal of philosophy should not be to construct metaphysical theories and deep explanations underlying everyday phenomena, since these are often born out of unwarranted generalizations and other assumptions stemming from how our grammar is structured.  Instead we ought to subdue our temptations to generalize and to be dogmatic, and use philosophy as a kind of therapeutic tool to keep our abstract thinking in check and to better understand ourselves and the world we live in.

Although I disagree with some of Wittgenstein’s claims about cognition (in terms of how intimately it is connected to the external world) and take some issue with his arguably less useful conception of meaning, he makes a lot of sense overall.  Wittgenstein was clearly hitting upon a real difference between the way actual causal relations in our experience are structured and how those relations are represented in language.  Personally, I think that work within philosophy is moving in the right direction if the contributions made therein lead us to make more successful predictions about the causal structure of the world.  And I believe this to be so even if this progress includes generalizations that may not be exactly right.  As long as we can structure our thinking to make more successful predictions, then we’re moving forward as far as I’m concerned.  In any case, I liked the book overall and thought that the interlocutory style gave the reader a nice break from the typical form seen in philosophical argumentation.  I highly recommend it!

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part V)

In the previous post, part 4 in this series on Predictive Processing (PP), I explored some aspects of reasoning and how different forms of reasoning can be built from a foundational bedrock of Bayesian inference (click here for parts 1, 2, or 3).  This has a lot to do with language, but I also claimed that it depends on how the brain generates new models, which I think is likely to involve some kind of natural selection operating on neural networks.  The hierarchical structure of the generative models for these predictions, as described within a PP framework, also seems to fit well with the hierarchical structure we find in the brain’s neural networks.  In this post, I’m going to talk about the relation between memory, imagination, and unconscious and conscious forms of reasoning.
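As a quick refresher, the updating rule at the bedrock of all of this is just Bayes’ theorem, where a hypothesis $h$ (a generative model’s prediction) is scored against incoming evidence $e$ (sensory input):

$$ P(h \mid e) \;=\; \frac{P(e \mid h)\, P(h)}{P(e)} $$

with $P(h)$ the prior confidence in the model, $P(e \mid h)$ the likelihood of the evidence given the model, and $P(h \mid e)$ the updated (posterior) confidence.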

Memory, Imagination, and Reasoning

Memory is of course crucial to the PP framework whether for constructing real-time predictions of incoming sensory information (for perception) or for long-term predictions involving high-level, increasingly abstract generative models that allow us to accomplish complex future goals (like planning to go grocery shopping, or planning for retirement).  Either case requires the brain to have stored some kind of information pertaining to predicted causal relations.  Rather than memories being some kind of exact copy of past experiences (where they’d be stored like data on a computer), research has shown that memory functions more like a reconstruction of those past experiences which are modified by current knowledge and context, and produced by some of the same faculties used in imagination.

This accounts for any false or erroneous aspects of our memories, where the recalled memory can differ substantially from how the original event was experienced.  It also accounts for why our memories become increasingly altered as more time passes.  Over time, we learn new things, continuing to change many of our predictive models about the world, and thus the older the memories are, the more involved the reconstructive process becomes.  And the context we find ourselves in when trying to recall certain memories further affects this reconstruction process, adapting our memories in some sense to better match what we find most salient and relevant in the present moment.

Conscious vs. Unconscious Processing & Intuitive Reasoning (Intuition)

Another attribute of memory is that it is primarily unconscious, where we seem to have this pool of information that is kept out of consciousness until parts of it are needed (during memory recall or other conscious thought processes).  In fact, within the PP framework we can think of most of our generative models (predictions), especially those operating in the lower levels of the hierarchy, as being out of our conscious awareness as well.  However, since our memories are composed of (or reconstructed with) many higher-level predictions, and since only a limited number of them can enter our conscious awareness at any moment, this implies that most of the higher-level predictions are also being maintained and processed unconsciously.

It’s worth noting however that when we were first forming these memories, a lot of the information was in our consciousness (the higher-level, more abstract predictions in particular).  Within PP, consciousness plays a special role since our attention modifies what is called the precision weight (or synaptic gain) on any prediction error that flows upward through the predictive hierarchy.  This means that the prediction errors produced from the incoming sensory information or at even higher levels of processing are able to have a greater impact on modifying and updating the predictive models.  This makes sense from an evolutionary perspective, where we can ration our cognitive resources in a more adaptable way, by allowing things that catch our attention (which may be more important to our survival prospects) to have the greatest effect on how we understand the world around us and how we need to act at any given moment.
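To make the role of precision weighting concrete, here’s a minimal single-level sketch in Python (my own toy illustration, not a full PP model; the function and all the numbers are invented).  The update is the standard precision-weighted combination of a Gaussian prior and a Gaussian observation, so attention’s effect can be read off directly from the gain term:

```python
def update_belief(prior_mean: float, prior_precision: float,
                  observation: float, sensory_precision: float) -> float:
    """One precision-weighted update of a Gaussian belief (a Kalman-style step)."""
    prediction_error = observation - prior_mean
    # Attention, on this account, scales sensory_precision up or down.
    gain = sensory_precision / (sensory_precision + prior_precision)
    return prior_mean + gain * prediction_error

# Attended input (high sensory precision) lets prediction error reshape the model...
print(update_belief(prior_mean=0.0, prior_precision=1.0,
                    observation=10.0, sensory_precision=4.0))  # 8.0
# ...unattended input (low sensory precision) barely updates it.
print(update_belief(prior_mean=0.0, prior_precision=1.0,
                    observation=10.0, sensory_precision=0.1))  # ~0.9
```

The same knob runs in both directions: raising `prior_precision` instead (an entrenched prior) likewise shrinks the gain, which is exactly the resistance to updating that intuitions show below.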

The more often we encounter certain predicted causal relations in a conscious fashion, the more likely those predictions are to become automated or unconsciously processed.  And if this has happened with certain rules of inference that govern how we manipulate and process many of our predictive models, it seems reasonable to suspect that this would contribute to what we call our intuitive reasoning (or intuition).  After all, intuition seems to give people the sense of knowing something without knowing how it was acquired and without any present conscious process of reasoning.

This is similar to muscle memory or procedural memory (like learning how to ride a bike) which is consciously processed at first (thus involving many parts of the cerebral cortex), but after enough repetition it becomes a faster and more automated process that is accomplished more economically and efficiently by the basal ganglia and cerebellum, parts of the brain that are believed to handle a great deal of unconscious processing like that needed for procedural memory.  This would mean that the predictions associated with these kinds of causal relations begin to function out of our consciousness, even if the same predictive strategy is still in place.

As mentioned above, one difference between this unconscious intuition and other forms of reasoning that operate within the purview of consciousness is that our intuitions are less likely to be updated or changed based on new experiential evidence, since our conscious attention isn’t involved in the updating process.  This means that upward-flowing prediction errors that encounter downward-flowing predictions operating unconsciously will carry little precision weight, and so will have little impact in updating those predictions.  Furthermore, the fact that the most automated predictions are often those that we’ve been using for most of our lives means that they are also likely to have extremely high Bayesian priors, further isolating them from modification.

Some of these priors may become what are called hyperpriors, or priors over priors (many of these believed to be established early in life), where there may be nothing that can overcome them, because they describe an extremely abstract feature of the world.  An example of a possible hyperprior could be one that demands that the brain settle on one generative model even when it’s comparable to several others under consideration.  One could call this a “tie breaker” hyperprior: if the brain didn’t have this kind of predictive mechanism in place, it might never be able to settle on a model, causing it to see the world (or some aspect of it) as a superposition of equiprobable states rather than one determinate state.  It’s easy to see the potential problem for an organism’s survival prospects if it didn’t have this kind of hyperprior in place.  Whether a hyperprior like this is a form of innate specificity or is acquired in early learning is debatable.

An obvious trade-off with intuition (or any kind of innate biases) is that it provides us with fast, automated predictions that are robust and likely to be reliable much of the time, but at the expense of not being able to adequately handle more novel or complex situations, thereby leading to fallacious inferences.  Our cognitive biases are also likely related to this kind of unconscious reasoning whereby evolution has naturally selected cognitive strategies that work well for the kind of environment we evolved in (African savanna, jungle, etc.) even at the expense of our not being able to adapt as well culturally or in very artificial situations.

Imagination vs. Perception

One large benefit of storing so much perceptual information in our memories (predictive models with different spatio-temporal scales) is our ability to re-create it offline (so to speak).  This is where imagination comes in, where we are able to effectively simulate perceptions without requiring a stream of incoming sensory data that matches it.  Notice however that this is still a form of perception, because we can still see, hear, feel, taste and smell predicted causal relations that have been inferred from past sensory experiences.

The crucial difference, within a PP framework, is the role of precision weighting on the prediction error, just as we saw above in terms of trying to update intuitions.  If precision weighting is set or adjusted to be relatively low with respect to a particular set of predictive models, then prediction error will have little if any impact on those models.  During imagination, we effectively decouple the bottom-up prediction error from the top-down predictions associated with our sensory cortex (by reducing the precision weighting of the prediction error), thus allowing us to intentionally perceive things that aren’t actually in the external world.  We need not decouple the error from the predictions entirely, as we may want our imagination to somehow correlate with what we’re actually perceiving in the external world.  For example, maybe I want to watch a car driving down the street and simply imagine that it is a different color, while still seeing the rest of the scene as I normally would.  In general though, it is this decoupling “knob” that we can turn (precision weighting) that underlies our ability to produce and discriminate between normal perception and our imagination.
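
Here’s a toy continuation of the earlier precision-weighting sketch to illustrate this decoupling “knob” (again, a hypothetical illustration rather than an actual PP implementation):

```python
# Toy sketch: during imagination the precision weight on sensory prediction
# error is turned way down, so top-down predictions are free to diverge
# from the incoming data without being corrected.

def perceive(prediction, sensory_input, sensory_precision):
    """One step of perception: correct the prediction toward the input."""
    error = sensory_input - prediction
    return prediction + sensory_precision * error

actual_scene = 5.0        # stand-in for "the car is red"
imagined_scene = 9.0      # stand-in for "the car is blue"

# Normal perception: high sensory precision pulls us back to the data.
print(perceive(imagined_scene, actual_scene, sensory_precision=1.0))   # 5.0

# Imagination: near-zero precision decouples us from the data, so the
# self-generated prediction survives essentially unchanged.
print(perceive(imagined_scene, actual_scene, sensory_precision=0.05))  # 8.8
```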

So what happens when we lose the ability to control our perception in a normal way (whether consciously or not)?  Well, this usually results in some kind of hallucination.  Since perception is often referred to as a form of controlled hallucination (within PP), we could better describe a pathological hallucination (such as that arising from certain psychedelic drugs or a condition like schizophrenia) as a form of uncontrolled hallucination.  In some cases, even with a perfectly normal/healthy brain, when the prediction error simply can’t be minimized enough, or the brain continuously switches between models based on what we’re looking at, we experience perceptual illusions.

Whether it’s illusions, hallucinations, or any other kind of perceptual pathology (like not being able to recognize faces), PP offers a good explanation for why these kinds of experiences can happen to us.  It’s either because the models are poor (in their causal structure or priors) or because something isn’t being controlled properly, like the delicate balance between precision weighting and prediction error, either of which could result from an imbalance in neurotransmitters or some kind of brain damage.

Imagination & Conscious Reasoning

While most people would tend to define imagination as that which pertains to visual imagery, I prefer to classify all conscious experiences that are not directly resulting from online perception as imagination.  In other words, any part of our conscious experience that isn’t stemming from an immediate inference of incoming sensory information is what I consider to be imagination.  This is because any kind of conscious thinking is going to involve an experience that could in theory be re-created by an artificial stream of incoming sensory information (along with our top-down generative models that put that information into a particular context of understanding).  As long as the incoming sensory information was a particular way (any way that we can imagine!), even if it could never be that way in the actual external world we live in, it seems to me that such a stream, given the right top-down predictive model, should be able to reproduce any conscious process.  Another way of saying this is that imagination is simply another word to describe any kind of offline conscious mental simulation.

This also means that I’d classify any and all kinds of conscious reasoning processes as yet another form of imagination.  Just as is the case with more standard conceptions of imagination (within PP at least), we are simply taking particular predictive models and manipulating them in certain ways in order to simulate some result, with this process decoupled (at least in part) from actual incoming sensory information.  We may, for example, apply a rule of inference that we’ve picked up on and manipulate several predictive models of causal relations using that rule.  As mentioned in the previous post and in the post from part 2 of this series, language is also likely to play a special role here, where we’ll likely be using it to help guide this conceptual manipulation process by organizing and further representing the causal relations in a linguistic form, and then determining the resulting inference (which will more than likely be in a linguistic form as well).  In doing so, we are able to take highly abstract properties of causal relations and apply rules to them to extract new information.

If I imagine a purple elephant trumpeting and flying in the air over my house, even though I’ve never experienced such a thing, it seems clear that I’m manipulating several different types of predicted causal relations at varying levels of abstraction and experiencing the result of that manipulation.  This involves inferred causal relations like those pertaining to visual aspects of elephants, the color purple, flying objects, motion in general, houses, the air, and inferred causal relations pertaining to auditory aspects like trumpeting sounds and so forth.

Specific instances of these kinds of experienced causal relations have led to my inferring them as an abstract probabilistically-defined property (e.g. elephantness, purpleness, flyingness, etc.) that can be reused and modified to some degree to produce an infinite number of possible recreated perceptual scenes.  These may not be physically possible perceptual scenes (since elephants don’t have wings to fly, for example) but regardless I’m able to add or subtract, mix and match, and ultimately manipulate properties in countless ways, only limited really by what is logically possible (so I can’t possibly imagine what a square circle would look like).

What if I’m performing a mathematical calculation, like “adding 9 + 9”, or some other similar problem?  This appears (upon first glance at least) to be very qualitatively different than simply imagining things that we tend to perceive in the world like elephants, books, music, and other things, even if they are imagined in some phantasmagorical way.  As crazy as those imagined things may be, they still contain things like shapes, colors, sounds, etc., and a mathematical calculation seems to lack this.  I think the key thing to realize here is that the fundamental process of imagination is the ability to add, subtract, and manipulate abstract properties in any way that is logically possible (given our current set of predictive models).  This means that we can imagine properties or abstractions that lack all the richness of a typical visual/auditory perceptual scene.

In the case of a mathematical calculation, I would be manipulating previously acquired predicted causal relations that pertain to quantity and changes in quantity.  Once I was old enough to infer that separate objects existed in the world, then I could infer an abstraction of how many objects there were in some space at some particular time.  Eventually, I could abstract the property of how many objects without applying it to any particular object at all.  Using language to associate a linguistic symbol for each and every specific quantity would lay the groundwork for a system of “numbers” (where numbers are just quantities pertaining to no particular object at all).  Once this was done, then my brain could use the abstraction of quantity and manipulate it by following certain inferred rules of how quantities can change by adding to or subtracting from them.  After some practice and experience I would now be in a reasonable position to consciously think about “adding 9 + 9”, and either do it by following a manual iterative rule of addition that I’ve learned to do with real or imagined visual objects (like adding up some number of apples or dots/points in a row or grid), or I can simply use a memorized addition table and search/recall the sum I’m interested in (9 + 9 = 18).
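
To illustrate the difference between these two strategies, here’s a trivial sketch contrasting the manual iterative rule with recall from a memorized table (purely illustrative, of course):

```python
# Toy contrast between the two strategies described above: a manual
# iterative rule of addition (counting up) versus recall from a
# memorized addition table.

def add_by_counting(a, b):
    """Apply the iterative rule: start at a, count up b times."""
    total = a
    for _ in range(b):
        total += 1
    return total

# A memorized addition table: direct recall, no stepwise procedure.
addition_table = {(i, j): i + j for i in range(10) for j in range(10)}

print(add_by_counting(9, 9))    # 18, derived step by step
print(addition_table[(9, 9)])   # 18, simply recalled
```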

Whether we consider imagining a purple elephant, mentally adding up numbers, thinking about what I’m going to say to my wife when I see her next, or trying to explicitly apply logical rules to some set of concepts, all of these forms of conscious thought or reasoning are simply different sets of predictive models that I’m manipulating in mental simulations until I arrive at a perception that’s understood in the desired context and that has minimal prediction error.

Putting it all together

In summary, I think we can gain a lot of insight by looking at all the different aspects of brain function through a PP framework.  Imagination, perception, memory, intuition, and conscious reasoning fit together very well when viewed as different aspects of hierarchical predictive models that are manipulated and altered in ways that give us a much firmer grip on the world we live in and its inferred causal structure.  Not only that, but this kind of cognitive architecture also provides us with an enormous potential for creativity and intelligence.  In the next post in this series, I’m going to talk about consciousness, specifically theories of consciousness and how they may be viewed through a PP framework.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part IV)

In the previous post which was part 3 in this series (click here for parts 1 and 2) on Predictive Processing (PP), I discussed how the PP framework can be used to adequately account for traditional and scientific notions of knowledge, by treating knowledge as a subset of all the predicted causal relations currently at our brain’s disposal.  This subset of predictions that we tend to call knowledge has the special quality of especially high confidence levels (high Bayesian priors).  Within a scientific context, knowledge tends to have an even stricter definition (and even higher confidence levels) and so we end up with a smaller subset of predictions which have been further verified through comparing them with the inferred predictions of others and by testing them with external means of instrumentation and some agreed upon conventions for analysis.

However, no amount of testing or verification is going to give us direct access to any knowledge per se.  Rather, the creation or discovery of knowledge has to involve the application of some kind of reasoning to explain the causal inputs, and only after this reasoning process can the resulting predicted causal relations be validated to varying degrees by testing them (through raw sensory data, external instruments, etc.).  So getting an adequate account of reasoning within any theory or framework of overall brain function is going to be absolutely crucial, and I think that the PP framework is well-suited for the job.  As has already been mentioned throughout this post-series, this framework fundamentally relies on a form of Bayesian inference (or some approximation of it), which is a type of reasoning.  It is this inferential strategy then, combined with a hierarchical neurological structure for it to work upon, that would allow our knowledge to be created in the first place.

Rules of Inference & Reasoning Based on Hierarchical-Bayesian Prediction Structure, Neuronal Selection, Associations, and Abstraction

While PP tends to focus on perception and action in particular, I’ve mentioned that I see the same general framework as being able to account for not only the folk psychological concepts of beliefs, desires, and emotions, but also that the hierarchical predictive structure it entails should plausibly be able to account for language and ontology and help explain the relationship between the two.  It seems reasonable to me that the associations between all of these hierarchically structured beliefs or predicted causal relations at varying levels of abstraction can provide a foundation for our reasoning as well, whether in intuitive or logical forms.

To illustrate some of the importance of associations between beliefs, consider an example like the belief in object permanence (i.e. that objects persist or continue to exist even when I can no longer see them).  This belief of ours has an extremely high prior because our entire life experience has only served to support this prediction in a large number of ways.  This means that it’s become embedded or implicit in a number of other beliefs.  If I didn’t predict that object permanence was a feature of my reality, then an enormous number of everyday tasks would become difficult if not impossible to do because objects would be treated as if they are blinking into and out of existence.

We have a large number of beliefs that require object permanence (and which are thus associated with object permanence), and so it functions as a more fundamental, lower-level prediction (though not as low-level as sensory information entering the visual cortex) that we build upon to form any number of higher-level predictions in the overall conceptual/predictive hierarchy.  When I put money in a bank, I expect to be able to spend it even if I can’t see it anymore (such as with a check or debit card).  This is only possible if my money continues to exist even when out of view (regardless of whether the money is in paper/coin or electronic form).  This is just one of countless everyday tasks that depend on this belief.  So it’s no surprise that this belief (this set of predictions) would have an incredibly high Bayesian prior, and therefore I would treat it as a non-negotiable fact about reality.

On the other hand, when I was a newborn infant, I didn’t have this belief of object permanence (or at best, it was a very weak belief).  Most psychologists estimate that our belief in object permanence isn’t acquired until after several months of brain development and experience.  This would translate to our having a relatively low Bayesian prior for this belief early on in our lives, and only once a person begins to form predictions based on these kinds of recognized causal relations can that prior increase and perhaps eventually reach a point that results in the subjective experience of a high degree of certainty in this particular belief.  From that point on, we are likely to simply take that belief for granted, no longer questioning it.  The most important thing to note here is that the more associations made between beliefs, the higher their effective weighting (their priors), and thus the higher our confidence in those beliefs becomes.
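
A toy Bayesian calculation can illustrate how repeated confirmation drives a prior upward until the belief becomes effectively non-negotiable; the likelihood values here are made up purely for illustration:

```python
# Toy Bayesian update for a belief like object permanence: each confirming
# experience nudges the prior upward, and once the prior is very high,
# a single disconfirming observation barely moves it.

def bayes_update(prior, likelihood_true=0.99, likelihood_false=0.5, confirmed=True):
    """Update P(object permanence) on one observation via Bayes' rule."""
    p_obs_true = likelihood_true if confirmed else 1 - likelihood_true
    p_obs_false = likelihood_false  # a permanence-free world is uninformative
    evidence = p_obs_true * prior + p_obs_false * (1 - prior)
    return p_obs_true * prior / evidence

belief = 0.5  # the infant's weak initial prior
for _ in range(20):  # twenty confirming encounters with hidden objects
    belief = bayes_update(belief, confirmed=True)
print(round(belief, 6))  # very close to 1: the belief is taken for granted

# One odd disconfirming experience hardly dents the entrenched prior.
belief = bayes_update(belief, confirmed=False)
print(round(belief, 6))  # still very close to 1
```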

Neural Implementation, Spontaneous or Random Neural Activity & Generative Model Selection

This all seems pretty reasonable if a neuronal implementation worked to strengthen Bayesian priors as a function of the neuronal/synaptic connectivity (among other factors), where neurons that fire together are more likely to wire together, and connectivity strength increases the more often this happens.  On the flip-side, the less often this happens (or if it isn’t happening at all), the more likely the connectivity is to be weakened or non-existent.  So if a concept (or a belief composed of many conceptual relations) is represented by some cluster of interconnected neurons and their activity, then its applicability to other concepts increases its chances of not only firing but also strengthening its wiring with those other clusters of neurons, thus plausibly increasing the Bayesian priors for the overlapping concept or belief.
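
A minimal Hebbian-style sketch (my own toy, not a biologically accurate model) of how this strengthening and weakening might look:

```python
# Toy Hebbian rule: connections between co-active neuron clusters are
# strengthened ("fire together, wire together"), while idle connections
# slowly decay toward zero.

def hebbian_step(weight, pre_active, post_active, rate=0.1, decay=0.02):
    """Strengthen the connection when both clusters fire; otherwise decay."""
    if pre_active and post_active:
        return weight + rate * (1 - weight)  # saturating growth
    return weight * (1 - decay)              # use it or lose it

w = 0.1
for _ in range(30):            # two concepts repeatedly co-activated
    w = hebbian_step(w, True, True)
print(round(w, 3))             # strong association, i.e. a high effective prior

for _ in range(30):            # the association falls out of use
    w = hebbian_step(w, False, False)
print(round(w, 3))             # weakened connectivity
```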

Another likely important factor in the Bayesian inferential process, in terms of the brain forming new generative models or predictive hypotheses to test, is the role of spontaneous or random neural activity and neural cluster generation.  This random neural activity could plausibly provide a means for some randomly generated predictions or random changes in the pool of predictive models that our brain is able to select from.  Similar to the role of random mutation in gene pools which allows for differential reproductive rates and relative fitness of offspring, some amount of randomness in neural activity and the generative models that result would allow for improved models to be naturally selected based on those which best minimize prediction error.  The ability to minimize prediction error could be seen as a direct measure of the fitness of the generative model, within this evolutionary landscape.

This idea is related to the late Gerald Edelman’s Theory of Neuronal Group Selection (NGS), also known as Neural Darwinism, which I briefly explored in a post I wrote long ago.  I’ve long believed that this kind of natural selection process is applicable to a number of different domains (aside from genetics), and I think any viable version of PP is going to depend on it to at least some degree.  This random neural activity (and the naturally selected products derived from it) could be thought of as contributing a steady supply of new generative models to choose from, and thus contributing to our overall human creativity as well, whether for reasoning and problem-solving strategies or simply for artistic expression.
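
Here’s a toy selectionist loop in this spirit, in which randomly perturbed generative models compete and the ones that best minimize prediction error survive (a crude caricature of the idea, not Edelman’s actual theory or a real PP implementation):

```python
import random

# Toy "neural Darwinism" loop: randomly perturb a population of generative
# models, then select the variants that best minimize prediction error.

def prediction_error(model, observations):
    """Fitness proxy: total squared error of the model's predictions."""
    return sum((model - obs) ** 2 for obs in observations)

observations = [4.8, 5.1, 5.0, 4.9]           # the world's hidden cause ~ 5.0
population = [random.uniform(0, 10) for _ in range(20)]

for generation in range(50):
    # Spontaneous activity: random variation in the model pool.
    mutants = [m + random.gauss(0, 0.3) for m in population]
    # Selection: keep the models with the lowest prediction error.
    pool = sorted(population + mutants, key=lambda m: prediction_error(m, observations))
    population = pool[:20]

print(round(population[0], 2))  # converges near 5.0, the best "hypothesis"
```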

Increasing Abstraction, Language, & New Rules of Inference

This kind of use-it-or-lose-it property of brain plasticity, combined with dynamic associations between concepts or beliefs and their underlying predictive structure, would allow the brain to accommodate learning by extracting statistical inferences (at increasing levels of abstraction) as they occur and by modifying or eliminating those inferences by changing their hierarchical associative structure as prediction error is encountered.  While some form of Bayesian inference (or an approximation to it) underlies this process, once lower-level inferences about certain causal relations have been made, I believe that new rules of inference can be derived from this basic Bayesian foundation.

To see how this might work, consider how we acquire a skill like learning how to speak and write in some particular language.  The rules of grammar, the syntactic structure and so forth which underlie any particular language are learned through use.  We begin to associate words with certain conceptual structures (see part 2 of this post-series for more details on language and ontology) and then we build up the length and complexity of our linguistic expressions by adding concepts built on higher levels of abstraction.  To maximize the productivity and specificity of our expressions, we also learn more complex rules pertaining to the order in which we speak or write various combinations of words (which varies from language to language).

These grammatical rules can be thought of as just another higher-level abstraction, another higher-level causal relation that we predict will convey more specific information to whomever we are speaking to.  If it doesn’t seem to do so, then we either modify what we have mistakenly inferred to be those grammatical rules, or, depending on the context, we may simply assume that the person we’re talking to hasn’t conformed to the language or grammar that our community seems to be using.

Just like with grammar (which provides a kind of logical structure to our language), we can begin to learn new rules of inference built on the same probabilistic predictive bedrock of Bayesian inference.  We can learn some of these rules explicitly by studying logic, induction, deduction, etc., and consciously applying those rules to infer some new piece of knowledge, or we can learn these kinds of rules implicitly based on successful predictions (pertaining to behaviors of varying complexity) that happen to result from stumbling upon this method of processing causal relations within various contexts.  As mentioned earlier, this would be accomplished in part by the natural selection of randomly-generated neural network changes that best reduce the incoming prediction error.

However, language and grammar are interesting examples of an acquired set of rules because they also happen to be the primary tool that we use to learn other rules (along with anything we learn through verbal or written instruction), including (as far as I can tell) various rules of inference.  The logical structure of language (though it need not have an exclusively logical structure), its ability to be used for a number of cognitive short-cuts, and its influence on our thought complexity and structure mean that we are likely dependent on it during our reasoning processes as well.

When we perform any kind of conscious reasoning process, we are effectively running various mental simulations where we can intentionally manipulate our various generative models to test new predictions (new models) at varying levels of abstraction, and thus we also manipulate the linguistic structure associated with those generative models as well.  Since I have a lot more to say on reasoning as it relates to PP, including more on intuitive reasoning in particular, I’m going to expand on this further in my next post in this series, part 5.  I’ll also be exploring imagination and memory including how they relate to the processes of reasoning.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part II)

In the first post of this series I introduced some of the basic concepts involved in the Predictive Processing (PP) theory of perception and action.  I briefly tied together the notions of belief, desire, emotion, and action from within a PP lens.  In this post, I’d like to discuss the relationship between language and ontology through the same framework.  I’ll also start talking about PP in an evolutionary context as well, though I’ll have more to say about that in future posts in this series.

Active (Bayesian) Inference as a Source for Ontology

One of the main themes within PP is the idea of active (Bayesian) inference whereby we physically interact with the world, sampling it and modifying it in order to reduce our level of uncertainty in our predictions about the causes of the brain’s inputs.  Within an evolutionary context, we can see why this form of embodied cognition is an ideal schema for an information processing system to employ in order to maximize chances of survival in our highly interactive world.

In order to reduce the amount of sensory information that has to be processed at any given time, it is far more economical for the brain to only worry about the prediction error that flows upward through the neural system, rather than processing all incoming sensory data from scratch.  If the brain is employing a set of predictions that can “explain away” most of the incoming sensory data, then the downward flow of predictions can encounter the upward flow of sensory information and effectively cancel it out; the only thing that remains to propagate upward through the system and do any “cognitive work” on the predictive models flowing downward (i.e. the only thing that needs to be processed) is the remaining prediction error (the mismatch between the predicted and the actual sensory input).  This is similar to data compression strategies for video files (for example) that only encode the information that changes over time (pixels that change brightness/color) and simply compress the information that remains constant (pixels that do not change from frame-to-frame).
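
The video-compression analogy can be made concrete with a toy delta-encoding scheme, where only the “prediction error” (the pixels that changed) gets transmitted (a simplified illustration, not how any particular codec actually works):

```python
# Toy delta-encoding scheme, analogous to the video-compression comparison
# above: transmit only the "prediction error" (pixels that changed), not
# the whole frame.

def encode_frame(previous, current):
    """Send only the residuals: (index, new_value) for changed pixels."""
    return [(i, cur) for i, (prev, cur) in enumerate(zip(previous, current)) if prev != cur]

def decode_frame(previous, residuals):
    """Reconstruct the frame: prediction (previous frame) plus the error."""
    frame = list(previous)
    for i, value in residuals:
        frame[i] = value
    return frame

frame1 = [0, 0, 1, 1, 0, 0]
frame2 = [0, 0, 0, 1, 1, 0]   # the bright region shifted one pixel

residuals = encode_frame(frame1, frame2)
print(residuals)                        # [(2, 0), (4, 1)] -- only the change
print(decode_frame(frame1, residuals))  # full frame recovered from prediction + error
```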

The ultimate goal for this strategy within an evolutionary context is to allow the organism to understand its environment in the most salient ways for the pragmatic purposes of accomplishing goals relating to survival.  But once humans began to develop culture and evolve culturally, the predictive strategy gained a new kind of evolutionary breathing space, being able to predict increasingly complex causal relations and developing technology along the way.  All of these inferred causal relations appear to me to be the very source of our ontology, as each hierarchically structured prediction and its ability to become associated with others provides an ideal platform for differentiating between any number of spatio-temporal conceptions and their categorical or logical organization.

An active Bayesian inference system is also ideal to explain our intellectual thirst, human curiosity, and interest in novel experiences (to some degree), because we learn more about the world (and ourselves) by interacting with it in new ways.  In doing so, we are provided with a constant means of fueling and altering our ontology.

Language & Ontology

Language is an important component as well, and it fits nicely within a PP framework since it serves to further link perception and action together, allowing us to make new kinds of predictions about the world that wouldn’t have been possible without it.  A tool like language also makes a lot of sense from an evolutionary perspective, since better predictions about the world result in a higher chance of survival.

When we use language by speaking or writing it, we are performing an action which is instantiated by the desire to do so (see previous post about “desire” within a PP framework).  When we interpret language by listening to it or by reading, we are performing a perceptual task which is again simply another set of predictions (in this case, pertaining to the specific causes leading to our sensory inputs).  If we were simply sending and receiving non-lingual nonsense, then the same basic predictive principles underlying perception and action would still apply, but something new emerges when we send and receive actual language (which contains information).  With language, we begin to associate certain sounds and visual information with some kind of meaning or meaningful information.  Once we can do this, we can effectively share our thoughts with one another, or at least many aspects of our thoughts with one another.  This provides for an enormous evolutionary advantage as now we can communicate almost anything we want to one another, store it in external forms of memory (books, computers, etc.), and further analyze or manipulate the information for various purposes (accounting, inventory, science, mathematics, etc.).

By being able to predict certain causal outcomes through the use of language, we are effectively using the lower level predictions associated with perceiving and emitting language to satisfy higher level predictions related to more complex goals including those that extend far into the future.  Since the information that is sent and received amounts to testing or modifying our predictions of the world, we are effectively using language to share and modulate one brain’s set of predictions with that of another brain.  One important aspect of this process is that this information is inherently probabilistic which is why language often trips people up with ambiguities, nuances, multiple meanings behind words and other attributes of language that often lead to misunderstanding.  Wittgenstein is one of the more prominent philosophers who caught onto this property of language and its consequence on philosophical problems and how we see the world structured.  I think a lot of the problems Wittgenstein elaborated on with respect to language can be better accounted for by looking at language as dealing with probabilistic ontological/causal relations that serve some pragmatic purpose, with the meaning of any word or phrase as being best described by its use rather than some clear-cut definition.

This probabilistic attribute of language in terms of the meanings of words having fuzzy boundaries also tracks very well with the ontology that a brain currently has access to.  Our ontology, or what kinds of things we think exist in the world, are often categorized in various ways with some of the more concrete entities given names such as: “animals”, “plants”, “rocks”, “cats”, “cups”, “cars”, “cities”, etc.  But if I morph a wooden chair (say, by chipping away at parts of it with a chisel), eventually it will no longer be recognizable as a chair, and it may begin to look more like a table than a chair or like nothing other than an oddly shaped chunk of wood.  During this process, it may be difficult to point to the exact moment that it stopped being a chair and instead became a table or something else, and this would make sense if what we know to be a chair or table or what-have-you is nothing more than a probabilistic high-level prediction about certain causal relations.  If my brain perceives an object that produces too high of a prediction error based on the predictive model of what a “chair” is, then it will try another model (such as the predictive model pertaining to a “table”), potentially leading to models that are less and less specific until it is satisfied with recognizing the object as merely a “chunk of wood”.
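
A toy sketch of this fallback process might look something like the following, where models are ordered from most to least specific and the first one with a tolerable prediction error wins (the features, models, and threshold are all hypothetical):

```python
# Toy sketch of falling back to less and less specific models as prediction
# error grows: "chair" -> "table" -> "chunk of wood".

def recognize(observed_features, models, error_threshold=1):
    """Return the most specific model whose prediction error is tolerable."""
    for name, expected_features in models:  # ordered most to least specific
        error = len(expected_features - observed_features)  # missing features
        if error <= error_threshold:
            return name
    return "unknown"

# Each model predicts a set of features; specific models predict more.
models = [
    ("chair", {"wooden", "flat surface", "legs", "backrest", "seat height"}),
    ("table", {"wooden", "flat surface", "legs"}),
    ("chunk of wood", {"wooden"}),
]

print(recognize({"wooden", "flat surface", "legs", "backrest", "seat height"}, models))  # chair
print(recognize({"wooden", "flat surface", "legs"}, models))  # too much error for "chair": table
print(recognize({"wooden"}, models))                          # chunk of wood
```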

From a PP lens, we can consider lower level predictions pertaining to more basic causes of sensory input (bright/dark regions, lines, colors, curves, edges, etc.) to form some basic ontological building blocks and when they are assembled into higher level predictions, the amount of integrated information increases.  This information integration process leads to condensed probabilities about increasingly complex causal relations, and this ends up reducing the dimensionality of the cause-effect space of the predicted phenomenon (where a set of separate cause-effect repertoires are combined into a smaller number of them).

You can see the advantage here by considering what the brain might do if it’s looking at a black cat sitting on a brown chair.  What if the brain were to look at this scene as merely a set of pixels on the retina that change over time, with no expectation that any subset of pixels will change in ways that differ from any other subset?  This wouldn’t be very useful in predicting how the visual scene will change over time.  What if instead, the brain differentiates one subset of pixels (that correspond to what we call a cat) from all the rest of the pixels, and it does this in part by predicting proximity relations between neighboring pixels in the subset (so if some black pixels move from the right to the left visual field, then some number of neighboring black pixels are predicted to move with them)?

This latter method treats the subset of black-colored pixels as a separate object (as opposed to treating the entire visual scene as a single object), and doing this kind of differentiation in more and more complex ways leads to a well-defined object or concept, or a large number of them.  Associating sounds like “meow” with this subset of black-colored pixels is just one example of yet another set of properties or predictions that further defines this perceived object as distinct from the rest of the perceptual scene.  Associating this object with a visual or auditory label such as “cat” finally links this ontological object with language.  As long as we agree on what is generally meant by the word “cat” (which we determine through its use), then we can share and modify the predictive models associated with such an object or concept, as we can do with any other successful instance of linguistic communication.

Language, Context, and Linguistic Relativism

However, it should be noted that as we get to more complex causal relations (more complex concepts/objects), we can no longer give these concepts a simple one-word label and expect to communicate information about them nearly as easily as we could for the concept of a “cat”.  Think about concepts like “love” or “patriotism” or “transcendence”: there are many different ways that we use those terms, they can mean all sorts of different things, and so the meaning behind those words is largely conveyed to others by the context in which they are used.  Context, in a PP framework, could be described as simply the (expected) conjunction of multiple predictive models (multiple sets of causal relations).  For example, the conjunction of the predictive models pertaining to the concepts of food, pizza, and a desirable taste implies the particular use of the word “love” found in the phrase “I love pizza”.  This use of the word “love” is different from one which involves the conjunction of predictive models pertaining to the concepts of intimacy, sex, infatuation, and care, implied in a phrase like “I love my wife”.  In any case, conjunctions of predictive models can get complicated, and this carries over to our stretching our language to its very limits.
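
As a crude sketch of how context might select among the predictive models associated with an ambiguous word like “love” (the concept sets here are entirely made up for illustration):

```python
# Toy sketch of context as a conjunction of predictive models: the sense
# of an ambiguous word like "love" is picked by which conjunction of
# co-active concepts it best overlaps with.

senses_of_love = {
    "love (appetitive)": {"food", "pizza", "desirable taste"},
    "love (romantic)":   {"intimacy", "sex", "infatuation", "care"},
}

def disambiguate(active_concepts, senses):
    """Choose the sense whose associated models overlap the context most."""
    return max(senses, key=lambda s: len(senses[s] & active_concepts))

print(disambiguate({"food", "pizza"}, senses_of_love))      # love (appetitive)
print(disambiguate({"intimacy", "care"}, senses_of_love))   # love (romantic)
```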

Since we are immersed in language and since it is integral in our day-to-day lives, we also end up being conditioned to think linguistically in a number of ways.  For example, we often think with an interior monologue (e.g. “I am hungry and I want pizza for lunch”) even when we don’t plan on communicating this information to anyone else, so it’s not as if we’re simply rehearsing what we need to say before we say it.  I tend to think however that this linguistic thinking (thinking in our native language) is more or less a result of the fact that the causal relations that we think about have become so strongly associated with certain linguistic labels and propositions, that we sort of automatically think of the causal relations alongside the labels that we “hear” in our head.  This seems to be true even if the causal relations could be thought of without any linguistic labels, in principle at least.  We’ve simply learned to associate them so strongly to one another that in most cases separating the two is just not possible.

On the flip side, this tendency of language to associate itself with our thoughts also puts certain barriers or restrictions on our thoughts.  If we are always preparing to share our thoughts through language, then we’re going to become somewhat entrained to think in ways that can be most easily expressed in a linguistic form.  So although language may simply be along for the ride with many non-linguistic aspects of thought, our tendency to use it may also structure our thinking and reasoning in significant ways.  This would account for why people raised in different cultures with different languages see the world in different ways based on the structure of their language.  While linguistic determinism (the strong version of the Sapir-Whorf hypothesis) seems to have been ruled out, there is still strong evidence to support linguistic relativism (the weak version of the Sapir-Whorf hypothesis), whereby one’s language affects one’s ontology and view of how the world is structured.

If language is so heavily used day-to-day then this phenomenon makes sense as viewed through a PP lens since we’re going to end up putting a high weight on the predictions that link ontology with language since these predictions have been demonstrated to us to be useful most of the time.  Minimal prediction error means that our Bayesian evidence is further supported and the higher the weight carried by these predictions, the more these predictions will restrict our overall thinking, including how our ontology is structured.

Moving on…

I think that these are but a few of the interesting relationships between language and ontology and how a PP framework helps to put it all together nicely, and I just haven’t seen this kind of explanatory power and parsimony in any other kind of conceptual framework about how the brain functions.  This bodes well for the framework and it’s becoming less and less surprising to see it being further supported over time with studies in neuroscience, cognition, psychology, and also those pertaining to pathologies of the brain, perceptual illusions, etc.  In the next post in this series, I’m going to talk about knowledge and how it can be seen through the lens of PP.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)

I’ve been away from writing for a while because I’ve had some health problems relating to my neck.  A few weeks ago I had double-cervical-disc replacement surgery and so I’ve been unable to write and respond to comments and so forth for a little while.  I’m in the second week following my surgery now and have finally been able to get back to writing, which feels very good given that I’m unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I’ve written one post on this topic a little over a year ago (which can be found here) as I’ve become extremely interested in this topic for the last several years now.

The Predictive Processing (PP) theory of perception shows a lot of promise in terms of finding an overarching schema that can account for everything that the brain seems to do.  While its technical application is to account for the acts of perception and active inference in particular, I think it can be used more broadly to account for other descriptions of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships as viewed through a PP lens more because I think it is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let’s begin this post-series by first looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can be best described as simply the set of particular predictions that the brain employs which encompass perception, desires, action, emotion, etc., and which are ultimately mediated and updated in order to reduce prediction errors based on incoming sensory evidence (and which approximates a Bayesian form of inference).  Perception then, which is constituted by a subset of all our beliefs (with many of them being implicit or unconscious beliefs), is more or less a form of controlled hallucination in the sense that what we consciously perceive is not the actual sensory evidence itself (not even after processing it), but rather our brain’s “best guess” of what the causes for the incoming sensory evidence are.

Desires can be best described as another subset of one’s beliefs, and a set of beliefs which has the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states and which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically just modeling some kind of causal relationship (or its negation) which is able to manifest itself into a number of highly-weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies but with a lower weight or precision on the predictions that directly corresponds to the degree of belief or disbelief (and so new sensory evidence will more easily sway such a belief).  And just like our perceptions, which are mediated by a number of low and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in either updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, excreting hormones in your body, etc.).

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, and these priors changing over time in response to prediction errors arising from new incoming sensory evidence that fails to be “explained away” by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly-weighted predictions) and therefore need much more counterfactual sensory evidence to be overcome or modified than for weak beliefs which have relatively low priors (low-weighted predictions).

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of lower level, highly-weighted predictions such as the prediction that eating a piece of what we call an “apple” will most likely result in qualia that accompany the perception of a particular satisfying taste, the lower level prediction that doing so will also cause my perception of hunger to change, and the higher level prediction that it will “give me energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Another prediction or set of predictions is that these expectations will apply to not just one apple but a number of apples (different instances of one type of apple, or different types of apples altogether), and a host of other predictions.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple which, under a PP lens could be described as the highly weighted prediction of what it would feel like to find an apple, to reach for an apple, to grab it, to bite off a piece of it, to chew it, and to swallow it.  If I merely imagine doing such things, then the resulting predictions will necessarily carry such a small weight that they won’t be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally at least) to trick me into thinking that they are actual perceptions (by “actual”, I simply mean perceptions that correspond to incoming sensory data).  This low-weighting attribute of imagined perceptual predictions thus provides a viable way for us to have an imagination and to distinguish it from perceptions corresponding to incoming sensory data, and to distinguish it from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine that is directly influencing and being influenced by the world that its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian evidence).  In this particular example, the highly-weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, which produces some degree of prediction error until the proper actions have taken place in order to reduce said error.  The only two ways of reducing this prediction error are to change the desire (or eliminate it) to one that no longer involves eating an apple, and/or to perform bodily actions that result in actually eating an apple.

Perhaps if I realize that I don’t have any apples in my house, but I realize that I do have bananas, then my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically, thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, it may mean that my predictions will motivate me enough to avoid eating the banana, and instead to predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  Furthermore, it may motivate me to predict actions that lead me to earn the money such that I can purchase the apple (if I don’t already have the money to do so).  To do this, I would be employing a number of predictions having to do with performing actions that lead to me obtaining money, using money to purchase goods, etc.

This is but a taste of what PP has to offer, and how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework) are a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and to language as well, and I’d like to explore this relationship in my next post.  This link can be found here.

“The Brothers Karamazov” – A Moral & Philosophical Critique (Part III)

In the first two posts that I wrote in this series (part I and part II) concerning some concepts and themes mentioned in Dostoyevsky’s The Brothers Karamazov, I talked about moral realism and how it pertains to theism and atheism (and the character Ivan’s own views), and I also talked about moral responsibility and free will to some degree (and how this related to the interplay between Ivan and Smerdyakov).  In this post, I’m going to look at the concepts of moral conscience and intuition, and how they apply to Ivan’s perspective and his experiencing an ongoing hallucination of a demonic apparition.  This demonic apparition only begins to haunt Ivan after he hears that his influence on his brother Smerdyakov led him to murder their father Fyodor.  The demon continues to torment Ivan until just before his other brother Alyosha informs him that Smerdyakov has committed suicide.  Then I’ll conclude with some discussion on the concept of moral desert (justice).

It seems pretty clear that the demonic apparition that appears to Ivan is a psychosomatic hallucination brought about as a manifestation of Ivan’s overwhelming guilt over what his brother has done, since he feels that he bears at least some of the responsibility for his brother’s actions.  We learn earlier in the story that Zosima, a wise elder living at a monastery who acts as a mentor and teacher to Alyosha, had explained to Ivan that everyone bears at least some responsibility for the actions of everyone around them, because human causality is so heavily intertwined, with one person’s actions having a number of complicated effects on the actions of everyone else.  Despite Ivan’s strong initial reservations against this line of reasoning, he seems to have finally accepted that Zosima was right — hence his suffering a nervous breakdown as a result of realizing this.

Obviously Ivan’s moral conscience seems to be driving this turn of events, and this is the case whether or not Ivan explicitly believes that morality is real.  And so we can see that despite Ivan’s moral skepticism, his moral intuitions and/or his newly accepted moral dispositions as per Zosima have led him to his current state of despair.  Similarly, Ivan’s views on the problem of evil — whereby the vast amount of suffering in the world either refutes the existence of God, or shows that this God (if he does exist) must be a moral monster — betray even more of Ivan’s moral views with respect to how he wants the world to be.  His wanting the world to have less suffering in it, along with his wishing that his brother had not committed murder (let alone as a result of his influence on his brother), illustrates a number of moral “oughts” that Ivan subscribes to.  And whether they’re simply based on his moral intuitions or also on rational moral reflection, they illustrate the deeply rooted psychological aspects of morality that are an inescapable facet of the human condition.

This situation also helps to explain some of the underlying motivations behind my own reversion back toward some form of moral realism after becoming an atheist myself, initially catalyzed by my own moral intuitions and later solidified and justified by rational moral reflection on objective facts pertaining to human psychology and other factors.  Now it should be said that moral intuitions on their own are only a generally useful heuristic, as they are often misleading (and incorrect), which is why it is imperative that they be checked by a rational assessment of the facts at hand.  But, nevertheless, they help to illustrate how good and evil can be said to be real (in at least some sense), even to someone like Ivan who doesn’t think they have an objective foundation.  They may not be the conceptions of good and evil described in many religions, with supernatural baggage attached, but they are real nonetheless.

Another interesting point worth noting is in regard to Zosima’s discussion about mutual moral responsibility.  While I already discussed moral responsibility in the last post along with its relation to free will, there’s something rather paradoxical about Dostoyevsky’s reasoning as expressed through Zosima that I found quite interesting.  Zosima talks about how love and forgiveness are necessary because everyone’s actions are intertwined with everyone else’s and therefore everyone bears some responsibility for the sins of others.  This idea of shared responsibility is abhorrent to those in the story that doubt God and the Christian religion (such as Ivan), who only want to be responsible for their own actions, but the complex intertwined causal chain that Zosima speaks of is the same causal chain that many determinists invoke to explain our lack of libertarian free will and how we can’t be held responsible in a causa sui manner for our actions.

Thus, if someone dies and there is in fact an afterlife, by Zosima’s own reasoning that person should not be judged as an individual solely responsible for their actions either.  That person should instead receive unconditional love and forgiveness and be redeemed rather than punished.  But this idea is anathema to standard Christian theology, where one is supposed to be judged and given eternal paradise or eternal torment (with vastly disproportionate consequences given the finite degree of one’s actions).  It’s no surprise that Zosima isn’t looked upon as a model clergyman by some of his fellow monks in the monastery, because his emphatic preaching about love and forgiveness undermines the typical heavy-handed judgemental aspects of God within Christianity.  But in any case, if God exists and understood that people were products of their genes and their environment, which is causally interconnected with everyone else’s (i.e. libertarian free will is logically impossible), then a loving God would grant everyone forgiveness after death and grant them eternal paradise based on that understanding.  And oddly enough, this also undermines Ivan’s own reasoning that good and evil can only exist with an afterlife that undergoes judgement, because forgiveness and eternal paradise should be granted to everyone in the afterlife (by a truly loving God) if Zosima’s reasoning were taken to its logical conclusion.  So not only does Zosima’s reasoning seem to undermine the justification for unequal treatment of souls in the afterlife, but it also undermines the Christian conception of free will to boot (which is logically impossible regardless of Zosima’s reasoning).

And this brings me to the concept of moral desert.  In some ways I agree with Zosima, at least in the sense that love (or more specifically compassion) and forgiveness are extremely important in proper moral reasoning.  And once one realizes the logical impossibility of libertarian free will, this should only encourage one’s use of love and forgiveness, in the sense that people should never be trying to punish a wrongdoer (or hope for their punishment) for the sake of retributive justice or vengeance.  Rather, people should only punish (or hope that one is punished) as much as is necessary to compensate the victim as best as the circumstances allow and (more importantly) to rehabilitate the wrongdoer by reprogramming them through behavioral conditioning.  Anything above and beyond this is excessive, malicious, and immoral.  Similarly, a loving God (if one existed) would never punish anyone in the afterlife beyond what is needed to rehabilitate them (and it would seem that no punishment at all should really be needed if this God had the power to accomplish these feats on immaterial souls using magic).  And if this God had no magic to accomplish this, then at the very least there should never be any eternal punishments, since punishing someone forever (let alone torturing them forever) not only illustrates that there is no goal of rehabilitating the wrongdoer, but also that this God is beyond psychopathic and malevolent.  Again, think of Zosima’s reasoning as it applies here.

Looking back at the story with Smerdyakov, why does the demonic apparition disappear from Ivan right around the time that he learns that Smerdyakov killed himself?  It could be because Ivan thinks that Smerdyakov has gotten what he deserved, and that he’s no longer roaming free (so to speak) after his heinous act of murder.  And it could also be because Ivan seemed sure at that point that he himself would confess to the murder (or at least to having motivated Smerdyakov to commit it).  But if either of these notions is true, then once again Ivan has betrayed yet another moral disposition of his: that murder is morally wrong.  It may also imply that Ivan, deep down, may in fact believe in an afterlife, one in which Smerdyakov will now be judged for his actions.

It no doubt feels good to a lot of people when they see someone that has wronged another getting punished for their bad deeds.  The feeling of justice and even vengeance can be emotionally powerful, especially if the wrongdoer took the life of someone that you or someone else loved very much.  It’s a common feeling to want that criminal to suffer, perhaps to rot in jail until they die, perhaps to be tortured, or what-have-you.  And these intuitions illustrate why so many religious beliefs surrounding judgment in the afterlife share many of these common elements.  People invented these religious beliefs (whether unconsciously or not) because they make them feel better about wrongdoers that may otherwise die without having been judged for their actions.  After all, when is justice going to be served?  It is also a motivating factor for a lot of people to keep their behaviors in check (as per Ivan’s rationale regarding an afterlife requirement in order for good and evil to be meaningful to people).  Even though I don’t think that this particular motivation is necessary (and therefore Ivan’s argument is incorrect) — due to other motivating forces such as the level of fulfillment and personal self-worth in one’s life, gained through living a life of moral virtue, or the lack thereof by those that fail to live virtuously — it is still a motivation that exists for many people and strongly intersects with the concept of moral desert.  Due to its pervasiveness in our intuitions, its role in how we perceive other human beings, and its importance in moral theory in general, people should spend a lot more time critically reflecting on this concept.

In the next part of this post series, I’m going to talk about the conflict between faith and doubt, perhaps the most ubiquitous theme found in The Brothers Karamazov, and how it ties all of these other concepts together.

“The Brothers Karamazov” – A Moral & Philosophical Critique (Part II)

In my last post in this series, concerning Dostoyevsky’s The Brothers Karamazov, I talked about the concept of good and evil and the character Ivan’s atheistic view that they are contingent on God existing (or at least on an afterlife of eternal reward or punishment).  While a decent percentage of atheists may share this view (that objective morality is contingent on God’s existence), I briefly explained my own views (being an atheist myself), which differ from Ivan’s in that I am a moral realist and believe that morality is objective, independent of any gods or immortal souls existing.  I believe that moral facts exist (and that science is the best way to find them), that morality is ultimately grounded on objective facts pertaining to human psychology, biology, sociology, neurology, and other facts about human beings, and thus that good and evil do exist in at least some sense.

In this post, I’m going to talk about Ivan’s influence on his half-brother Smerdyakov, who ends up confessing to Ivan that he murdered their father, Fyodor Pavlovich, as a result of Ivan’s philosophical influence on him.  In particular, Smerdyakov implicates Ivan as at least partially responsible for the murder, since Ivan had successfully convinced him that evil wasn’t possible in a world without God.  Ivan ends up becoming consumed with guilt, suffers a nervous breakdown, and is then incessantly taunted by a demonic apparition.  The hallucinations continue up until the moment Ivan learns, from his very religious brother Alyosha, that Smerdyakov has hanged himself.  This scene highlights a number of important topics beyond the moral realism I discussed in the first post, such as moral responsibility, free will, and even moral desert (I’ll discuss this last topic in my next post).

As I mentioned before, beyond the fact that we do not need a god to ground moral values, we also don’t need a god or an afterlife to motivate us to behave morally.  By cultivating moral virtues such as compassion, honesty, and reasonableness, and by analyzing each situation through a rational assessment of as many facts as are currently accessible, we can maximize our personal satisfaction and thus our chances of living a fulfilling life.  Behavioral causal factors that support this goal are “good” and those that detract from it are “evil.”  Aside from these labels though, we actually experience a more or less pleasing life depending on our behaviors, and therefore we have real-time motivations for behaving morally (such as acting in ways that conform to various cultivated virtues).  Aristotle claimed as much more than 2,000 years ago, and moral psychology has been confirming it time and time again.

Since we have evolved as a particular social species with a particular psychology, not only are there particular behaviors that best accomplish a fulfilling life, but there are also particular behavioral conditioning algorithms and punishment/reward systems that are best at modifying our behavior.  And this brings me to Smerdyakov.  By listening to Ivan and ultimately becoming convinced by his philosophical arguments, Smerdyakov seems to have been conditioned out of his previous views on moral responsibility.  In particular, he ended up adopting the belief that if God does not exist, then anything is permissible; and since he also rejected belief in God, he concluded that he could do whatever he wanted.
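As a brief aside, the kind of punishment/reward conditioning I have in mind can be made concrete with a toy model.  The following is a minimal sketch only: the function, the numbers, and the learning rate are all invented for illustration (it resembles a Rescorla-Wagner-style learning rule), and it is not a claim about how real brains, let alone Dostoyevsky’s characters, actually work.

```python
# Toy illustration of punishment/reward conditioning: a simple
# Rescorla-Wagner-style update rule. All names and numbers here are
# hypothetical, chosen only to make the idea concrete.

def update_propensity(propensity, outcome, learning_rate=0.1):
    """Nudge the tendency to repeat a behavior toward the outcome it produced.

    propensity: current tendency to choose the behavior (arbitrary units)
    outcome: positive for reward, negative for punishment
    """
    return propensity + learning_rate * (outcome - propensity)

# A behavior that starts out attractive but is punished on every trial:
p = 0.8
for trial in range(20):
    p = update_propensity(p, outcome=-1.0)

print(round(p, 3))  # ~ -0.78: the propensity has been driven well below zero
```

The point of the sketch is simply that repeated punishment systematically lowers the propensity to repeat a behavior, which is the mechanical sense in which behavior can be “reprogrammed.”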

One thing this turn of events highlights is that a number of different factors influence people’s behavior, and that people can be reasoned into doing (or not doing) all sorts of things.  As Voltaire once said, “Those who can make you believe absurdities can also make you commit atrocities.”  And I think what Ivan told Smerdyakov was in fact absurd (although it was a belief that I once held as well, not long after becoming an atheist).  For it is quite obviously false that anything is permissible if God doesn’t exist, for at least two types of reasons: pragmatic considerations and moral considerations (with the former overlapping the latter).  Pragmatic reasons include not wanting to be fined, incarcerated, or even executed by a criminal justice system that operates to minimize illegal behavior, and not wanting to be ostracized from your circle of friends, your social groups, or your community (risking the loss of beneficial reciprocity, safety nets, etc.).  Moral reasons include everything that detracts from your overall psychological well-being, the very thing that is needed to live a maximally fulfilling life given one’s circumstances.  Behaving in ways that degrade your sense of inner worth, your integrity, your self-esteem, and a good conscience will make you miserable compared to behaving in ways that positively impact these fundamental psychological goals.

Furthermore, this part of the story illustrates that we have a moral responsibility not only for ourselves and our own behavior, but also for how we influence the behavior of those around us, through what we say, how we treat them, and more.  This also reinforces the importance of social contract theory and how it pertains to moral behavior.  If we follow simple behavioral heuristics like the Golden Rule and mutual reciprocity, then we can work together to obtain and secure common social goods such as various rights, equality, environmental sustainability, democratic legislation (ideally based on open moral deliberation), and various social safety nets.  We can also punish those who violate the social contract, as we already do with the criminal justice system and various kinds of social ostracization.  While our system of checks is far from perfect, having some system that serves this purpose is necessary, because not everybody behaves in ways that are ultimately beneficial to themselves or to those around them.  People need to be conditioned to behave in ways that are more conducive to their own well-being and that of others, and if all reasonable efforts to achieve that fail, they may simply need to be quarantined through incarceration (for example, psychopaths or other violent criminals that society needs to be protected from, and who aren’t responding to rehabilitation efforts).

In any case, we have a responsibility to others, and that means we need to be careful what we say, as in the case of Ivan and his half-brother.  This includes how we talk about concepts like free will, moral responsibility, and moral desert (justice).  Telling people that all of their behaviors are determined and therefore don’t matter is not a good way to get them to behave in ways that are good for them or for others; nor is telling them that, because God doesn’t exist, their actions don’t matter.  Deterministic nihilism can drain people of much if not all of their motivation to put forward effort toward useful goals.  And both deterministic and atheistic moral nihilism are dangerous ideas that can lead some people to commit heinous crimes, such as mass shootings (or murdering their own father, as Smerdyakov did), because they cause people to think that no behavior matters more than any other.  Quite frankly, those nihilistic ideas are not only dangerous but also absurd.

While I’ve written a bit on free will in the past, my views have become more refined over the years, and my overall attitude towards the issue has been co-evolving alongside my views on morality.  The crux of the free will issue is that libertarian free will is logically impossible: our actions can never be free from both determinism and indeterminism (randomness), since one or the other must underlie how our universe operates (depending on which interpretation of quantum mechanics is correct).  Neither horn of this logical dichotomy gives us “the freedom to have chosen to behave differently given the same initial conditions, in a non-random way,” and therefore free will in this sense is logically impossible.  However, this does not mean that our behavior isn’t operating under sets of rules and patterns that we can discover and modify.  That is to say, we can effectively reprogram many of our behavioral tendencies using various forms of conditioning through punishment/reward systems; these are the same systems we use to teach children how to behave and to rehabilitate criminals.
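For readers who like arguments spelled out, the dichotomy can be put in a compact schematic form.  The notation here is my own shorthand (not anything from Barrett or Dostoyevsky): let D(a) mean “action a was determined,” R(a) mean “a was random,” and F(a) mean “a was performed with libertarian free will.”

```latex
% Schematic form of the dichotomy argument (my own notation):
%   D(a) = "a was determined", R(a) = "a was random",
%   F(a) = "a was performed with libertarian free will"
\begin{align*}
  &\forall a\,\big(D(a) \lor R(a)\big)              &&\text{(exhaustive dichotomy)}\\
  &\forall a\,\big(D(a) \rightarrow \lnot F(a)\big) &&\text{(determinism precludes doing otherwise)}\\
  &\forall a\,\big(R(a) \rightarrow \lnot F(a)\big) &&\text{(randomness precludes control)}\\
  &\therefore\ \forall a\,\lnot F(a)                &&\text{(libertarian free will is impossible)}
\end{align*}
```

Each premise handles one horn, so the conclusion follows by simple case analysis; the substantive claims are the two conditional premises, not the logic.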

The key thing to note here is that even if we don’t have libertarian free will, we still have a form of “free will” that matters (as philosophers like Daniel Dennett have argued numerous times), whereby we have the ability to be programmed and reprogrammed in certain ways, thus allowing us to take responsibility for our actions and to design ways of modifying future actions as needed.  We have more degrees of freedom than, say, a person who is insane, or a child, or a dog, and these degrees of freedom or autonomy (the flexibility we have in our decision-making algorithms) can be used as a rough guideline for determining how “morally responsible” a person is for their actions.  That is to say, the more easily a person can be conditioned out of a particular behavior, and the more rational decision-making is involved in governing that behavior, the more “free will” this person has in the sense that matters to a criminal justice system and to most of our everyday lives.
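To make that “rough guideline” concrete, here is a purely hypothetical sketch.  The inputs, the equal weighting, and the function name are all invented for illustration; neither Dennett nor anyone else endorses this particular formula, and any real assessment of responsibility would be far messier.

```python
# Purely hypothetical sketch of the "degrees of freedom" heuristic described
# above. The inputs and equal weighting are invented for illustration; this is
# not a real metric from moral psychology or criminal law.

def responsibility_score(conditionability, rational_control):
    """Both inputs are taken to lie in [0, 1].

    conditionability: how readily the behavior responds to punishment/reward
    rational_control: how much deliberate reasoning governs the behavior
    """
    return 0.5 * conditionability + 0.5 * rational_control

# A reflexive act by someone with a severe impairment scores low;
# a premeditated act by a rational adult scores high.
print(responsibility_score(0.10, 0.05))  # ~0.075: little "free will" in the relevant sense
print(responsibility_score(0.90, 0.95))  # ~0.925: high moral responsibility
```

The only point being made is structural: responsibility scales with how modifiable and how deliberate the behavior is, rather than being an all-or-nothing metaphysical property.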

In the end, it doesn’t matter whether someone thinks their behavior doesn’t matter because there’s no God or because they have no libertarian free will.  What needs to be pointed out is that we are able to behave (or be conditioned to behave) in ways that lead to more happiness, more satisfaction, and more fulfilling lives, and we are able to behave in ways that detract from this goal.  So which behaviors should we aim for?  I think the answer is obvious.  We also need to realize that ideas have consequences for others and their subsequent behaviors, so we need to be careful about which ideas we choose to spread and make sure they are put into a fuller context.  If people haven’t critically reflected on the consequences that may ensue from spreading their ideas, especially to others who may misunderstand them, then they should keep those ideas to themselves until they have reflected on them more.  This is something I’ve discovered and applied for myself as well, as I was once far less careful about this than I am now.  In the next post, I’m going to talk about the concept of moral desert and how it pertains to free will.  This will be relevant to the scene described above: Ivan’s demonic apparition that haunts him as a result of his guilt over Smerdyakov’s murder of their father, and the apparition’s disappearance once Ivan heard that Smerdyakov had taken his own life.