
Irrational Man: An Analysis (Part 2, Chapter 5: “Christian Sources”)


In the previous post in this series on William Barrett’s Irrational Man, we explored some of the Hebraistic and Hellenistic contributions to the formation and onset of existentialism within Western thought.  In this post, I’m going to look at chapter 5 of Barrett’s book, which examines some of the more specifically Christian sources of existentialism in the Western tradition.

1. Faith and Reason

We should expect there to be some significant overlap between Christianity and Hebraism, since the former began as a dissenting sect of the latter.  One concept worth looking at is the dichotomy of faith versus reason, where in Christianity the man of faith stands far above the man of reason.  Even though faith is heavily prized within Hebraism as well, there are still some differences between the two belief systems as they relate to faith, with these differences stemming from a number of historical contingencies:

“Ancient Biblical man knew the uncertainties and waverings of faith as a matter of personal experience, but he did not yet know the full conflict of faith with reason because reason itself did not come into historical existence until later, with the Greeks.  Christian faith is therefore more intense than that of the Old Testament, and at the same time paradoxical: it is not only faith beyond reason but, if need be, against reason.”

Once again we are brought back to Plato, and his concept of rational consciousness and the explicit use of reason; a concept that had never before been articulated or developed.  Christianity no longer had reason hiding under the surface or homogeneously mixed-in with the rest of our mental traits; rather, it was now an explicit and specific way of thinking and processing information that was well known and that couldn’t simply be ignored.  Not only did the Christian have to hold faith above this now-identified capacity of thinking logically, but they also had to explicitly reject reason if it contradicted any belief that was grounded on faith.

Barrett also mentions here the problem of trying to describe the concept of faith in a language of reason, and how the opposition between the two is particularly relevant today:

“Faith can no more be described to a thoroughly rational mind than the idea of colors can be conveyed to a blind man.  Fortunately, we are able to recognize it when we see it in others, as in St. Paul, a case where faith had taken over the whole personality.  Thus vital and indescribable, faith partakes of the mystery of life itself.  The opposition between faith and reason is that between the vital and the rational; and stated in these terms, the opposition is a crucial problem today.”

I disagree with Barrett to some degree here, as I don’t think it’s possible to recognize a quality in others (let alone a quality that others can recognize as well) that is entirely incommunicable.  On the contrary, if you and I can both recognize some quality, then we should be able to describe it, even if only in some rudimentary way.  This doesn’t mean that I think a description in words can do justice to every conception imaginable, but rather that we have, as human beings, an inherent ability to communicate fairly well whatever is involved in our common modes of living and shared experiences, even very complex concepts with somewhat fuzzy boundaries.

It may be that a thoroughly rational mind will not appreciate faith in any way, nor think it of any use, nor have had any personal experience with it; but that doesn’t mean such a mind can’t understand the concept, or what a mode of living dominated by faith would look like.  I also don’t think it’s fair to say that the opposition between faith and reason is that between the vital and the rational, even though some may indeed frame it this way.  If by vital, Barrett is alluding to what is essential to our psyche, then I don’t think he can be talking about faith, at least not in the sense of faith as belief without good reason.  But I’m willing to concede his point if instead by vital he is simply referring to an attitude that is spirited, vibrant, energetic, and thus involving some lively form of emotional expression.

He delves a little deeper into the distinction between faith and reason when he says:

“From the point of view of reason, any faith, including the faith in reason itself, is paradoxical, since faith and reason are fundamentally different functions of the human psyche.”

It seems that Barrett is guilty of a bit of an equivocation here between two uses of the word faith.  I’d actually go so far as to say that the two uses are best described in the way that Barrett himself alluded to in the last chapter: faith as trust and faith as belief.  On the one hand, we have faith as belief, where some belief or set of beliefs is maintained without good reason; and on the other hand, we have faith as trust, where, in the case of reason, our trust is grounded on the fact that its use leads to successful predictions about the causal structure of our experience.  So there isn’t really any paradox at all when it comes to “faith” in reason, because such a faith need not involve adopting some belief without good reason; there is, at most, a trust in reason based on the demonstrable and replicable success it has provided us in the past.
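To make that distinction a bit more concrete, here is a minimal sketch in Python; it’s my own illustration, not anything from Barrett, and the Beta-Bernoulli model and the particular numbers are assumptions chosen purely to show how faith-as-trust can be updated by a track record of successes, unlike faith-as-belief:

```python
def update_trust(successes, failures, prior_a=1.0, prior_b=1.0):
    """Posterior mean of a Beta(prior_a, prior_b) belief about a
    method's reliability, after observing its prediction record."""
    a = prior_a + successes
    b = prior_b + failures
    return a / (a + b)

print(update_trust(0, 0))    # 0.5  - no track record yet, so agnostic
print(update_trust(95, 5))   # ~0.94 - trust earned from replicable success
```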

Barrett is right, however, when he says that faith and reason are fundamentally different functions of the human psyche.  But this doesn’t mean that they are isolated from one another: for example, if one believes that having faith in God will help them secure an afterlife in heaven, and if one also desires an afterlife in heaven, then the use of reason can actually be employed to reinforce a faith in God (as it did for me, back when I was a Christian).  This doesn’t mean that faith in God is reasonable, but rather that when certain beliefs are isolated or compartmentalized in the right way (even in the case of imagining hypothetical scenarios), reason can be used to process a logical relation between them, even if those beliefs are themselves derived from illogical cognitive biases.  To see that this is true, one need only realize that an argument can be logically valid even if its premises are not true, in which case the argument is valid but unsound.
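That last point about validity versus soundness can even be checked mechanically.  Below is a small, self-contained Python sketch of my own (not from the book) that tests the validity of an argument form by brute force over truth assignments; an argument like “if the moon is made of cheese, it is edible; the moon is made of cheese; therefore it is edible” has this same valid form despite a false premise, which is exactly what makes it valid but unsound:

```python
from itertools import product

def is_valid(premises, conclusion):
    """An argument form is valid iff no truth assignment makes every
    premise true while making the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# Modus ponens: "P implies Q; P; therefore Q"
premises = [lambda p, q: (not p) or q,  # P -> Q (material conditional)
            lambda p, q: p]             # P
conclusion = lambda p, q: q             # Q

print(is_valid(premises, conclusion))   # True: the *form* is valid,
# regardless of whether its premises happen to be true (soundness)
```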

To be as charitable as possible with the concept of faith (at least, one that is more broadly construed), I will make one more point that I think is relevant here: having faith as a form of positivity or optimism in one’s outlook on life, given the social and personal circumstances one finds oneself in, is perfectly rational and reasonable.  It is well known that people tend to be happier and have more fulfilling lives if they steer clear of pessimism and try to stay as positive as possible.  One primary reason for this has to do with what I have previously called socio-psychological feedback loops (what I would also call an evidence-based form of Karma).  Thus, one can have an empirically demonstrable good reason to adopt an attitude reminiscent of faith, without having to sacrifice an epistemology that is reliable and that consistently tracks onto reality.

When it comes to the motivating factors that lead people to faith, existentialist thought can shed some light on what some of these factors may be.  If we consider Paul the Apostle, for example, whom Barrett mentioned earlier, Barrett also says:

“The central fact for his faith is that Jesus did actually rise from the dead, and so that death itself is conquered, which is what in the end man most ardently longs for.”

The finite nature of man, not only with regard to the intellect but also with respect to our lives in general, is something that existentialism both recognizes and treats as an integral starting point from which to build philosophically.  And I think it should come as no surprise that the fear of death in particular is likely a motivating factor for most if not all supernatural beliefs found within any number of religions, including the belief that one can be resurrected from the dead (as with St. Paul’s belief in Jesus), the belief in spirits, souls, or other disembodied minds, and any belief in an afterlife (including reincarnation).

We can see that faith is, in large part, a means of reducing the cognitive dissonance that results from the existential elements of our experience; it is a means of countering or rejecting a number of facts surrounding the contingencies and uncertainties of our existence.  From this, we can see that by examining what faith is compensating for, one can begin to extract what the existentialists eventually elaborated on.  In other words, within existentialism it is fully accepted that we can’t escape death (for example), and that we have to face death head-on (among other things) in order to live authentically; therefore having faith in overcoming death in any way is simply a denial of at least one burden willingly taken on by the existentialist.

Within early Christianity, we can also see some parallels between an existentialist like Kierkegaard and an early Christian author like Tertullian.  Barrett gives us an excerpt from Tertullian’s De Carne Christi:

“The Son of God was crucified; I am unashamed of it because men must needs be ashamed of it.  And the Son of God died; it is by all means to be believed, because it is absurd.  And He was buried and rose again; the fact is certain because it is impossible.”

This sounds a lot like Kierkegaard, despite the sixteen centuries separating the two figures, and despite the fact that Tertullian was writing when Christianity was first expanding and at its most aggressive, while Kierkegaard was writing toward the end of Christianity’s cultural dominance, when it had already drastically receded under the pressure of secularization and modernity.  In this quote of Tertullian’s, we can see a kind of submission to paradox, where it is because a particular proposition is irrational or unreasonable that it becomes an article of faith.  It’s not enough to say that faith is simply used to arrive at a belief that is unreasonable or irrational; rather, its very violation of reason is taken as a reason to embrace it and to believe that the claims of faith are in fact true.  So rather than merely seeing this as an example of irrationality, it makes for a good example of the kind of anti-rational attitudes that contributed to the existentialist revolt against some dominant forms of positivism; a revolt that didn’t really begin to precipitate until Kierkegaard’s time.

Although anti-rationalism is often associated with existentialism, I think one would be making an error to conflate the two or to assume that anti-rationalism is integral or necessary to existentialism; though it should be said that this depends on how exactly one defines both anti-rationalism (and by extension rationalism) and existentialism.  The two terms have a variety of definitions (as does positivism, with Barrett giving us one particularly narrow definition back in chapter one).  If by rationalism one means the very narrow view that rationality is the only means of acquiring knowledge, or the view that the only thing that matters in human thought or life is rationality, or even the view that humans are fundamentally or primarily rational beings, then I think it is fair to say that this is largely incompatible with existentialism, since the latter is often defined as a philosophy positing that humans are not primarily rational, and that subjective meaning (rather than rationality) plays a much more primary role in our lives.

However, if rationalism is simply taken as the view that reason and rational thought are integral components (even if not the only components) for discovering much of the knowledge there is about ourselves and the universe at large, then it is by all means perfectly compatible with most existentialist schools of thought.  But in order to remain compatible, some priority needs to be given to subjective experience in terms of where meaning is derived from, how our individual identity is established, and how we are to direct our lives and inform our system of ethics.

The importance of subjective experience, which became a primary assumption motivating existentialist thought, was also appreciated by early Christian thinkers such as St. Augustine.  In fact, Barrett goes so far as to say that the interiorization of experience and the primary focus on the individual over the collective was unknown to the Greeks and didn’t come about until Christianity had begun to take hold in Western culture:

“Where Plato and Aristotle had asked the question, What is man?, St. Augustine (in the Confessions) asks, Who am I? And this shift is decisive.  The first question presupposed a world of objects, a fixed natural and zoological order, in which man was included; and when man’s precise place in that order had been found, the specifically differentiating characteristic of reason was added.  Augustine’s question, on the other hand, stems from an altogether different, more obscure and vital center within the questioner himself: from an acutely personal sense of dereliction and loss, rather than from the detachment with which reason surveys the world of objects in order to locate its bearer, man, zoologically within it.”

So rather than looking at man as some kind of well-defined abstraction within a categorical hierarchy of animals and objects, and one bestowed with a unique faculty of rational consciousness, man is looked at from the inside, from the perspective of, and identified as, an embodied individual with an experience that is deeply rooted and inherently emotional.  And in Augustine’s question, we also see an implication that man can’t be defined in a way that the Greeks attempted to do because in asking the question, Who am I?, man became separated from the kind of zoological order that all other animals adhered to.  It is the unique sense of self and the (often foreboding) awareness of this self then, as opposed to the faculty of reason, that Augustine highlighted during his moment of reflection.

Though Augustine was cognizant of the importance of exploring the nature of our personal, human existence (albeit, for him, with a focus on religious experience in particular), he also explored aspects of human existence on the cosmic scale.  But when he took up this cosmic line of inquiry, he left the nature of personal lived existence by the wayside and instead went down a Neo-Platonist path of theodicy.  Augustine was after all a formal theologian, one who tried to come to grips with the problem of evil in the world.  But he employed Greek metaphysics for this task, mixing logic with irrational faith-based claims about the benevolence of God, in order to show that God’s world wasn’t evil at all.  Augustine did this by eliminating evil from his conception of existence, such that all evil was simply a lack of being, hence a form of non-being, and therefore non-existent.  This was supposed to be our consolation: ignore any apparent evil in the world because we’re told it doesn’t really exist, all in the name of justifying the cosmos as good and thus maintaining the view that God is good.

This is such a shame in the end, for in Augustine’s attempt at theodicy, he simply downplayed and ignored the abominable and precarious aspects of our existence, which are as real a part of our experience as anything could be.  Perhaps Augustine was motivated to use rationalism here in order to feel comfort and safety in a world where many feel alienated and alone.  But as Barrett says:

“…reason cannot give that security; if it could, faith would be neither necessary nor so difficult.  In the age-old struggle between the rational and the vital, the modern revolt against theodicy (or, equally, the modern recognition of its impossibility) is on the side of the vital…”

And so once again we see that a primary motivation for faith is that many cannot get the consolation they long for through the use of reason.  They can’t psychologically deal with the way the world really is, and so they either use faith to believe the world is some other way, or they use reason together with some number of faith-based premises in evaluating our apparent place in the world.  St. Augustine actually thought that he could harmonize faith and reason (or what Barrett referred to as the vital and the rational), and this set the stage for the next thousand years of Christian thought.  Once faith claims became more explicit in the Church’s various articles of dogma, one was left to exercise rationality as one saw fit, so long as it remained within the confines of that faith and dogma.  And this of course led many medieval thinkers to mistake faith for reason (in many cases at least), reinforced by the fact that their use of reason was operating on faith-based premises.

The problematic relation between the vital and the rational didn’t disappear even after many a philosopher assumed the two forces were in complete harmony and agreement; instead the clash resurfaced in a debate between Voluntarism and Intellectualism, with our attention now turned toward St. Thomas Aquinas.  As an intellectualist, St. Thomas argued that the intellect in man is prior to the will because the intellect determines the will, since we can only desire what we know.  Duns Scotus, on the other hand, a voluntarist, responded that the will determines which ideas the intellect turns toward, and thus the will ends up determining what the intellect comes to know.

As an aside, I think Scotus was more likely correct here, because while the intellect may inform the will (and thus help to direct it), there is something far more fundamental and unconscious at play that drives our engagement with the world as organisms evolved to survive.  For example, we have a curiosity for both novelty (seeking new information) and familiarity (a maximal understanding of incoming information), along with basic biologically-driven desires for satisfaction and homeostasis.  These drives simply don’t depend on the intellect even though they can make use of it, and they will continue to operate even if the intellect does not.  I also disagree with St. Thomas’ claim that one can only desire that which one already knows.  One can desire knowledge for its own sake without knowing exactly what knowledge is yet to come (again from an independent drive, which we could call the will), and one can also desire a change in the world from the way one currently knows it to be to some other as yet unspecified way (e.g. to desire a world without a specific kind of suffering, even though one may not know exactly what that would be like, having never lived such a life before).

The crux of the debate between the primacy of the will versus the intellect can perhaps be re-framed accordingly:

“…not in terms of whether will is to be given primacy over the intellect, or the intellect over the will (these functions being after all but abstract fragments of the total man), but rather in terms of the primacy of the thinker over his thoughts, of the concrete and total man himself who is doing the thinking…the fact remains that Voluntarism has always been, in intention at least, an effort to go beyond the thought to the concrete existence of the thinker who is thinking that thought.”

Here again I see the concept of an embodied and evolved organism and a reflective self at play, where the contents of consciousness cannot be taken on their own, nor as primary, but rather they require a conscious agent to become manifest, let alone to have any causal power in the world.

2. Existence vs. Essence

There is a deeper root to the problem underlying the debate between St. Thomas and Scotus and it has to do with the relation between essence and existence; a relation that Sartre (among others) emphasized as critical to existentialism.  Whereas the essence of a thing is simply what the thing is (its essential properties for example), the existence of a thing simply refers to the brute fact that the thing is.  A very common theme found throughout Sartre’s writings is the basic contention that existence precedes essence (though, for him, this only applies to the case of man).  Barrett gives a very simplified explanation of such a thesis:

“In the case of man, its meaning is not difficult to grasp.  Man exists and makes himself to be what he is; his individual essence or nature comes to be out of his existence; and in this sense it is proper to say that existence precedes essence.  Man does not have a fixed essence that is handed to him ready-made; rather, he makes his own nature out of his freedom and the historical conditions in which he is placed.”

I think this concept also makes sense when we put it into the historical context going back to ancient Greece.  Whereas the Greeks tried to put humanity within a zoological order and categorize us by some essential qualities, the existentialists that later came on the scene rejected such an idea almost as a kind of categorical error.  It was simply not possible to classify human beings like you could inanimate objects or other animals that didn’t have the behavioral complexity and a will to define themselves as we do.

Barrett also explains that the essence versus existence relation breaks down into two different but related questions:

1) Does existence have primacy over essence, or the reverse?

2) In actual existing things is there a real distinction between the two?  Or are they merely different points of view that the mind takes toward the same existing thing?

In an attempt at answering these questions, I think it’s reasonable to say that essence has more to do with potentiality (a logically possible set of abstractions or properties) and existence has more to do with actuality (a logically and physically possible past or present state of being).  I also think that existence is necessary in order for any essence to be real in any way, and not because an actual object needs to exist in order to instantiate a physical instance of some specific essence, but because any essence is merely an abstraction created by a conscious agent who inferred it in the first place.  Quite simply, there is no Platonic realm for the essence to “exist” within and so it needs an existing consciousness to produce it.  So I would say that existence has primacy over essence since an existent is needed for an essence to be conceived, let alone instantiated at all.

As for the second question, I think there is a real distinction between the two and that they are different points of view that the mind takes toward the same existing thing.  I don’t see this as an either/or dichotomy.  The mind can certainly acknowledge the existence of some object, animal, or person, without giving much thought to what its essence is (aside from its being distinct enough to consider it a thing), but it can never acknowledge the essence of an actual object, animal, or person without also acknowledging its existence.  There’s also something relevant to be said about how our minds differentiate between an actual perception and an imagined one, even between an actual life and an imagined one, or an actual person and an imagined one; something that we do all the time.

On the other side of this issue, within the debate between essentialism and existentialism, are the questions of whether there are any essential qualities of human beings that make them human, and if so, whether any of these essential qualities are fixed.  Most people are familiar with the idea of human nature, whereby there are some biologically-grounded innate predispositions that affect our behavior in some ways.  We are evolved animals after all, and the only way for evolution to work is to have genes providing different phenotypic effects on the bodies and behavior of a population of organisms.  Since a large fraction of our genes (by some estimates, roughly half) are involved in our brain’s development, its basic structure, and its functionality, it should seem obvious that we’re bound to have some set of innate behavioral tendencies as well as some biological limitations on the space of possibilities for our behavior.

Thus, any set of innate behavioral tendencies that are shared by healthy human beings would constitute our human nature.  And furthermore, this behavioral overlap would be a fixed attribute of human nature insofar as the genes coding for that innate behavioral overlap don’t evolve beyond a certain degree.  Human nature is simply an undeniable fact about us grounded in our biology.  If our biology changes enough, then we will eventually cease to be human, or if we still call ourselves “human” at that point in time then we will cease to have the same nature since we will no longer be the same species.  But as long as that biological evolution remains under a certain threshold, we will have some degree of fixity in our human nature.

But unlike most other organisms on earth, human beings have an incredibly complex brain as well as a number of culturally inherited cognitive programs that provide us with a seemingly infinite number of possible behaviors and subsequent life trajectories.  And I think this fact has fueled some of the disagreement between the existentialists and the essentialists.  The essentialists (or many of them at least) have maintained that we have a particular human nature, which I think is certainly true.  But, as per the existentialists, we must also acknowledge just how vast our space of possibilities really is and not let our human nature be the ultimate arbiter of how we define ourselves.  We have far more freedom to choose a particular life project and way of life than one would be led to believe given some preconceived notion of what is essentially human.

3. The Case of Pascal

Science brought about a dramatic change to Western society, effectively carrying us out of the Middle Ages within a century; but this change also created the very environment that made modern existentialism possible.  Cosmology, for example, led the 17th-century mathematician and religious thinker Blaise Pascal to say that “the silence of these infinite spaces frightens me,” the spaces in question being those of the cosmos.  In this statement he was expressing the general reaction of humanity to the world that science had discovered; a world that made man feel homeless and insignificant.

As a religious man, Pascal was also seeking to reconcile this world discovered by science with his own faith.  This was a difficult task, but he found some direction in looking at the wretchedness of the human condition:

“In Pascal’s universe one has to search much more desperately to find any signposts that would lead the mind in the direction of faith.  And where Pascal finds such a signpost, significantly enough, is in the radically miserable condition of man himself.  How is it that this creature who shows everywhere, in comparison with other animals and with nature itself, such evident marks of grandeur and power is at the same time so feeble and miserable?  We can only conclude, Pascal says, that man is rather like a ruined or disinherited nobleman cast out from the kingdom which ought to have been his (his fundamental premise is the image of man as a disinherited being).”

Personally, I see this as indicative of our being an evolved species, albeit one with an incredibly rich level of consciousness and intelligence.  In some sense our minds’ inherent capacities have a potential far beyond what most humans ever come close to actualizing, and this may create a feeling of having lost something we deserve; some possible yet unrealized world that would make more sense for us to inhabit given the level of power and potential we possess.  And perhaps this lends itself to a feeling of angst or a lack of fulfillment as well.

As Pascal himself said:

“The natural misfortune of our mortal and feeble condition is so wretched that when we consider it closely, nothing can console us.” 

Furthermore, Barrett reminds us that Pascal described the human tendency to escape this close consideration of our condition through the two “sovereign anodynes” of habit and diversion:

“Both habit and diversion, so long as they work, conceal from man “his nothingness, his forlornness, his inadequacy, his impotence and his emptiness.”  Religion is the only possible cure for this desperate malady that is nothing other than our ordinary mortal existence itself.”

Barrett describes our cultivation of various habits and distractions as a kind of concealment of the ever-narrowing space of possibilities in our lives, as each day passes us by with some set of hopes and dreams forever lost to the past.  I think it would be fair to say, however, that the psychological benefit we gain from many of our habits and distractions warrants our having them in the first place.  As an analogy: if my doctor informs me that I’m going to die of a terminal illness in a few months (thus drastically narrowing the future space of possibilities in my life), is there really a problem with my trying to keep my mind off such a morbid fate, so that I can make the most of the time I have left?  I certainly wouldn’t advocate burying one’s head in the sand either, nor having irrational expectations, nor living in denial; so I think my remaining time would be best used if I don’t completely forget that my timeline is coming to an end, because then I can more adequately appreciate the time that remains.

Moreover, I don’t agree that religion is the only possible cure for dealing with our “ordinary mortal existence”.  I think that psychological and moral development are the ultimate answers here, where a critical look at oneself and an ongoing attempt to become a better version of oneself in order to maximize one’s life fulfillment is key.  The world is whatever way it is, but that doesn’t change the fact that certain attitudes toward that world, and certain behaviors, can make our experience of the world as it truly is better or worse.  Falling back on religion to console us is, to borrow Pascal’s own terms, nothing but another habit and diversion, one where we sacrifice knowing and facing the truth of our condition for a kind of ignorant bliss that relies on false beliefs about the world.  And I don’t think we can live authentically unless we face the condition we’re in as a species, and more importantly as individuals, by trying to believe as many true things and as few false things as possible.

When it comes to the human mind and how we process information about our everyday lives or aspects of our overall condition, it can be a rather messy process involving very complicated concepts, many of which have fuzzy boundaries and so can’t be handled the way we handle analyses relying strictly on logic.  In some cases of analyzing our lives, we don’t know exactly what the premises could even be, let alone how to arrange them within a logical structure to arrive at some kind of certain conclusion.  And this differs a lot from other areas of inquiry like, say, mathematics.  As Pascal delved deeper into the study of our human condition, he recognized this difference and posited an interesting distinction between two kinds of mind suited to these different contexts: the mathematical mind and the intuitive mind.  Within a field of study such as mathematics, we make use of our mind’s ability to logically comprehend various distinct and well-defined ideas, whereas in other, more concrete domains of our experience, we rely quite heavily on intuition.

Not only is logic not very useful in the more concrete and complex domains of our lives, but in many cases it may actually get in the way of seeing things clearly since our intuition, though less reliable than logic, is more up to the task of dealing with complexities that haven’t yet been parsed out into well-defined ideas and concepts.  Barrett describes this a little differently than I would when he says:

“What Pascal had really seen, then, in order to have arrived at this distinction was this: that man himself is a creature of contradictions and ambivalences such as pure logic can never grasp…By delimiting a sphere of intuition over against that of logic, Pascal had of course set limits to human reason.”

One is certainly limited in the use of logic when the concepts involved are probabilistic (rather than well-defined or “black and white”), and there are significant limitations when our attitudes toward those concepts vary contextually.  We are, after all, very complex creatures with a number of competing goals and a prioritization of those goals that varies over time.  And of course we have a number of value judgments and emotional dispositions that motivate our actions in often unpredictable ways.  So it seems perfectly reasonable to me that one should acknowledge the limits we have in our use of logic and reason.
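To illustrate the difference between crisp and graded concepts, here is a minimal Python sketch; it’s my own toy example rather than anything from Pascal or Barrett, and the concept “tall” with a 180 cm midpoint is an arbitrary assumption:

```python
import math

def is_tall_crisp(height_cm):
    """Classical logic: a sharp, all-or-nothing cutoff."""
    return height_cm >= 180

def tallness_graded(height_cm, midpoint=180.0, spread=8.0):
    """A graded membership in [0, 1] (logistic curve): no sharp
    boundary, closer to how fuzzy concepts behave in ordinary life."""
    return 1.0 / (1.0 + math.exp(-(height_cm - midpoint) / spread))

for h in (170, 179, 180, 195):
    print(h, is_tall_crisp(h), round(tallness_graded(h), 2))
# Moving from 179 cm to 180 cm flips the crisp verdict entirely,
# while the graded membership barely changes (0.47 -> 0.50).
```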

Barrett makes another interesting claim about the limits of reason when he says:

“Three centuries before Heidegger showed, through a learned and laborious exegesis, that Kant’s doctrine of the limitations of human reason really rests on the finitude of our human existence, Pascal clearly saw that the feebleness of our reason is part and parcel of the feebleness of our human condition generally.  Above all, reason does not get at the heart of religious experience.”

And I think the last sentence is most important here: the limitations of reason include an inability to examine all that matters and all that is meaningful within religious experience.  Even though I am a kind of champion for reason, in the sense that I find it to be in short supply when it matters most, and in the sense that I often advertise the dire need for critical thinking skills and for more education to combat our cognitive biases, I have never assumed that reason could accomplish the arduous task of dissecting religious experience in some reductionist way while retaining or accounting for everything of value within it.  I have, however, used reason to analyze various religious beliefs and claims about the world in order to discover their consequences for, or incompatibilities with, a number of other beliefs.

Theologians do something similar in their attempts to prove the existence of God, by beginning with some set of presuppositions (about God or the supernatural), and then applying reason to those faith-based premises to see what would result if they were in fact true.  And Pascal saw this mental exercise as futile and misguided as well because it misses the entire point of religion:

“In any case, God as the object of a rigorous demonstration, even supposing such a demonstration were forthcoming, would have nothing to do with the living needs of religion.”

I admire the fact that Pascal was honest here about what really matters most when it comes to religion and religious beliefs.  It doesn’t matter whether or not God exists, or whether any particular religious claim is actually true or false; rather, it is the living need that religion serves which ultimately motivates one to adhere to it.  Thus, theology doesn’t really matter to him, because those who want to be convinced by the arguments for God’s existence will be convinced simply because they desire the conclusion, insofar as they believe it will give them consolation from the human condition they find themselves in.

To add something to the bleak picture of just how contingent our existence is, Barrett mentions a personal story that Pascal shared about his having had a brush with death:

“While he was driving by the Seine one day, his carriage suddenly swerved, the door was flung open, and Pascal almost catapulted down the embankment to his death.  The arbitrariness and suddenness of this near accident became for him another lightning flash of revelation.  Thereafter he saw Nothingness as a possibility that lurked, so to speak, beneath our feet, a gulf and an abyss into which we might tumble at any moment.  No other writer has expressed more powerfully than Pascal the radical contingency that lies at the heart of human existence; a contingency that may at any moment hurl us all unsuspecting into non-being…The idea of Nothingness or Nothing had up to this time played no role at all in Western philosophy…Nothingness had suddenly and drastically revealed itself to him.”

And here it is: the concept of Nothingness, which has truly grounded a bulk of the philosophical constructs found within existentialism.  Pascal also saw that one’s inevitable death isn’t the whole picture of our contingency, but only a part of it; it is our having been born at a particular time and place, and within a certain culture, that vastly reduces the number of possible life trajectories we can take, and thus vastly reduces our freedom in establishing our own identity.  Our life begins with contingency and ends with it also.

Pascal also acknowledged our existence as lying in the middle of the universe, between the infinitesimal and microscopic on one side and the seemingly infinite cosmological scale on the other.  Within this middle position, he saw man as being “an All in relation to Nothingness, a Nothingness in relation to the All.”  I think it’s also worth pointing out that it is our evolutionary history, and the “middle position” we evolved within, that cause the drastic misalignment between the world we were in some sense meant to understand and the micro and cosmic scales we’ve discovered in the last several hundred years.

Aside from any feelings of alienation and homelessness that these different scales have precipitated, our intuition simply didn’t evolve with any need to understand existence at the infinitesimal quantum level or at the cosmic relativistic level, which is why these discoveries in physics are difficult to understand even for experts in the field.  We should expect these discoveries to be vastly perplexing and counter-intuitive, for our consciousness evolved to deal with life at a very finite scale; yet another example of our finitude as human beings.

At the same time, the very fact that we gained so much information about our world through these kinds of discoveries, including major advancements in mathematics, also promoted the assumption that human nature could be perfected through the universal application of reason.

I’ll finish the post on this chapter with one final quote from Barrett that I found insightful:

“Poets are witnesses to Being before the philosophers are able to bring it into thought.  And what these particular poets were struggling to reveal, in this case, were the very conditions of Being that are ours historically today.  They were sounding, in poetic terms, the premonitory chords of our own era.”

It is the voice of the poets that we first hear lamenting the Enlightenment and what reason seemed to have done, or at least what it had begun to do.  They were among the most important precursors to the later existentialists and philosophers who would soon begin to process (in far more detail and with far more precision) the full effects of modernity on the human psyche.


The Experientiality of Matter


If there’s one thing that Nietzsche advocated for, it was to eliminate dogmatism and absolutism from one’s thinking.  And although I generally agree with him here, I do think that there is one exception to this rule.  One thing that we can be absolutely certain of is our own conscious experience.  Nietzsche actually had something to say about this (in Beyond Good and Evil, which I explored in a previous post), where he responds to Descartes’ famous adage “cogito ergo sum” (the very adage associated with my blog!), and he basically says that Descartes’ conclusion (“I think therefore I am”) shows a lack of reflection concerning the meaning of “I think”.  He wonders how he (and by extension, Descartes) could possibly know for sure that he was doing the thinking rather than the thought doing the thinking, and he even considers the possibility that what is generally described as thinking may actually be something more like willing or feeling or something else entirely.

Despite Nietzsche’s criticism against Descartes in terms of what thought is exactly or who or what actually does the thinking, we still can’t deny that there is thought.  Perhaps if we replace “I think therefore I am” with something more like “I am conscious, therefore (my) conscious experience exists”, then we can retain some core of Descartes’ insight while throwing out the ambiguities or uncertainties associated with how exactly one interprets that conscious experience.

So I’m in agreement with Nietzsche in the sense that we can’t be certain of any particular interpretation of our conscious experience, including whether or not there is an ego or a self (which Nietzsche actually describes as a childish superstition similar to the idea of a soul), nor can we make any certain inferences about the world’s causal relations stemming from that conscious experience.  Regardless of these limitations, we can still be sure that conscious experience exists, even if it can’t be ascribed to an “I” or a “self” or any particular identity (let alone a persistent identity).

Once we’re cognizant of this certainty, and if we’re able to crawl out of the well of solipsism and eventually build up a theory about reality (for pragmatic reasons at the very least), then we must remain aware of the priority of consciousness in any resultant theory we construct about reality, with regard to its structure or any of its other properties.  Personally, I believe that some form of naturalistic physicalism (a realistic physicalism) is the best candidate for an ontological theory that is the most parsimonious and explanatory for all that we experience in our reality.  However, most people that make the move to adopt some brand of physicalism seem to throw the baby out with the bathwater (so to speak), whereby consciousness gets eliminated by assuming it’s an illusion or that it’s not physical (therefore having no room for it in a physicalist theory, aside from its neurophysiological attributes).

Although I used to feel differently about consciousness (and its relationship to physicalism), in that I once thought it plausible for consciousness to be some kind of illusion, upon further reflection I’ve come to realize that this was a rather ridiculous position to hold.  Consciousness can’t be an illusion in the proper sense of the word, because the experience of consciousness is real.  Even if I’m hallucinating, such that my perceptions don’t directly correspond with the actual incoming sensory information transduced through my body’s sensory receptors, we can only say that the perceptions are illusory insofar as they don’t directly map onto that incoming sensory information.  But I still can’t say that having these experiences is itself an illusion.  And this is because consciousness is self-evident and experiential, in that it constitutes whatever is experienced no matter what that experience consists of.

As for my thoughts on physicalism, I came to realize that positing consciousness as an intrinsic property of at least some kinds of physical material (analogous to a property like mass) allows us to avoid having to call consciousness non-physical.  If it is simply an experiential property of matter, that doesn’t negate its being a physical property of that matter.  It may be that we can’t access this property in such a way as to evaluate it with external instrumentation, like we can for all the other properties of matter that we know of such as mass, charge, spin, or what-have-you, but that doesn’t mean an experiential property should be off limits for any physicalist theory.  It’s just that most physicalists assume that everything can or has to be reducible to the externally accessible properties that our instrumentation can measure.  And this suggests that they’ve simply assumed that the physical can only include externally accessible properties of matter, rather than both internally and externally accessible properties of matter.

Now it’s easy to see why science might push philosophy in this direction because its methodology is largely grounded on third-party verification and a form of objectivity involving the ability to accurately quantify everything about a phenomenon with little or no regard for introspection or subjectivity.  And I think that this has caused many a philosopher to paint themselves into a corner by assuming that any ontological theory underlying the totality of our reality must be constrained in the same way that the physical sciences are.  To see why this is an unwarranted assumption, let’s consider a “black box” that can only be evaluated by a certain method externally.  It would be fallacious to conclude that just because we are unable to access the inside of the box, that the box must therefore be empty inside or that there can’t be anything substantially different inside the box compared to what is outside the box.

We can analogize this limitation of studying consciousness with our ability to study black holes within the field of astrophysics, where we’ve come to realize that accessing any information about their interior (aside from how much mass there is) is impossible to do from the outside.  And if we managed to access this information (if there is any) from the inside by leaping past its outer event horizon, it would be impossible for us to escape and share any of that information.  The best we can do is to learn what we can from the outside behavior of the black hole in terms of its interaction with surrounding matter and light and infer something about the inside, like how much matter it contains (e.g. we can infer the mass of a black hole from its outer surface area).  And we can learn a little bit more by considering what is needed to create or destroy a black hole, thus creating or destroying any interior qualities that may or may not exist.
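For the inference mentioned parenthetically above (a black hole’s mass from its outer surface area), here is a minimal Python sketch using the standard Schwarzschild relations for a non-rotating, uncharged black hole; the particular constants and the solar-mass sanity check are my own additions for illustration:

```python
import math

G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8     # speed of light (m/s)

def mass_from_horizon_area(area_m2):
    """Infer a non-rotating black hole's mass from its event horizon
    area, via A = 4*pi*r_s**2 and r_s = 2*G*M/c**2 (Schwarzschild)."""
    r_s = math.sqrt(area_m2 / (4 * math.pi))
    return r_s * c**2 / (2 * G)

# Sanity check: a one-solar-mass black hole has r_s of roughly 2.95 km.
M_sun = 1.989e30                       # kg
r_s = 2 * G * M_sun / c**2             # Schwarzschild radius, ~2953 m
area = 4 * math.pi * r_s**2            # horizon area
print(mass_from_horizon_area(area) / M_sun)  # ~1.0
```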

A black hole can only form from certain configurations of matter, particularly aggregates above a certain mass and density.  And it can only be destroyed by starving it: deprived of any new matter, it will slowly evaporate entirely into Hawking radiation, destroying anything that was on the inside in the process.  So we can infer that any internal qualities it does have, however inaccessible they may be, can be brought into and out of existence by certain physical processes.

Similarly, we can infer some things about consciousness by observing one’s external behavior, including inferring some of the conditions that can create, modify, or destroy that type of consciousness; but we are unable to know what it’s like to be on the inside of that system once it exists.  We’re only able to know about the inside of our own conscious system, where we are in some sense inside our own black hole, with nobody else able to access this perspective.  And I think it is easy enough to imagine that certain configurations of matter simply have an intrinsic, externally inaccessible experiential property, just as certain configurations of matter lead to the creation of a black hole with its own externally inaccessible and qualitatively unknown internal properties.  The fact that we can’t access the black hole’s interior with a strictly external method doesn’t mean we should assume that whatever properties may exist inside it are fundamentally non-physical, just as we wouldn’t consider alternate dimensions (such as those predicted in M-theory/string theory) that we can’t physically access to be non-physical.  Perhaps one or more of these inaccessible dimensions (if they exist) is what accounts for an intrinsic experiential property within matter; this is entirely speculative and need not be true for the previous points to hold, but it’s an interesting thought nevertheless.

Here’s a relevant quote from the philosopher Galen Strawson, where he outlines what physicalism actually entails:

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an ‘inference to the best explanation’… Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. ‘And were the inmost essence of things laid open to us’ I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism… So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of ‘substance dualism’… Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?
While I don’t believe that all matter has a mind per se, because a mind is generally conceived as being a complex information processing structure, I think it is likely that all matter has an experiential quality of some kind, even if most instantiations of it are entirely unrecognizable as what we’d generally consider to be “consciousness” or “mentality”.  I believe that the intuitive gap here is born from the fact that the only minds we are confident exist are those instantiated by a brain, which has the ability to make an incredibly large number of experiential discriminations between various causal relations, thus giving it a capacity that is incredibly complex and not observed anywhere else in nature.  On the other hand, a particle of matter on its own would be hypothesized to have the capacity to make only one or two of these kinds of discriminations, making it unintelligent and thus incapable of brain-like activity.  Once a person accepts that an experiential quality can come in varying degrees from say one experiential “bit” to billions or trillions of “bits” (or more), then we can plausibly see how matter could give rise to systems that have a vast range in their causal power, from rocks that don’t appear to do much at all, to living organisms that have the ability to store countless representations of causal relations (memory) allowing them to behave in increasingly complex ways.  And perhaps best of all, this approach solves the mind-body problem by eliminating the mystery of how fundamentally non-experiential stuff could possibly give rise to experientiality and consciousness.
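To make the “bits” talk slightly more concrete, here is a toy Python sketch of my own (not a claim about any established theory of consciousness): if we treat a system’s experiential capacity as the number of distinct discriminations it can make, we can express that capacity in bits, and the gulf between a lone particle and a brain becomes a difference of degree on a staggering scale.  All the numbers below are illustrative assumptions.

```python
import math

def experiential_bits(distinguishable_states):
    """Capacity, in bits, of a system that can discriminate among the
    given number of distinct states (log base 2)."""
    return math.log2(distinguishable_states)

print(experiential_bits(2))       # 1.0  - a single binary discrimination,
                                  #        like the lone particle above
print(experiential_bits(2**40))   # 40.0 - a system discriminating among
                                  #        over a trillion states
```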

It’s Time For Some Philosophical Investigations


Ludwig Wittgenstein’s Philosophical Investigations is a nice piece of work where he attempts to explain his views on language and the consequences of this view on various subjects like logic, semantics, cognition, and psychology.  I’ve mentioned some of his views very briefly in a couple of earlier posts, but I wanted to delve into his work in a little more depth here and comment on what strikes me as most interesting.  Lately, I’ve been looking back at some of the books I’ve read from various philosophers and have been wanting to revisit them so I can explore them in more detail and share how they connect to some of my own thoughts.  Alright…let’s begin.

Language, Meaning, & Their Probabilistic Attributes

He opens his Philosophical Investigations with a quote from St. Augustine’s Confessions that describes how a person learns a language.  St. Augustine believed that this process involved simply learning the names of objects (for example, by someone else pointing to the objects so named) and then stringing them together into sentences; Wittgenstein points out that this is true to some trivial degree, but that it overlooks a much more fundamental relationship between language and the world.  For Wittgenstein, the meaning of a word cannot simply be attached to an object the way a name can.  The meaning of a word or concept has much more of a fuzzy boundary, as it depends on a breadth of context and associations with other concepts.  He analogizes this plurality in the meanings of words with the resemblances between members of a family.  While there may be some resemblance between different uses of a word or concept, we aren’t able to formulate a strict definition that fully describes this resemblance.

One problem then, especially within philosophy, is that many people assume the meaning of a word or concept is fixed, with sharp boundaries (just like the fixed structure of the words themselves).  Wittgenstein wants to disabuse people of this false notion (much as Nietzsche tried to do before him) so that they can stop misusing language, as he believed this misuse was the cause of many (if not all) of the major problems that had cropped up in philosophy over the centuries, particularly in metaphysics.  Since meaning is actually somewhat fluid and can’t be accounted for by any fixed structure, Wittgenstein thinks that whatever meaning we can attach to these words is ultimately going to be determined by how those words are used.  Since he ties meaning to use, and since this use is something occurring in our social forms of life, it has an inextricably external character.  Thus, the only way to determine whether someone else has a particular understanding of a word or concept is through their behavior in response to, or in association with, the use of the word(s) in question.  This is especially important in the case of ambiguous sentences, which Wittgenstein explores to some degree.

Probabilistic Shared Understanding

Some of what Wittgenstein is trying to point out here are what I like to refer to as the inherently probabilistic attributes of language.  And it seems to me to be probabilistic for a few different reasons, beyond what Wittgenstein seems to address.  First, there is no guarantee that what one person means by a word or concept exactly matches the meaning from another person’s point of view, but at the very least there is almost always going to be some amount of semantic overlap (possibly even 100% in some cases) between the two individuals’ intended meanings, and so there is going to be some probability that the speaker and the listener do in fact share a complete understanding.  It seems reasonable to argue that simpler concepts will have a higher probability of complete semantic overlap, whereas more complex concepts are more likely to leave a gap in that shared understanding.  And I think this is true even if we can’t actually calculate what any of these probabilities are.

Now my use of the word meaning here differs from Wittgenstein’s because I am referring to something that is not exclusively shared by all parties involved and I am pointing to something that is internal (a subjective understanding of a word) rather than external (the shared use of a word).  But I think this move is necessary if we are to capture all of the attributes that people explicitly or implicitly refer to with a concept like meaning.  It seems better to compromise with Wittgenstein’s thinking and refer to the meaning of a word as a form of understanding that is intimately connected with its use, but which involves elements that are not exclusively external.

We can justify this version of meaning through an example.  If I help teach you how to ride a bike and explain that this activity is called biking or to bike, then Wittgenstein's conception of meaning will likely account for our shared understanding, and I'd agree with him that this is perhaps the most important function of language.  But it may be the case that you come away with the understanding that biking is an activity that can only happen on Tuesdays, because that happened to be the day that I taught you how to ride.  Though I never intended for you to understand biking in this way, there was no immediate way for me to infer that you had this misunderstanding on the day I was teaching you.  I could only learn of this fact if you explicitly explained your understanding of the term to me in enough detail, or if I asked you additional questions, such as whether you'd like to bike on a Wednesday (for example), with you answering "I don't know how to do that as that doesn't make any sense to me."  Wittgenstein doesn't account for this gap in understanding in his conception of meaning, and I think this makes for a far less useful conception.

Now I think that Wittgenstein is still right in the sense that the only way to determine someone else’s understanding (or lack thereof) of a word is through their behavior, but due to chance as well as where our attention is directed at any given moment, we may never see the right kinds of behavior to rule out any number of possible misunderstandings, and so we’re apt to just assume that these misunderstandings don’t exist because language hides them to varying degrees.  But they can and in some cases do exist, and this is why I prefer a conception of meaning that takes these misunderstandings into account.  So I think it’s useful to see language as probabilistic in the sense that there is some probability of a complete shared understanding underlying the use of a word, and thus there is a converse probability of some degree of misunderstanding.

Language & Meaning as Probabilistic Associations Between Causal Relations

A second way of language being probabilistic is due to the fact that the unique meanings associated with any particular use of a word or concept as understood by an individual are derived from probabilistic associations between various inferred causal relations.  And I believe that this is the underlying cause of most of the problems that Wittgenstein was trying to address in this book.  He may not have been thinking about the problem in this way, but it can account for the fuzzy boundary problem associated with the task of trying to define the meaning of words since this probabilistic structure underlies our experiences, our understanding of the world, and our use of language such that it can’t be represented by static, sharp definitions.  When a person is learning a concept, like redness, they experience a number of observations and infer what is common to all of those experiences, and then they extract a particular subset of what is common and associate it with a word like red or redness (as opposed to another commonality like objectness, or roundness, or what-have-you).  But in most cases, separate experiences of redness are going to be different instantiations of redness with different hues, textures, shapes, etc., which means that redness gets associated with a large range of different qualia.

If you come across a new quale that seems to more closely match previous experiences associated with redness rather than orangeness (for example), I would argue that this is because the brain has assigned a higher probability to that quale being an instance of redness as opposed to, say, orangeness.  And the brain may very well test a hypothesis of the quale matching the concept of redness versus the concept of orangeness, and depending on your previous experiences of both, and the present context of the experience, your brain may assign a higher probability to orangeness instead.  Perhaps if a red-orange colored object is mixed in with a bunch of unambiguously orange-colored objects, it will be perceived as a shade of orange (to match it with the rest of the set), but if the case were reversed and it were mixed in with a bunch of unambiguously red-colored objects, it would be perceived as a shade of red instead.
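This context effect can be captured in a simple Bayesian toy calculation.  The likelihoods and priors below are invented numbers, not measurements of anything; the point is only that flipping the contextual prior flips the category assigned to the very same hue:

```python
# Hedged sketch of the red-vs-orange judgment as Bayesian inference over
# two candidate concepts, with context setting the prior.

def posterior_red(likelihood_red, likelihood_orange, prior_red):
    """P(red | hue) via Bayes' rule, given P(hue | red) and P(hue | orange)."""
    prior_orange = 1.0 - prior_red
    evidence = likelihood_red * prior_red + likelihood_orange * prior_orange
    return likelihood_red * prior_red / evidence

# An ambiguous red-orange hue: roughly equal likelihood under either concept.
like_red, like_orange = 0.5, 0.5

# Context 1: surrounded by unambiguously orange objects -> low prior on red.
print(posterior_red(like_red, like_orange, prior_red=0.2))  # 0.2 -> "orange"

# Context 2: surrounded by unambiguously red objects -> high prior on red.
print(posterior_red(like_red, like_orange, prior_red=0.8))  # 0.8 -> "red"
```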

Since our perception of the world depends on context, the meanings we assign to words or concepts also depend on context; not only in the sense of choosing a different use of a word (as Wittgenstein argues) in some language game or other, but also by perceiving the same incoming sensory information as conceptually different qualia (as in the aforementioned case of a red-orange colored object).  In that case, we weren't intentionally using red or orange in a different way, but rather were assigning one word or the other to the exact same sensory information (with respect to the red-orange object), depending on what else was happening in the scene surrounding that subset of sensory information.  To me, this highlights how meaning can be fluid in multiple ways: some of that fluidity stems from our conscious intentions, and some of it from unintentional forces at play involving our prior expectations within some context, which directly modify our perceived experience.

This can also be seen through Wittgenstein’s example of what he calls a duckrabbit, an ambiguous image that can be perceived as a duck or a rabbit.  I’ve taken the liberty of inserting this image here along with a copy of it which has been rotated in order to more strongly invoke the perception of a rabbit.  The first image no doubt looks more like a duck and the second image, more like a rabbit.

Now Wittgenstein says that when one is looking at the duckrabbit and sees a rabbit, they aren't interpreting the picture as a rabbit but are simply reporting what they see.  But in the case where a person sees a duck first and later sees a rabbit, Wittgenstein isn't sure what to make of it, though he claims to be sure that it can't be the case that the external world stays the same while an internal cognitive change takes place.  I think Wittgenstein was incorrect on this point, because the external world doesn't change (in any relevant sense) whether we're seeing the duck or seeing the rabbit.  Furthermore, he never demonstrates why two different perceptions would require a change in the external world.  The fact of the matter is, you can stare at this static picture and ask yourself to see a duck or to see a rabbit and it will affect your perception accordingly.  This is partially accomplished by mentally rotating the image in your imagination and seeing whether that changes how well it matches one conception or the other, and since it matches both conceptions to a high degree, you can easily perceive it one way or the other.  Your brain is simply processing competing hypotheses to account for the incoming sensory information, and the top-down predictions of rabbitness or duckness (which you've acquired through past experiences) actually change the way you perceive the image, with no change required in the external world (despite Wittgenstein's assertion to the contrary).

To give yet another illustration of the probabilistic nature of language, just imagine the head of a bald man and ask yourself: if you were to add one hair at a time to this bald man's head, at what point does he lose the property of baldness?  If hairs were slowly added at random, and you were asked to say "Stop!  Now he's no longer bald!" at some particular time, there's no doubt in my mind that if this procedure were repeated (even if the hairs were added non-randomly), you would say "Stop!  Now he's no longer bald!" at a different point in the transition.  Similarly, if you were looking at a light that was changing color from red to orange, and were asked to say when the color had changed to orange, you would pick a point in the transition that falls within some margin of error, but it wouldn't be an exact, repeatable point.  We could run this thought experiment with all sorts of concepts that are attached to words, like cat and dog: for example, use a computer graphics program to seamlessly morph a picture of a cat into a picture of a dog and ask at what point the cat "turned into" a dog.  The answer is going to be based on a probability of coincident features that you detect, which can vary over time.  Here's a series of pictures showing a chimpanzee morphing into Bill Clinton to better illustrate this point:

At what point do we stop seeing a picture of a chimpanzee and start seeing a picture of something else?  When do we first see Bill Clinton?  What if I expanded this series of 15 images into a series of 1000 images so that the transition happened even more gradually?  You would be highly unlikely to pick the exact same point in the transition two times in a row if the images weren't numbered or arranged in a grid.  We can analogize this phenomenon with an ongoing problem in science, known as the species problem.  This problem can be described as the inherent difficulty of defining exactly what a species is, which is necessary if one wants to determine if and when one species evolves into another.  The problem arises because the evolutionary processes giving rise to new species are relatively slow and continuous, whereas sorting those organisms into sharply defined categories eliminates that generational continuity and replaces it with discrete steps.
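We can even simulate the unrepeatability of these judgments.  The sketch below treats the "Stop!" call as a noisy criterion rather than a fixed threshold; the mean and noise values are arbitrary, chosen only to show that repeated runs halt at different points:

```python
import random

# Toy model of the bald-man judgment: hairs are added one at a time, and
# "no longer bald" fires at a noisy criterion rather than a fixed count.

def stop_point(mean_threshold=10_000, noise=1_500):
    criterion = random.gauss(mean_threshold, noise)  # this run's criterion
    hairs = 0
    while hairs < criterion:
        hairs += 1  # add one hair
    return hairs    # the count at which "Stop!" was called

print([stop_point() for _ in range(5)])
# five runs, five different "no longer bald" points
```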

And we can see this effect in the series of images above, where each picture could represent some large number of generations in an evolutionary timeline, where each picture/organism looks roughly like the “parent” or “child” of the picture/organism that is adjacent to it.  Despite this continuity, if we look at the first picture and the last one, they look like pictures of distinct species.  So if we want to categorize the first and last picture as distinct species, then we create a problem when trying to account for every picture/organism that lies in between that transition.  Similarly words take on an appearance of strict categorization (of meaning) when in actuality, any underlying meaning attached is probabilistic and dynamic.  And as Wittgenstein pointed out, this makes it more appropriate to consider meaning as use so that the probabilistic and dynamic attributes of meaning aren’t lost.

Now you may think you can get around this problem of fluidity or fuzzy boundaries with concepts that are simpler and more abstract, like the concept of a particular quantity (say, a quantity of four objects) or other concepts in mathematics.  But in order to learn these concepts in the first place, like quantity, and then associate particular instances of it with a word, like four, one had to be presented with a number of experiences and infer what was common to all of them (as was the case with redness mentioned earlier).  And this inference (I would argue) involves a probabilistic process as well; it's just that the resulting probability of our inferring particular experiences as an instance of four objects is incredibly high, and therefore repeatable and relatively unambiguous.  Such an inference is likely to be sustained no matter what the context, and it is likely to be shared by two individuals with essentially 100% semantic overlap (i.e. it's almost certain that what I mean by four is exactly what you mean by four, even though this is almost certainly not the case for a concept like love or consciousness).  This makes mathematical concepts qualitatively different from other concepts (especially those that are more complex or that more closely map onto reality), but it doesn't negate their having a probabilistic attribute or foundation.

Looking at the Big Picture

Though this discussion of language and meaning is not an exhaustive analysis of Wittgenstein's Philosophical Investigations, it represents an analysis of the main theme present throughout.  His main point was to shed light on the disparity between how we often think of language and how we actually use it.  When we stray from the way language is actually used in our everyday lives, in one form of social life or another, and instead misuse it (as often happens in philosophy), we create all sorts of problems and unwarranted conclusions.  He also wants his readers to realize that the ultimate goal of philosophy should not be to construct metaphysical theories and deep explanations underlying everyday phenomena, since these are often born out of unwarranted generalizations and other assumptions stemming from how our grammar is structured.  Instead, we ought to resist those temptations to generalize and to be dogmatic, and use philosophy as a kind of therapeutic tool to keep our abstract thinking in check and to better understand ourselves and the world we live in.

Although I disagree with some of Wittgenstein's claims about cognition (in terms of how intimately it is connected to the external world) and take some issue with his arguably less useful conception of meaning, he makes a lot of sense overall.  Wittgenstein was clearly hitting upon a real difference between the way actual causal relations in our experience are structured and how those relations are represented in language.  Personally, I think that work within philosophy is moving in the right direction if the contributions made therein lead us to make more successful predictions about the causal structure of the world, and I believe this to be so even if that progress includes generalizations that may not be exactly right.  As long as we can structure our thinking to make more successful predictions, then we're moving forward as far as I'm concerned.  In any case, I liked the book and thought that the interlocutory style gave the reader a nice break from the typical form seen in philosophical argumentation.  I highly recommend it!

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part II)


In the first post of this series I introduced some of the basic concepts involved in the Predictive Processing (PP) theory of perception and action.  I briefly tied together the notions of belief, desire, emotion, and action from within a PP lens.  In this post, I’d like to discuss the relationship between language and ontology through the same framework.  I’ll also start talking about PP in an evolutionary context as well, though I’ll have more to say about that in future posts in this series.

Active (Bayesian) Inference as a Source for Ontology

One of the main themes within PP is the idea of active (Bayesian) inference whereby we physically interact with the world, sampling it and modifying it in order to reduce our level of uncertainty in our predictions about the causes of the brain’s inputs.  Within an evolutionary context, we can see why this form of embodied cognition is an ideal schema for an information processing system to employ in order to maximize chances of survival in our highly interactive world.

In order to reduce the amount of sensory information that has to be processed at any given time, it is far more economical for the brain to only worry about the prediction error that flows upward through the neural system, rather than processing all incoming sensory data from scratch.  If the brain is employing a set of predictions that can "explain away" most of the incoming sensory data, then the downward flow of predictions can meet the upward flow of sensory information (effectively cancelling each other out), and the only thing that remains to propagate upward through the system and do any "cognitive work" on the predictive models flowing downward (i.e. the only thing that needs to be processed) is the remaining prediction error (the mismatch between the predicted and the actual sensory input).  This is similar to data compression strategies for video files (for example) that only encode the information that changes over time (pixels that change brightness/color) while compressing away the information that remains constant (pixels that do not change from frame to frame).
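The compression analogy is easy to see in code.  Here's a minimal sketch, assuming the simplest possible prediction ("the next frame looks like the last one") and frames reduced to flat lists of pixel values; only the prediction error needs to be carried forward:

```python
# Transmit only the pixels that deviate from the prediction; everything the
# prediction "explains away" costs nothing further to process.

def frame_delta(predicted_frame, actual_frame):
    """Return only the (index, value) pairs where the prediction failed."""
    return [(i, actual) for i, (pred, actual)
            in enumerate(zip(predicted_frame, actual_frame))
            if pred != actual]

previous = [0, 0, 5, 5, 9, 9]  # prediction: next frame equals the last one
current  = [0, 0, 5, 7, 9, 9]  # one pixel changed

print(frame_delta(previous, current))  # [(3, 7)] -- only the error remains
```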

The ultimate goal for this strategy within an evolutionary context is to allow the organism to understand its environment in the most salient ways for the pragmatic purposes of accomplishing goals relating to survival.  But once humans began to develop culture and evolve culturally, the predictive strategy gained a new kind of evolutionary breathing space, being able to predict increasingly complex causal relations and developing technology along the way.  All of these inferred causal relations appear to me to be the very source of our ontology, as each hierarchically structured prediction and its ability to become associated with others provides an ideal platform for differentiating between any number of spatio-temporal conceptions and their categorical or logical organization.

An active Bayesian inference system is also ideal to explain our intellectual thirst, human curiosity, and interest in novel experiences (to some degree), because we learn more about the world (and ourselves) by interacting with it in new ways.  In doing so, we are provided with a constant means of fueling and altering our ontology.

Language & Ontology

Language is an important component as well, and it fits naturally within a PP framework, since it links perception and action together in a new way, allowing us to make kinds of predictions about the world that wouldn't be possible without it.  A tool like language also makes a lot of sense from an evolutionary perspective, since better predictions about the world result in a higher chance of survival.

When we use language by speaking or writing it, we are performing an action which is instantiated by the desire to do so (see previous post about “desire” within a PP framework).  When we interpret language by listening to it or by reading, we are performing a perceptual task which is again simply another set of predictions (in this case, pertaining to the specific causes leading to our sensory inputs).  If we were simply sending and receiving non-lingual nonsense, then the same basic predictive principles underlying perception and action would still apply, but something new emerges when we send and receive actual language (which contains information).  With language, we begin to associate certain sounds and visual information with some kind of meaning or meaningful information.  Once we can do this, we can effectively share our thoughts with one another, or at least many aspects of our thoughts with one another.  This provides for an enormous evolutionary advantage as now we can communicate almost anything we want to one another, store it in external forms of memory (books, computers, etc.), and further analyze or manipulate the information for various purposes (accounting, inventory, science, mathematics, etc.).

By being able to predict certain causal outcomes through the use of language, we are effectively using the lower level predictions associated with perceiving and emitting language to satisfy higher level predictions related to more complex goals including those that extend far into the future.  Since the information that is sent and received amounts to testing or modifying our predictions of the world, we are effectively using language to share and modulate one brain’s set of predictions with that of another brain.  One important aspect of this process is that this information is inherently probabilistic which is why language often trips people up with ambiguities, nuances, multiple meanings behind words and other attributes of language that often lead to misunderstanding.  Wittgenstein is one of the more prominent philosophers who caught onto this property of language and its consequence on philosophical problems and how we see the world structured.  I think a lot of the problems Wittgenstein elaborated on with respect to language can be better accounted for by looking at language as dealing with probabilistic ontological/causal relations that serve some pragmatic purpose, with the meaning of any word or phrase as being best described by its use rather than some clear-cut definition.

This probabilistic attribute of language, with the meanings of words having fuzzy boundaries, also tracks very well with the ontology that a brain currently has access to.  Our ontology, i.e. what kinds of things we think exist in the world, is often carved into categories, with some of the more concrete entities given names such as: "animals", "plants", "rocks", "cats", "cups", "cars", "cities", etc.  But if I morph a wooden chair (say, by chipping away at parts of it with a chisel), eventually it will no longer be recognizable as a chair, and it may begin to look more like a table, or like nothing other than an oddly shaped chunk of wood.  During this process, it may be difficult to point to the exact moment that it stopped being a chair and instead became a table or something else, and this would make sense if what we know to be a chair or table or what-have-you is nothing more than a probabilistic high-level prediction about certain causal relations.  If my brain perceives an object that produces too high a prediction error based on the predictive model of what a "chair" is, then it will try another model (such as the predictive model pertaining to a "table"), potentially moving to models that are less and less specific until it is satisfied with recognizing the object as merely a "chunk of wood".
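A crude sketch of that fallback process might look like the following.  The feature sets, error measure, and tolerance are invented placeholders (the brain presumably does nothing this tidy), but the control flow is the idea at stake: try the most specific model first, and retreat to more generic ones while prediction error stays too high:

```python
# Models ordered from most to least specific; recognition settles on the
# first model whose prediction error falls within tolerance.

EXPECTED_FEATURES = {
    "chair": {"wooden", "legs", "seat", "backrest"},
    "table": {"wooden", "legs", "flat top"},
    "chunk of wood": {"wooden"},
}

def prediction_error(model, observed_features):
    """Toy error: fraction of the model's expected features that are absent."""
    expected = EXPECTED_FEATURES[model]
    return len(expected - observed_features) / len(expected)

def recognize(observed_features, tolerance=0.25):
    for model in ["chair", "table", "chunk of wood"]:
        if prediction_error(model, observed_features) <= tolerance:
            return model
    return "unrecognized"

chiseled = {"wooden", "legs", "flat top"}   # a chair chiseled partway down
print(recognize(chiseled))  # "table": the "chair" model errs too much
```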

From a PP lens, we can consider lower level predictions pertaining to more basic causes of sensory input (bright/dark regions, lines, colors, curves, edges, etc.) to form some basic ontological building blocks and when they are assembled into higher level predictions, the amount of integrated information increases.  This information integration process leads to condensed probabilities about increasingly complex causal relations, and this ends up reducing the dimensionality of the cause-effect space of the predicted phenomenon (where a set of separate cause-effect repertoires are combined into a smaller number of them).

You can see the advantage here by considering what the brain might do when it's looking at a black cat sitting on a brown chair.  What if the brain were to treat this scene as merely a set of pixels on the retina that change over time, with no expectation that any subset of pixels will change in ways that differ from any other subset?  That wouldn't be very useful in predicting how the visual scene will change over time.  What if instead, the brain differentiates one subset of pixels (corresponding to what we call a cat) from all the rest, and does this in part by predicting proximity relations between neighboring pixels in the subset (so if some black pixels move from the right to the left visual field, then some number of neighboring black pixels are predicted to move with them)?

This latter method treats the subset of black-colored pixels as a separate object (as opposed to treating the entire visual scene as a single object), and doing this kind of differentiation in more and more complex ways leads to a well-defined object or concept, or a large number of them.  Associating sounds like "meow" with this subset of black-colored pixels is just one example of yet another set of properties or predictions that further defines this perceived object as distinct from the rest of the perceptual scene.  Associating this object with a visual or auditory label such as "cat" finally links this ontological object with language.  As long as we agree on what is generally meant by the word "cat" (which we determine through its use), then we can share and modify the predictive models associated with such an object or concept, as we can do with any other successful instance of linguistic communication.
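For what it's worth, the first step of that differentiation (grouping by proximity) is easy to caricature in code.  The flood fill below is a stand-in, obviously not a model of the visual cortex; it just shows how a rule like "neighboring black pixels belong together" carves one scene into candidate objects:

```python
# Group adjacent black pixels into objects using 4-connectivity.

def flood_fill(black_pixels):
    objects, remaining = [], set(black_pixels)
    while remaining:
        frontier, group = [remaining.pop()], set()
        while frontier:
            x, y = frontier.pop()
            group.add((x, y))
            for nb in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
                if nb in remaining:   # a neighbor predicted to move with us
                    remaining.remove(nb)
                    frontier.append(nb)
        objects.append(group)
    return objects

# Two separate clusters of black pixels -> two candidate objects.
scene = {(0, 0), (0, 1), (1, 0), (5, 5), (5, 6)}
print(len(flood_fill(scene)))  # 2
```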

Language, Context, and Linguistic Relativism

However, it should be noted that as we get to more complex causal relations (more complex concepts/objects), we can no longer give these concepts a simple one-word label and expect to communicate information about them nearly as easily as we could for the concept of a "cat".  Think about concepts like "love" or "patriotism" or "transcendence": there are many different ways that we use those terms, and they can mean all sorts of different things, so our meaning behind those words will largely be conveyed to others by the context they are used in.  And context, in a PP framework, could be described as simply the (expected) conjunction of multiple predictive models (multiple sets of causal relations).  The conjunction of the predictive models pertaining to the concepts of food, pizza, and a desirable taste, together with the word "love", would imply one particular use of the word, as in the phrase "I love pizza".  This use of "love" is different from the one that involves the conjunction of predictive models pertaining to the concepts of intimacy, sex, infatuation, and care, implied in a phrase like "I love my wife".  In any case, conjunctions of predictive models can get complicated, and this carries over to our stretching our language to its very limits.
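Here's a deliberately simplistic sketch of that conjunction idea, with "context" reduced to a bag of co-occurring words and each sense of "love" standing in for one conjunction of predictive models.  The sense labels and word lists are invented for illustration only:

```python
# Pick the sense of "love" whose associated models best overlap the context.

SENSES = {
    "love-as-enjoyment": {"pizza", "food", "taste", "movie"},
    "love-as-intimacy": {"wife", "husband", "care", "intimacy"},
}

def disambiguate(context_words):
    """Choose the sense with the largest overlap with the context."""
    return max(SENSES, key=lambda sense: len(SENSES[sense] & context_words))

print(disambiguate({"i", "love", "pizza"}))       # love-as-enjoyment
print(disambiguate({"i", "love", "my", "wife"}))  # love-as-intimacy
```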

Since we are immersed in language and since it is integral in our day-to-day lives, we also end up being conditioned to think linguistically in a number of ways.  For example, we often think with an interior monologue (e.g. “I am hungry and I want pizza for lunch”) even when we don’t plan on communicating this information to anyone else, so it’s not as if we’re simply rehearsing what we need to say before we say it.  I tend to think however that this linguistic thinking (thinking in our native language) is more or less a result of the fact that the causal relations that we think about have become so strongly associated with certain linguistic labels and propositions, that we sort of automatically think of the causal relations alongside the labels that we “hear” in our head.  This seems to be true even if the causal relations could be thought of without any linguistic labels, in principle at least.  We’ve simply learned to associate them so strongly to one another that in most cases separating the two is just not possible.

On the flip side, this tendency of language to associate itself with our thoughts also puts certain barriers or restrictions on our thoughts.  If we are always preparing to share our thoughts through language, then we're going to become somewhat entrained to think in ways that can be most easily expressed in linguistic form.  So although language may simply be along for the ride with many non-linguistic aspects of thought, our tendency to use it may also structure our thinking and reasoning in large ways.  This would account for why people raised in different cultures with different languages see the world in somewhat different ways, tracking the structure of their language.  While linguistic determinism (the strong version of the Sapir-Whorf hypothesis) seems to have been ruled out, there is still good evidence to support linguistic relativism (the weak version of the Sapir-Whorf hypothesis), whereby one's language affects their ontology and view of how the world is structured.

If language is so heavily used day-to-day, then this phenomenon makes sense as viewed through a PP lens, since we're going to end up putting a high weight on the predictions that link ontology with language, given that these predictions have proven useful to us most of the time.  Minimal prediction error means that our Bayesian evidence is further supported, and the higher the weight carried by these predictions, the more they will constrain our overall thinking, including how our ontology is structured.

Moving on…

I think that these are but a few of the interesting relationships between language and ontology and how a PP framework helps to put it all together nicely, and I just haven’t seen this kind of explanatory power and parsimony in any other kind of conceptual framework about how the brain functions.  This bodes well for the framework and it’s becoming less and less surprising to see it being further supported over time with studies in neuroscience, cognition, psychology, and also those pertaining to pathologies of the brain, perceptual illusions, etc.  In the next post in this series, I’m going to talk about knowledge and how it can be seen through the lens of PP.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)


I've been away from writing for a while because I've had some health problems relating to my neck.  A few weeks ago I had double-cervical-disc replacement surgery, and so I've been unable to write and respond to comments and so forth for a little while.  I'm in the second week following my surgery now and have finally been able to get back to writing, which feels very good given that I'm unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing, beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I wrote one post on this topic a little over a year ago (which can be found here), as I've been extremely interested in this topic for the last several years now.

The Predictive Processing (PP) theory of perception shows a lot of promise in terms of finding an overarching schema that can account for everything that the brain seems to do.  While its technical application is to account for the acts of perception and active inference in particular, I think it can be used more broadly to account for other descriptions of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships as viewed through a PP lens more because I think it is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let’s begin this post-series by first looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can be best described as simply the set of particular predictions that the brain employs which encompass perception, desires, action, emotion, etc., and which are ultimately mediated and updated in order to reduce prediction errors based on incoming sensory evidence (and which approximates a Bayesian form of inference).  Perception then, which is constituted by a subset of all our beliefs (with many of them being implicit or unconscious beliefs), is more or less a form of controlled hallucination in the sense that what we consciously perceive is not the actual sensory evidence itself (not even after processing it), but rather our brain’s “best guess” of what the causes for the incoming sensory evidence are.

Desires can be best described as another subset of one’s beliefs, and a set of beliefs which has the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states and which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically just modeling some kind of causal relationship (or its negation) which is able to manifest itself in a number of highly-weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies, but with a lower weight or precision on the predictions, corresponding directly to the degree of belief or disbelief (and so new sensory evidence will more easily sway such a belief).  And just like our perceptions, which are mediated by a number of low and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in either updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, secreting hormones in your body, etc.).

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, with these priors changing over time in response to prediction errors arising from new incoming sensory evidence that fails to be "explained away" by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly-weighted predictions) and therefore need much more countervailing sensory evidence to be overcome or modified than weak beliefs, which have relatively low priors (low-weighted predictions).
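A quick numerical sketch shows how this plays out.  The likelihoods below are invented for illustration; the point is just that the same three pieces of contrary evidence barely dent a strong prior while effectively demolishing a weak one:

```python
# One Bayesian step on a piece of evidence that favors the belief being false.

def update(prior, like_if_true=0.2, like_if_false=0.8):
    p_evidence = like_if_true * prior + like_if_false * (1 - prior)
    return like_if_true * prior / p_evidence

for label, belief in [("strong belief", 0.99), ("weak belief", 0.60)]:
    for _ in range(3):  # three pieces of contrary evidence
        belief = update(belief)
    print(label, round(belief, 3))

# strong belief 0.607 -- still credible after three disconfirmations
# weak belief 0.023   -- effectively abandoned
```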

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of lower level, highly-weighted predictions such as the prediction that eating a piece of what we call an “apple” will most likely result in qualia that accompany the perception of a particular satisfying taste, the lower level prediction that doing so will also cause my perception of hunger to change, and the higher level prediction that it will “give me energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Another prediction or set of predictions is that these expectations will apply to not just one apple but a number of apples (different instances of one type of apple, or different types of apples altogether), and a host of other predictions.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple which, under a PP lens could be described as the highly weighted prediction of what it would feel like to find an apple, to reach for an apple, to grab it, to bite off a piece of it, to chew it, and to swallow it.  If I merely imagine doing such things, then the resulting predictions will necessarily carry such a small weight that they won’t be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally at least) to trick me into thinking that they are actual perceptions (by “actual”, I simply mean perceptions that correspond to incoming sensory data).  This low-weighting attribute of imagined perceptual predictions thus provides a viable way for us to have an imagination and to distinguish it from perceptions corresponding to incoming sensory data, and to distinguish it from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.
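To caricature the weighting idea in code: assuming (my assumption, purely for illustration) that there's some effective precision threshold for engaging the motor system, the very same prediction either drives action or remains mere imagination depending on the weight attached to it:

```python
ACTION_THRESHOLD = 0.7  # hypothetical gate for engaging the motor system

def process(prediction, precision):
    """The same prediction acts or stays imagined based on its precision."""
    if precision >= ACTION_THRESHOLD:
        return f"ACT: {prediction}"        # a desire driving movement
    return f"imagine only: {prediction}"   # offline simulation, no movement

print(process("reach for the apple", precision=0.9))  # drives action
print(process("reach for the apple", precision=0.1))  # mere imagination
```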

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine that is directly influencing and being influenced by the world that its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian evidence).  In this particular example, the highly-weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, which produces some degree of prediction error until the proper actions have taken place in order to reduce said error.  The only two ways of reducing this prediction error are to change the desire (or eliminate it) to one that no longer involves eating an apple, and/or to perform bodily actions that result in actually eating an apple.

Perhaps if I realize that I don’t have any apples in my house, but I realize that I do have bananas, then my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically, thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, it may mean that my predictions will motivate me enough to avoid eating the banana, and instead to predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  Furthermore, it may motivate me to predict actions that lead me to earn the money such that I can purchase the apple (if I don’t already have the money to do so).  To do this, I would be employing a number of predictions having to do with performing actions that lead to me obtaining money, using money to purchase goods, etc.

This is but a taste of what PP has to offer, and how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework), is a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and pertaining to language as well, and I’d like to explore this relationship in my next post.  This link can be found here.

Is Death Bad For You? A Response to Shelly Kagan


I've enjoyed reading and listening to the philosopher Shelly Kagan, in debates, lectures, and various articles.  One topic he's well known for is that of death, specifically the fear of death, and trying to understand the details behind, and justification for, the general attitude people have toward the concept of death.  I wrote a little about the fear of death long ago, but coming across an article of Kagan's reignited my interest in the topic.  He wrote an article a few years ago in The Chronicle, where he expounds on some of the ontological puzzles related to the concept of death.  I thought I'd briefly summarize the article's main points and give a response to it here.

Can Death Be Bad For Us?

Kagan begins with the assumption that the death of a person's body results in the end of that person's existence.  This is certainly a reasonable assumption, as there's no evidence to the contrary, that is, no evidence that persons can exist without a living body.  Simple enough.  Then he asks the question: if death is the end of our existence, then how can being dead be bad for us?  Some would say that death is bad primarily for the survivors of the deceased, since they miss the person who's died and the relationship they once had with that person.  But it seems more complicated than that, because we could likewise have an experience where a cherished friend or family member leaves us and goes somewhere far away, such that we can be confident we'll never see that person ever again.

Both the death of that person, and the alternative of their leaving forever to go somewhere such that we’ll never have contact with them again, result in the same loss of relationship.  Yet most people would say that if we knew about their dying instead of simply leaving forever, there’s more to be sad about in terms of death being bad for them, not simply bad for us.  And this sadness results from more than simply knowing how they died — the process of death itself — which could have been unpleasant, but also could have been entirely benign (such as dying peacefully in one’s sleep).  Similarly, Kagan tells us, the prospect of dying can be unpleasant as well, but he asserts, this only seems to make sense if death itself is bad for us.

Kagan suggests:

Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. That explanation of death’s badness is known as the deprivation account.

While the deprivation account seems plausible, Kagan thinks that accepting it results in a couple of potential problems.  He argues, if something is true, it seems as if there must be some time when it’s true.  So when would it be true that death is bad for us?  Not now, he says.  Because we’re not dead now.  Not after we’re dead either, because then we no longer exist so nothing can be bad for a being that no longer exists.  This seems to lead to the conclusion that either death isn’t bad for anyone after all, or alternatively, that not all facts are datable.  He gives us another possible example of an undatable fact.  If Kagan shoots “John” today such that John slowly bleeds to death after two days, but Kagan dies tomorrow (before John dies) then after John dies, can we say that Kagan killed John?  If Kagan did kill John, when did he kill him?  Kagan no longer existed when John died so how can we say that Kagan killed John?

I think we could agree with this and say that while it’s true that Kagan didn’t technically kill John, a trivial response to this supposed conundrum is to say that Kagan’s actions led to John’s death.  This seems to solve that conundrum by working within the constraints of language, while highlighting the fact that when we say someone killed X what we really mean is that someone’s actions led to the death of X, thus allowing us to be consistent with our conceptions of existence, causality, killing, blame, etc.

Existence Requirement, Non-Existential Asymmetry, & Its Implications

In any case, if all facts are datable (or at least facts like these), then we should be able to say when exactly death is bad for us.  Can things only be bad for us when we exist?  If so, this is what Kagan refers to as the existence requirement.  If we don't accept such a requirement (that one must exist in order for things to be bad for us), then other problems arise, like being able to say, for example, that non-existence could be bad for someone who has never existed but who could possibly have existed.  This seems to be a pretty strange claim to hold to.  So if we refuse to accept that it's a tragedy for possibly existent people to never come into existence, then we'd have to accept the existence requirement, which I would contend is the more plausible assumption to accept.  But if we do so, then it seems that we have to accept that death isn't in fact bad for us.

Kagan suggests that we may be able to reinterpret the existence requirement, and he does this by distinguishing between two versions: a modest version, which asserts that something can be bad for you only if you exist at some time or another, and a bold version, which asserts that something can be bad for you only if you exist at the same time as that thing.  Accepting the modest version seems to allow us a way out of the problems posed here, but it too has some counter-intuitive implications.

He illustrates this with another example:

Suppose that somebody’s got a nice long life. He lives 90 years. Now, imagine that, instead, he lives only 50 years. That’s clearly worse for him. And if we accept the modest existence requirement, we can indeed say that, because, after all, whether you live 50 years or 90 years, you did exist at some time or another. So the fact that you lost the 40 years you otherwise would have had is bad for you. But now imagine that instead of living 50 years, the person lives only 10 years. That’s worse still. Imagine he dies after one year. That’s worse still. An hour? Worse still. Finally, imagine I bring it about that he never exists at all. Oh, that’s fine.

He thinks this must be accepted if we accept the modest version of the existence requirement, but how can this be?  If one’s life is shortened relative to what they would have had, this is bad, and gets progressively worse as the life is hypothetically shortened, until a life span of zero is reached, in which case they no longer meet the modest existence requirement and thus can’t have anything be bad for them.  So it’s as if it gets infinitely worse as the potential life span approaches the limit of zero, and then when zero is reached, becomes benign and is no longer an issue.

I think a reasonable response to this scenario is to reject the claim that hypothetically shrinking the life span to zero is suddenly no longer an issue.  What seems to be glossed over in this example is the fact that this is a set of comparisons of one hypothetical life to another hypothetical life (two lives with different non-zero life spans), resulting in a final comparison between one hypothetical life and no life at all (a life span of zero).  This example illustrates whether or not something is better or worse in comparison, not whether something is good or bad intrinsically speaking.  The fact that somebody lived for as long as 90 years or only for 10 years isn’t necessarily good or bad but only better or worse in comparison to somebody who’s lived for a different length of time.

The Intrinsic Good of Existence & Intuitions On Death

However, I would go further and say that there is an intrinsic good to existing or being alive, and that most people would agree with such a claim (and that the strong will to live that most of us possess is evidence of our acknowledging such a good).  That's not to say that never having lived is bad, but only to say that living is good.  If not living is neither good nor bad but considered a neutral or inconsequential state, then we can hold the position that living is better than not living, even if not living isn't bad at all (after all it's neutral, neither good nor bad).  Thus we can still maintain our modest existence requirement while consistently holding these views.  We can say that not living is neither good nor bad, that living 10 years is good (and better than not living), that living 50 years is even better, and that living 90 years is even better yet (assuming, for the sake of argument, that the quality of life is equivalently good in every year of one's life).  What's important to note here is that not having lived in the first place doesn't involve the loss of a good, because there was never any good to begin with.  On the other hand, extending the life span involves increasing the quantity of the good, by increasing its duration.

Kagan seems to agree overall with the deprivation account of why we believe death is bad for us, but that some puzzles like those he presented still remain.  I think one of the important things to take away from this article is the illustration that we have obvious limitations in the language that we use to describe our ontological conceptions.  These scenarios and our intuitions about them also seem to show that we all generally accept that living or existence is intrinsically good.  It may also highlight the fact that many people intuit that some part of us (such as a soul) continues to exist after death such that death can be bad for us after all (since our post-death “self” would still exist).  While the belief in souls is irrational, it may help to explain some common intuitions about death.

Dying vs. Death, & The Loss of An Intrinsic Value

Remember that Kagan began his article by distinguishing between how one dies, the prospect of dying, and death itself.  He asked us: how can the prospect of dying be bad if death itself (which only obtains once we no longer exist) isn't bad for us?  Well, perhaps we should consider that when people say that death is bad for us, they tend to mean that dying itself is bad for us.  That is to say, the prospect of dying isn't unpleasant because death is bad for us, but rather because dying itself is bad for us.  If dying occurs while we're still alive, resulting in one's eventual loss of life, then dying can be bad for us even if we accepted the bold existence requirement (that something can only be bad for us if we exist at the same time as that thing).  So if the "thing" we're referring to is our dying rather than our death, this would be consistent with the deprivation account of death, would allow us to put a date (or time interval) on such an event, and would seem to resolve the aforementioned problems.

As for Kagan's opening question, when is death bad for us?  If we accept my previous response that dying is what's bad for us, rather than death, then it would stand to reason that death itself isn't ever bad for us (or doesn't have to be); rather, what is bad for us is the loss of life that occurs as we die.  If I had to identify exactly when the "badness" that we're actually referring to occurs, I suppose I would choose an increment of time before one's death occurs (with an exclusive upper bound set at the time of death).  If time turns out to be quantized (an open speculation in quantum gravity, not an established result of quantum mechanics), then the smallest meaningful interval of time would be the Planck time.  So I would argue that, at the very least, the last Planck-time interval of our life (if not a longer interval) marks the event or time interval of our dying.
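For a sense of scale, here's the standard formula for the Planck time evaluated numerically (the constants are standard SI values; nothing in my argument depends on whether time actually is quantized at this scale):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(t_planck)  # ~5.39e-44 seconds
```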

It is this last interval of time ticking away that is bad for us because it leads to our loss of life, which is a loss of an intrinsic good.  So while I would argue that never having received an intrinsic good in the first place isn’t bad (such as never having lived), the loss of (or the process of losing) an intrinsic good is bad.  So I agree with Kagan that the deprivation account is on the right track, but I also think the problems he’s posed are resolvable by thinking more carefully about the terminology we use when describing these concepts.

Transcendental Argument For God’s Existence: A Critique


Theist apologists and theologians have presented many arguments for the existence of God throughout history, including the Ontological Argument, the Cosmological Argument, the Fine-Tuning Argument, the Argument from Morality, and many others, all of which have been refuted with various counter-arguments.  I've written about a few of these arguments in the past (1, 2, 3), but one that I haven't yet touched on is the Transcendental Argument for God (or simply TAG).  Not long ago I heard the Christian apologist Matt Slick conversing/debating with the renowned atheist Matt Dillahunty on this topic, and I then decided to look deeper into the argument as Slick presents it on his website.  I have found a number of problems with his argument, so I decided to iterate them in this post.

Slick’s basic argument goes as follows:

  1. The Laws of Logic exist.
    1. Law of Identity: Something (A) is what it is and is not what it is not (i.e. A is A and A is not not-A).
    2. Law of Non-contradiction: A cannot be both A and not-A, or in other words, something cannot be both true and false at the same time.
    3. Law of the Excluded Middle: Something must either be A or not-A without a middle ground, or in other words, something must be either true or false without a middle ground.
  2. The Laws of Logic are conceptual by nature — are not dependent on space, time, physical properties, or human nature.
  3. They are not the product of the physical universe (space, time, matter) because if the physical universe were to disappear, The Laws of Logic would still be true.
  4. The Laws of Logic are not the product of human minds because human minds are different — not absolute.
  5. But, since the Laws of Logic are always true everywhere and not dependent on human minds, it must be an absolute transcendent mind that is authoring them.  This mind is called God.
  6. Furthermore, if there are only two options to account for something, i.e., God and no God, and one of them is negated, then by default the other position is validated.
  7. Therefore, part of the argument is that the atheist position cannot account for the existence of The Laws of Logic from its worldview.
  8. Therefore God exists.
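Before getting to the problems, note that premise 1 is uncontroversial in at least a deflationary sense: as classical propositional schemas, the three laws check out mechanically.  Here's a trivial sketch of that check; it verifies only that they're tautologies of classical logic, and takes no side in the metaphysical dispute that follows:

```python
# Enumerate every classical truth value of A and confirm each law holds.
for A in (True, False):
    identity = (A == A)                    # Law of Identity: A is A
    non_contradiction = not (A and not A)  # not both A and not-A
    excluded_middle = A or not A           # either A or not-A
    assert identity and non_contradiction and excluded_middle

print("All three laws hold for every classical truth assignment.")
```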

Concepts are Dependent on and the Product of Physical Brains

Let’s begin with number 2, 3, and 4 from above:

The Laws of Logic are conceptual by nature — are not dependent on space, time, physical properties, or human nature.  They are not the product of the physical universe (space, time, matter) because if the physical universe were to disappear, The Laws of Logic would still be true.  The Laws of Logic are not the product of human minds because human minds are different — not absolute.

Now I’d like to first mention that Matt Dillahunty actually rejected the first part of Slick’s premise here, as Dillahunty explained that while logic (the concept, our application of it, etc.) may in fact be conceptual in nature, the logical absolutes themselves (i.e. the laws of logic) which logic is based on are in fact neither conceptual nor physical.  My understanding of what Dillahunty was getting at here is that he was basically saying that just as the concept of an apple points to or refers to something real (i.e. a real apple) which is not equivalent to the concept of an apple, so also does the concept of the logical absolutes refer to something that is not the same as the concept itself.  However, what it points to, Dillahunty asserted, is something that isn’t physical either.  Therefore, the logical absolutes themselves are neither physical nor conceptual (as a result, Dillahunty later labeled “the essence” of the LOL as transcendent).  When Dillahunty was pressed by Slick to answer the question, “then what caused the LOL to exist?”, Dillahunty responded by saying that nothing caused them (or we have no reason to believe so) because they are transcendent and are thus not a product of anything physical nor conceptual.

If this is truly the case, then Dillahunty’s point here does undermine the validity of the logical structure of Slick’s argument, because Slick would then be beginning his argument by referencing the content and the truth of the logical absolutes themselves, and then later on switching to the concept of the LOL (i.e. their being conceptual in their nature, etc.).  For the purposes of this post, I’m going to simply accept Slick’s dichotomy that the logical absolutes (i.e. the laws of logic) are in fact either physical or conceptual by nature and then I will attempt to refute the argument anyway.  This way, if “conceptual or physical” is actually a true dichotomy (i.e. if there are no other options), despite the fact that Slick hasn’t proven this to be the case, his argument will be undermined anyway.  If Dillahunty is correct and “conceptual or physical” isn’t a true dichotomy, then even if my refutation here fails, Slick’s argument will still be logically invalid based on the points Dillahunty raised.

I will say however that I don’t think I agree with the point that Dillahunty made that the LOL are neither physical nor conceptual, and for a few reasons (not least of all because I am a physicalist).  My reasons for this will become more clear throughout the rest of this post, but in a nutshell, I hold that concepts are ultimately based on and thus are a subset of the physical, and the LOL would be no exception to this.  Beyond the issue of concepts, I believe that the LOL are physical in their nature for a number of other reasons as well which I’ll get to in a moment.

So why does Slick think that the LOL can’t be dependent on space?  Slick mentions in the expanded form of his argument that:

They do not stop being true dependent on location. If we travel a million light years in a direction, The Laws of Logic are still true.

Sure, the LOL don't depend on a specific location in space, but that doesn't mean they aren't dependent on space in general.  I would actually argue that concepts are abstractions that are dependent on the brains that create them, based on those brains having recognized properties of space, time, and matter/energy.  That is to say that any concept, such as the number 3 or the concept of redness, is in fact dependent on a brain having recognized, for example, a quantity of discrete objects (which when generalized leads to the concept of numbers) or the color red in various objects (which when generalized leads to the concept of red or redness).  Since a quantity of discrete objects or a color must be located in some kind of space, even if only three points on a one-dimensional line (in the case of the number 3) or a two-dimensional red-colored plane (in the case of redness), we can see that these concepts are ultimately dependent on space and matter/energy (of some kind).  Even if we say that concepts such as the color red or the number 3 do not literally exist in actual space nor are made of actual matter, they still have to exist in a mental space as mental objects; just as our conception of an apple floating in empty space doesn't actually lie in space nor is made of matter, it nevertheless exists as a mental/perceptual representation of real space and real matter/energy that has been experienced through interaction with the physical universe.

Slick also mentions in the expanded form of his argument that the LOL can’t be dependent on time because:

They do not stop being true dependent on time. If we travel a billion years in the future or past, The Laws of Logic are still true.

Once again, sure, the LOL don't depend on any specific time, but they are dependent on time in general, because minds depend on time in order to have any experience of concepts at all.  So not only can concepts only be formed by a brain that has created abstractions from physically interacting with space, matter, and energy within time (so far as we know), but the mind/concept-generating brain is itself made of actual matter/energy, lying in real space, and operating within time.  Concepts, then, are not only dependent on space, time, and matter/energy (so far as we know) but are also the product of space, time, and matter/energy, since it is only certain configurations of these that produce a brain and the mind that results from it.  Thus, if the LOL are conceptual, they are ultimately the product of, and dependent on, the physical.

Can Truth Exist Without Brains and a Universe?  Can Identities Exist Without a Universe?  I Don’t Think So…

Since Slick himself claims that The Laws of Logic (LOL) are conceptual by nature, that would mean they are also dependent on and the product of the physical universe, and more specifically of the human mind (or natural minds in general, which are produced by physical brains).  Slick goes on to say that the LOL can't be dependent on the physical universe (which contains the brains needed to think or produce those concepts) because "…if the physical universe were to disappear, The Laws of Logic would still be true."  It seems to me that without a physical universe there wouldn't be any "somethings" with any identities at all, and so the Law of Identity, which is the root of the other LOL, wouldn't apply to anything, because there wouldn't be anything and thus no existing identities.  Therefore, to say that the LOL are true sans a physical universe would be meaningless, because identities themselves wouldn't exist without a physical universe.  One might argue that abstract identities (like numbers or properties) would still exist, but abstractions are products of a mind and thus need a brain in order to exist (so far as we know).  If one argued that supernatural identities would still exist without a physical universe, this would be nothing more than an ad hoc metaphysical assertion about the existence of the supernatural, one which carries a large burden of proof that can't be (or at least hasn't been) met.  Beyond that, if at least one of those supernatural identities were claimed to be God, this would also be begging the question.  This leads me to believe that the LOL are in fact a property of the physical universe (and appear to be a necessary one at that).

And if truth is itself just another concept, then it too is dependent on minds and, by extension, the physical brains that produce those minds (as mentioned earlier).  In fact, the LOL seem to be required for any rational thought at all (hence why they are often referred to as the Laws of Thought), including the determination of any truth value.  So our ability to establish the truth value of the LOL (or the truth of anything, for that matter) is itself contingent on our presupposing the LOL in the first place.  If there were no minds to presuppose the very LOL needed to establish their truth value, could one say that they would be true anyway?  Wouldn't this be analogous to saying that 1 + 1 = 2 would still be true even if numbers and addition (constructs of the mind) didn't exist?  I'm just not sure that truth can exist in the absence of any minds and any physical universe.  I think that just as physical laws are descriptions of how the universe changes over time, these Laws of Thought are descriptions underlying what our rational thought is based on, and thus how we arrive at the concept of truth at all.  If rational thought ceases to exist in the absence of a physical universe (since there are no longer any brains/minds), then the descriptions underlying that rational thought (as well as their truth value) also cease to exist.
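
To make the 1 + 1 = 2 analogy a bit more concrete, consider how a proof assistant handles the statement; this is just an illustrative aside of my own, not part of Slick's or Dillahunty's exchange:

```lean
-- In Lean, "1 + 1 = 2" is only provable relative to prior constructs:
-- the natural numbers (Nat), addition on them, and equality itself.
-- Remove those definitions and the statement below isn't false; it
-- simply can't be formulated at all, which mirrors the point that
-- truth values presuppose the framework used to evaluate them.
example : 1 + 1 = 2 := rfl  -- holds by unfolding the definition of addition
```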

Can Two Different Things Have Something Fundamental in Common?

Slick then erroneously claims that the LOL can't be the product of human minds because human minds are different from one another and thus aren't absolute.  He apparently doesn't realize that even though human minds differ in many ways, they also have a great deal fundamentally in common, such as how they process information and how they form concepts about the reality they interact with.  Despite our differences, the evidence we have supports the claim that human brains produce concepts and process information in the same general way at the most fundamental neurological level.  For example, the evidence suggests that the concept of the color red is based on the neurological processing of a certain range of wavelengths of electromagnetic radiation absorbed by the eye's retinal cells at some point in the past.  However, in Slick's defense, I'll admit that it could be the case that what I experience as red may be what you would call blue, which would suggest that concepts we think we mutually understand are actually experienced differently in ways we can't currently verify (since I can't get into your mind and compare it to my own experience, and vice versa).

Nevertheless, just because our minds may experience color differently from one another, or because we may differ slightly in which range of shades/tints we label as red, this does not mean that our brains/minds (or natural minds in general) are not responsible for producing the concept of red, nor does it mean that we don't all produce that concept in the same general way.  The number 3 is perhaps a better example of a concept that is shared by humans in an absolute sense, because it isn't dependent on specific qualia (as the color red is).  The concept of the number 3 has a universal meaning in the human mind since it is derived from the generalization of a quantity of three discrete objects, which is independent of how any three specific objects are experienced in terms of their respective qualia.

Human Brains Have an Absolute Fundamental Neurology Which Encompasses the LOL

So I see no reason to believe that human minds differ at all in their conception of the LOL, especially if the LOL are the foundation for rational thought (and thus for any coherent concept formed by our brains).  In fact, I believe the evidence within the neurosciences suggests that the way the brain recognizes different patterns, and thus forms unique concepts, depends on the brain using a hardware configuration schema that encompasses the logical absolutes.  In a previous post, my contention was that:

Additionally, if the brain’s wiring has evolved in order to see dimensions of difference in the world (unique sensory/perceptual patterns that is, such as quantity, colors, sounds, tastes, smells, etc.), then it would make sense that the brain can give any particular pattern an identity by having a unique schema of hardware or unique use of said hardware to perceive such a pattern and distinguish it from other patterns.  After the brain does this, the patterns are then arguably organized by the logical absolutes.  For example, if the hardware scheme or process used to detect a particular pattern “A” exists and all other patterns we perceive have or are given their own unique hardware-based identity (i.e. “not-A” a.k.a. B, C, D, etc.), then the brain would effectively be wired such that pattern “A” = pattern “A” (law of identity), any other pattern which we can call “not-A” does not equal pattern “A” (law of non-contradiction), and any pattern must either be “A” or some other pattern even if brand new, which we can also call “not-A” (law of the excluded middle).  So by the brain giving a pattern a physical identity (i.e. a specific type of hardware configuration in our brain that when activated, represents a detection of one specific pattern), our brains effectively produce the logical absolutes by nature of the brain’s innate wiring strategy which it uses to distinguish one pattern from another.  So although it may be true that there can’t be any patterns stored in the brain until after learning begins (through sensory experience), the fact that the DNA-mediated brain wiring strategy inherently involves eventually giving a particular learned pattern a unique neurological hardware identity to distinguish it from other stored patterns, suggests that the logical absolutes themselves are an innate and implicit property of how the brain stores recognized patterns.

So I believe that our brain produces and distinguishes these different "object" identities by having a neurological scheme that represents each perceived identity (each object) with a unique set of neurons that function in a unique way and thus have their own unique identity.  It would seem, then, that the absolute nature of the LOL can be explained by how the brain naturally encompasses them through its fundamental hardware schema.  In other words, my contention is that our brain uses this wiring schema because it is the only way it can be wired to make any discriminations at all and validly distinguish one identity from another in perception and thought.  This ability to discriminate various aspects of reality would be naturally selected for, based on the brain accurately modeling properties of the universe (in this case, the existence of different identities, objects, and causal interactions) as it interacts with its environment via our sensory organs.  This would imply that the existence of discrete identities is a property of the physical universe, and the LOL would simply be a description of what identities are.  It would also explain why we see the LOL as absolute and fundamental and why we presuppose them: our brains encompass them in the most fundamental aspect of our neurology because they reflect a fundamental physical property of the universe that our brains model.
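
To make the wiring-schema idea more concrete, here is a deliberately simple sketch of my own (a toy data structure, not a neural model; all the names are invented for illustration) showing how the three logical absolutes fall out of a storage scheme that assigns each learned pattern a unique identity:

```python
# A toy sketch (not neuroscience): each learned pattern gets a unique
# stored "hardware" identity, and the three logical absolutes fall out
# of how the store discriminates patterns.

class PatternStore:
    def __init__(self):
        self._patterns = {}   # pattern label -> unique identity (here, an int)
        self._next_id = 0

    def learn(self, label):
        """Assign a brand-new, unique identity to a newly encountered pattern."""
        if label not in self._patterns:
            self._patterns[label] = self._next_id
            self._next_id += 1
        return self._patterns[label]

    def identity_of(self, label):
        return self._patterns[label]

store = PatternStore()
a = store.learn("A")
b = store.learn("B")

# Law of identity: pattern A is (stored as) pattern A.
assert store.identity_of("A") == a

# Law of non-contradiction: no pattern is both A and not-A, since each
# label maps to exactly one identity.
assert not (store.identity_of("B") == a and store.identity_of("B") != a)

# Law of the excluded middle: any pattern, even a brand-new one, either
# matches A's identity or it doesn't.
c = store.learn("C")
assert all(store.identity_of(x) == a or store.identity_of(x) != a
           for x in ("A", "B", "C"))
```

The scheme is trivial by design: once every pattern is given exactly one unique identity, the three laws aren't extra rules layered on top but simply descriptions of how such a store must behave.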

I believe this is one of the reasons that Dillahunty and others take the LOL to be transcendent (neither physical nor conceptual): natural brains/minds are neurologically incapable of imagining a world existing without them.  The problem arises because Dillahunty is abstracting a hypothetical non-physical world or mode of existence without realizing that he cannot remove every physical property from any world he imagines.  In this case, the physical property he is unable to remove from his hypothetical non-physical world is his own neurological foundation, the very foundation that underlies all concepts (including that of existential identities) and all rational thought.  I may be mistaken about Dillahunty's position here, but this is what I've inferred from what I've heard him say while conversing with Slick about this topic.

Human (Natural) Minds Can’t Account for the LOL, But Disembodied Minds Can?

Slick even goes on to say in point 5 that:

But, since the Laws of Logic are always true everywhere and not dependent on human minds, it must be an absolute transcendent mind that is authoring them.  This mind is called God.

We can see here that he concedes that the LOL are the product of a mind; he only rejects the possibility that it could be a human mind (and, by implication, any kind of natural mind).  Rather, he insists it must be a transcendent mind of some kind, which he calls God.  The problem with this conclusion is that we have no evidence or argument demonstrating that minds can exist without physical brains existing in physical space within time.  To assume so is simply to beg the question.  Thus, he is smuggling in an ontological/metaphysical assumption of substance dualism along with that of disembodied minds: not only claiming that there must exist some kind of supernatural substance, but that this mysterious "non-physical" substance also has the ability to constitute a mind, and somehow to do so without any dependence on time (even though mental function and thinking are themselves temporal processes).  He assumes all of this, of course, without providing any explanation of how such a mind could work even in principle without being made of any kind of stuff, without being located in any kind of space, and without existing in any kind of time.  As I've mentioned elsewhere concerning the ad hoc concept of disembodied minds:

…the only concept of a mind that makes any sense at all is that which involves the properties of causality, time, change, space, and material, because minds result from particular physical processes involving a very complex configuration of physical materials.  That is, minds appear to be necessarily complex in terms of their physical structure (i.e. brains), and so trying to conceive of a mind that doesn’t have any physical parts at all, let alone a complex arrangement of said parts, is simply absurd (let alone a mind that can function without time, change, space, etc.).  At best, we are left with an ad hoc, unintelligible combination of properties without any underlying machinery or mechanism.

In summary, I believe Slick has made several errors in his reasoning.  The most egregious is his unfounded assumption that natural minds aren't capable of producing an absolute concept such as the LOL simply because natural minds differ from one another (not realizing that all minds have fundamental commonalities).  Another is his argument's reliance on the assumption that an ad hoc disembodied mind not only exists (whatever that could possibly mean) but can somehow account for the LOL in a way that natural minds cannot, which is nothing more than an argument from ignorance, a logical fallacy.  He also insists that the Laws of Logic would be true without any physical universe, not realizing that the truth value of the Laws of Logic can only be determined by presupposing the Laws of Logic in the first place, which is circular; thus the truth value of the Laws of Logic can't be used to prove that they are metaphysically transcendent in any way (even if they happen to be).  Lastly, without a physical universe of any kind, I don't see how identities themselves could exist, and identities seem to be required for the LOL to be meaningful at all.