The Open Mind

Cogito Ergo Sum


Irrational Man: An Analysis (Part 1, Chapter 2: “The Encounter with Nothingness”)


In the first post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 1: The Advent of Existentialism, where Barrett gives a brief description of what he believes existentialism to be, and the environment it evolved within.  In this post, I want to explore Part I, Chapter 2: The Encounter with Nothingness.

Ch. 2 – The Encounter With Nothingness

Barrett talks about the critical need for self-analysis, despite the widespread feeling that this task has already been carried out exhaustively.  But Barrett sees this attitude as contemporary society’s way of running away from the facts of our own ignorance.  Modern humankind, it seems to Barrett, is in even greater need of questioning its identity, for we seem to understand ourselves even less than when we first began to ask who we are as a species.

1. The Decline of Religion

Ever since the end of the Middle Ages, religion as a whole has been on the decline.  This decrease in religiosity (particularly in Western civilization) became most prominent during the Enlightenment.  As science began to take off, the mechanistic structure and qualities of the universe (i.e. its laws of nature) began to reveal themselves in more and more detail.  This in turn led to the replacement of a large number of superstitious and supernatural religious beliefs about the causes of various phenomena with scientific explanations that could be empirically tested and verified.  Throughout this process, as theological explanations were increasingly displaced by naturalistic ones, the presumed role of God and the Church began to evaporate.  Thus, at the purely intellectual level, we underwent a significant change in how we viewed the world, and subsequently in how we viewed the nature of human beings and our place in the world.

But, as Barrett points out:

“The waning of religion is a much more concrete and complex fact than a mere change in conscious outlook; it penetrates the deepest strata of man’s total psychic life…Religion to medieval man was not so much a theological system as a solid psychological matrix surrounding the individual’s life from birth to death, sanctifying and enclosing all its ordinary and extraordinary occasions in sacrament and ritual.”

We can see here how the role of religion has changed to some degree from medieval times to the present day.  Rather than simply being a set of beliefs that identified a person with a particular group and that carried soteriological, metaphysical, and ethical significance for the believer (as it largely is in modern times), it used to be a complete system, a total solution for how one was to live one’s life.  It also provided a means of psychological stability and coherence by supplying a ready-made narrative of the essence of man: a sense of familiarity and a pre-defined purpose and structure that didn’t have to be constructed from scratch by the individual.

While the loss of the Church involved losing an entire system of dogmatic teachings, symbols, and various rites and sacraments, the most important loss according to Barrett was the loss of a concrete connection to a transcendent realm of being.  We were now set free such that we had to grapple with the world on our own, with all its precariousness, and to deal head-on with the brute facts of our own existence.

What I find most interesting in this chapter is when Barrett says:

“The rationalism of the medieval philosophers was contained by the mysteries of faith and dogma, which were altogether beyond the grasp of human reason, but were nevertheless powerfully real and meaningful to man as symbols that kept the vital circuit open between reason and emotion, between the rational and non-rational in the human psyche.”

And herein lies the crux of the matter; for Barrett believes that religion’s greatest function historically was its serving as a bridge between the rational and non-rational elements of our psychology, and also its serving as a barrier that limited the effective reach and power of our rationality over the rest of our psyches and our view of the world.  I would go even further to suggest that it may have allowed our emotional expression to more harmoniously co-exist and work with our reason instead of primarily being at odds with it.

I agree with Barrett’s point here in that religion often promulgates ideas and practices that appeal to many of our emotional dispositions and intuitions, thus allowing people to express certain emotional states and to maintain comforting intuitions that might otherwise be hindered or subjugated by reason and rationality.  It has also provided a path for reason to connect to the unreasonable to some degree, minimizing the need to compartmentalize rationality from the emotional or irrational influences on a person’s belief systems.  By granting people an opportunity to combine reason and emotion, where reason could be used to make some sense of emotion and give it some kind of validation without having to be rejected completely, religion has been effective (historically, anyway) in helping people avoid two kinds of discomfort at once: the discomfort of rejecting beliefs they know to be reasonable, and the discomfort of inadequate emotional or non-rational expression.

Once religion began to go by the wayside, due in large part to the knowledge accumulated through reason and scientific progress, it became increasingly difficult to square the evidence and arguments that were becoming more widely known with many of the claims that religion and the Church had been propagating for centuries.  Along with this loss of credibility came an increased need to compartmentalize reason and rationality from emotionally and irrationally derived beliefs and experiences.  This difficulty led to a decline in religiosity for many, accompanied by the loss of the emotional and non-rational expression that religion had once offered the masses.  Once reason and rationality expanded beyond a certain threshold, they effectively popped the religious bubble that had previously contained them, causing many to feel homeless, out of place, and in many ways incomplete in the new world they now found themselves living in.

2. The Rational Ordering of Society

The organization of our lives has historically been a relatively organic process, with a self-guiding, pragmatic, and intuitive structure and evolution.  But as we approached the modern age, as Barrett points out, we saw a drastic shift toward an increasingly rational form of organization, where efficiency and a sort of technical precision began to dominate the overall direction of society and the lives of each individual.  The rise of capitalism was a part of this cultural evolutionary process (as was, I would argue, the Industrial Revolution), and it only further enhanced the power and influence of reason and rationality over our day-to-day lives.

The collectivization and division of labor involved in the mass production of commodities had taken us entirely out of our agrarian and hunter-gatherer roots.  We no longer lived off the land, so to speak, and were no longer ensconced in the natural settings, or performing the day-to-day tasks, that our species had adapted to over its long evolutionary history.  And with this collectivization, we also lost a large component of our individuality; a component that is fairly important in human psychology.

Barrett comments on how we’ve accepted modern society as normal, relatively unaware of our ancestral roots and our previous way of life:

“We are so used to the fact that we forget it or fail to perceive that the man of the present day lives on a level of abstraction altogether beyond the man of the past.”

And he goes on to talk about how our ratcheting forward in technological progress and mechanistic societal restructuring is what gives us our incredible power over our environment, but at the cost of leaving us rootless and without any concrete sense of feeling, at a time when it’s needed more than ever.

Perhaps a more interesting point he makes is with respect to how our increased mechanization and collectivization has changed a fundamental part of how our psyche and identity operate:

“Not only can the material wants of the masses be satisfied to a degree greater than ever before, but technology is fertile enough to generate new wants that it can also satisfy…All of this makes for an extraordinary externalization of life in our time.”

And it is this externalization of our identity and psychology, manifested in ever-growing and ever-changing sets of material objects and information flow, that is interesting to ponder over.  It reminds me somewhat of Richard Dawkins’ concept of an extended phenotype, where the effects of an organism’s genes aren’t merely limited to the organism’s body, but rather they extend into how the organism structures its environment. While this term is technically limited to behaviors that have a direct bearing on the organism’s survival, I prefer to think of this extended phenotype as encompassing everything the organism creates and the totality of its behaviors.

The reason I mention this concept is because I think it makes for a useful analogy here.  For in the earlier evolution of organisms on earth, the genes’ effects or the resulting phenotypes were primarily manifested as the particular body and bodily behavior of the organism, and as organisms became more complex (particularly those that evolved brains and a nervous system), that phenotype began to extend itself into the abilities of an organism to make external structures out of raw materials found in its environment.  And more and more genetic resources were allotted to the organism’s brain which made this capacity for environmental manipulation possible.  As this change occurred, the previous boundaries that defined the organism vanished as the external constructions effectively became an extension of the organism’s body.  But with this new capacity came a loss of intimacy in the sense that the organism wasn’t connected to these external structures in the same way it was connected to its own feelings and internal bodily states; and these external structures also lacked the privacy and hidden qualities inherent in an organism’s thoughts, feelings, and overall subjective experience.

Likewise, as we’ve evolved culturally, eventually gaining the ability to construct and mass-produce a plethora of new material goods, we began to dedicate a larger proportion of our attention to these external objects, wanting more and more of them well past what we needed for survival.  And we began to invest or incorporate more of ourselves, including our knowledge and information, in these externalities, compensating by investing less and less in our internal, private states and locally stored knowledge.  It would be impractical, if not impossible, for an organism to perform increasingly complex behaviors and to continuously increase its ability to manipulate its own environment without this kind of trade-off occurring in terms of its identity and how it distributes its limited psychological resources.

And herein lies the source of our seemingly fractured psyche: the natural selection of curiosity, knowledge accumulation, and behavioral complexity for survival purposes has become co-opted for just about any purpose imaginable, since the hardware and schema required for the former has a capacity that transcends its evolutionary purpose and that transcends the finite boundaries, the guiding constraints, and the essential structure of the lives we once had in our evolutionary past.  Now we’ve moved beyond what used to be a kind of essential quality and highly predictable trajectory of our lives and of our species, and we’ve moved into the unknown; from the realm of the finite and the familiar to the seemingly infinite realm of the unknown.

A big part of this externalization has manifested itself in the new ways we acquire, store, and share information, such as with the advent of mass media.  As Barrett puts it:

“…journalism enables people to deal with life more and more at second hand.  Information usually consists of half-truths, and “knowledgability” becomes a substitute for real knowledge.  Moreover, popular journalism has by now extended its operations into what were previously considered the strongholds of culture-religion, art, philosophy…It becomes more and more difficult to distinguish the secondhand from the real thing, until most people end by forgetting there is such a distinction.”

I think this ties well into what Barrett mentioned previously when he talked about how modern civilization is built on increasing levels of abstraction.  The very information we’re absorbing, in order to make sense of and deal with a large aspect of our contemporary world, is second hand at best.  The information we rely on has become increasingly abstracted, manipulated, reinterpreted, and distorted.  The origin of so much of this information is now at least one level away from our immediate experience, giving it a quality that is disconnected, less important, and far less real than it otherwise would be.  But we often forget that there’s any epistemic difference between our first-hand lived experience and the information that arises from our mass media.

To add to Barrett’s previous description of existentialism as a reaction against positivism, he also mentions Karl Jaspers’ views of existentialism, which he described as:

“…a struggle to awaken in the individual the possibilities of an authentic and genuine life, in the face of the great modern drift toward a standardized mass society.”

Though I concur with Jaspers’ claim that modernity has involved a shift toward a standardized mass society in a number of ways, I also think that it has provided the means for many more ways of being unique, many more possible talents and interests for one to explore, and many more kinds of goals to choose from for one’s life project(s).  Collectivization and the division of labor, along with the technology that has precipitated from them, have allowed many to avoid spending all day hunting and gathering food, making or cleaning their clothing, and performing other tasks that had previously consumed most of one’s time.

Now many people (in the industrialized world at least) have the ability to accumulate enough free time to explore many other types of experiences, including reading and writing, exploring aspects of our existence with highly-focused introspective effort (as in philosophy), creating or enjoying vast quantities of music and other forms of art, listening to and telling stories, playing any number of games, and thousands of other activities.  And even though some of these activities have been around for millennia, many of them have not (or there was little time for them), and of those that have been around the longest, there were still far fewer choices than what we have on offer today.  So we mustn’t forget that many people develop a life-long passion for at least some of these experiences that would never have been made possible without our modern society.

The issue I think lies in the balance or imbalance between standardization and collectivization on the one hand (such that we reap the benefits of more free time and more recreational choices), and opportunities for individualistic expression on the other.  And of the opportunities that exist for individualistic expression, there is still the need to track the psychological consequences that result from them so we can pick more psychologically fulfilling choices; so that we can pick choices that better allow us to keep open that channel between reason and emotion and between the rational and the non-rational/irrational that religion once provided, as Barrett mentioned earlier.

We also have to accept the fact that the findings of science have largely dehumanized or inhibited the anthropomorphization of nature, instead showing us that the universe is indifferent to us and to our goals; that humans and life in general are more of an aberration than anything else within a vast cosmos that is inhospitable to life.  Only after acknowledging the situation we’re in can we fully appreciate the consequences that modernity has had on upending the comforting structure that religion once gave to humans throughout their lives.  As Barrett tells us:

“Science stripped nature of its human forms and presented man with a universe that was neutral, alien, in its vastness and force, to his human purposes.  Religion, before this phase set in, had been a structure that encompassed man’s life, providing him with a system of images and symbols by which he could express his own aspirations toward psychic wholeness.  With the loss of this containing framework man became not only a dispossessed but a fragmentary being.”

Although we can’t unlearn the knowledge that has caused religion to decline, including that which bears on the questions of our ultimate meaning and purpose, we can certainly find new ways of filling the psychological void felt by many as a result of this decline.  The modern world has many potential opportunities for psychologically fulfilling projects in life, and these opportunities need to be more thoroughly explored.  But existentialist thought rightly reminds us of how the fruits of our rational and enlightened philosophy have been less capable of providing as satisfying an answer to the question “What are human beings?” as religion once gave.  Along with the fruits of the Enlightenment came a lack of consolation, and a number of painful truths pertaining to the temporal and contingent nature of our existence, previously thought to possess both an eternal and necessary character.  Overall, this cultural change and accumulation of knowledge effectively forced humanity out of its comfort zone.  Barrett described the situation quite well when he said:

“In the end, he [modern man] sees each man as solitary and unsheltered before his own death.  Admittedly, these are painful truths, but the most basic things are always learned with pain, since our inertia and complacent love of comfort prevent us from learning them until they are forced upon us.”

Modern humanity, then, has become alienated from God, from nature, and from the complex social machinery that produces the goods and services it both wants and needs.  Worse yet, however, is the alienation from one’s own self that has occurred as humans have found themselves living in a society that expects each of them to perform some specific function in life (most often not of their choosing); society effectively identifies each person with this function, forgetting or ignoring the real person buried underneath.

3. Science and Finitude

In this last section of chapter two, Barrett discusses some of the ultimate limitations in our use of reason and in the scope of knowledge we can obtain from our scientific and mathematical methods of inquiry.  Reason itself is described as the product of a creature “whose psychic roots still extend downward into the primeval soil,” and is thus a creation from an animal that is still intimately connected to an irrational foundation; an animal still possessing a set of instincts arising from its evolutionary origins.

We see this core presence of irrationality in our day-to-day lives whenever we have trouble trying to employ reason, such as when it conflicts with our emotions and intuitions.  And Barrett brings up the optimism in the confirmed rationalist and their belief that they may still be able to one day overcome all of these obstacles of irrationality by simply employing reason in a more clever way than before.  Clearly Barrett doesn’t share this optimism of the rationalist and he tries to support his pessimism by pointing out a few limitations of reason as suggested within the work of Immanuel Kant, modern physics, and mathematics.

In Kant’s Critique of Pure Reason, he lays out a substantial number of claims and concepts relating to metaphysics and epistemology, where he discusses the limitations of both reason and the senses.  Among other things, he claims that our reason is limited by certain a priori intuitional forms or predispositions about space and time (for example) that allow us to cognize any thing at all.  And so if there are any “things in themselves”, that is, real objects underlying whatever appears to us in the external world, where these objects have qualities and attributes that are independent of our experience, then we can never know anything of substance about these underlying features.  We can never see an object or understand it in any way without incorporating a large number of intuitional forms and assumptions in order to create that experience at all; we can never see the world without seeing it through the limited lens of our minds and our body’s sensory machinery.

For Kant, this also means that we may often use reason erroneously to make unjustified claims of knowledge pertaining to the transcendent, particularly within metaphysics and theology.  When reason is applied to ideas that can’t be confirmed through sensory experience, or that lie outside our realm of possible experience (such as the idea that cause-and-effect laws govern every interaction in the universe, something we can never know through experience), it leads to knowledge claims that it can’t justify.  Another limitation of reason, according to Kant, is that it operates through an a priori assumption of unification in our experience, and so the categories and concepts that we infer to exist based on reason are limited by this underlying unifying principle.  Science has added to the rigidity and specificity of this unification by going beyond what we’ve unified through unmodified experience, i.e. experience without the use of instruments such as telescopes and microscopes: we see the sun “rise” and “set” every day, while the use of such instruments has given us data showing that it is the earth rotating on its axis rather than the sun revolving around the earth.  Nevertheless, unification is the ultimate goal of reason, whether applied with or without a strict scientific method.

Then within physics, we find another limitation on our possible understanding of the world.  Heisenberg’s Uncertainty Principle showed us that we are unable to know complementary parameters pertaining to a particle with an arbitrarily high level of precision.  We eventually hit a limit where, for example, if we know the position of a particle (such as an electron) with a high degree of accuracy at some particular time, then we’re unable to know the momentum of that particle with the same accuracy.  The more we know about one complementary parameter, the less we know about the other.  Barrett describes this discovery in physics as showing that nature may be irrational and chaotic at its most fundamental level, and then he says:

“What is remarkable is that here, at the very farthest reaches of precise experimentation, in the most rigorous of the natural sciences, the ordinary and banal fact of our human limitations emerges.”

This is indeed interesting because for a long while many rationalists and positivists held that our potential knowledge was unlimited (or finite, yet complete) and that science was the means to gain this open-ended or complete knowledge of the universe.  Then quantum physics delivered a major blow to this assumption, showing us instead that our knowledge is inherently limited based on how the universe is causally structured.  However, it’s worth pointing out that this is not a human limitation per se, but a limitation of any conscious system acquiring knowledge in our universe.  It wouldn’t matter if humans had different brains, better technology, or used artificial intelligence to perform the computations or measurements.  There is simply a limit on what can ever be measured; a limit on what can ever be known about the state of any system that is being studied.
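The trade-off described above has a simple quantitative form: the product of the position spread and the momentum spread can never fall below ħ/2, for any observer or instrument whatsoever.  A minimal numerical sketch (the helper function and the electron example are my own illustration, not Barrett’s):

```python
# Heisenberg's uncertainty relation: sigma_x * sigma_p >= hbar / 2.
# The helper below is a hypothetical illustration of the trade-off.
HBAR = 1.054571817e-34  # reduced Planck constant, in joule-seconds

def min_momentum_spread(sigma_x):
    """Smallest momentum spread (kg*m/s) compatible with knowing a
    particle's position to within sigma_x meters, for ANY system."""
    return HBAR / (2 * sigma_x)

# Localizing an electron to within one angstrom (1e-10 m, roughly an
# atom's width) forces a momentum uncertainty of at least ~5.3e-25 kg*m/s:
floor = min_momentum_spread(1e-10)

# Halving the position uncertainty doubles the momentum floor:
tighter = min_momentum_spread(0.5e-10)
```

The point is that this floor is set by the structure of quantum mechanics itself, not by the quality of the measuring apparatus, which is why no better technology can get underneath it.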

Gödel’s incompleteness theorems likewise showed that there are inherent limitations in any sufficiently rich formulation of number theory: no matter how large or seemingly complete its set of axioms, if it is consistent, then there will be statements that can’t be proven or disproven within that formulation.  Likewise, the consistency of any such formulation of number theory can’t be proven by the formulation itself; rather, it depends on assumptions that lie outside of it.  However, once again, contrary to what Barrett claims, I don’t think this should be taken as a human limitation of reason, but rather as a limitation of mathematics generally.
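For reference, the two results just described can be stated a bit more carefully (this is the standard textbook formulation, not Barrett’s wording):

```latex
% Gödel's incompleteness theorems, for a theory F that is consistent,
% effectively axiomatized, and strong enough to interpret elementary
% arithmetic:
\begin{itemize}
  \item \textbf{First theorem:} there is a sentence $G_F$ in the
        language of $F$ such that $F \nvdash G_F$ and
        $F \nvdash \neg G_F$ (i.e. $G_F$ is undecidable in $F$).
  \item \textbf{Second theorem:} $F \nvdash \mathrm{Con}(F)$, i.e.\ $F$
        cannot prove the arithmetized statement of its own consistency.
\end{itemize}
```

Note that both results are conditional on consistency and on the system being effectively axiomatized; adding more axioms never escapes the theorems, since the enlarged system satisfies the same hypotheses.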

Regardless, I concede Barrett’s overarching point that, in all of these cases (Kant, physics, and mathematics), we still run into scenarios where we are unable to do or know something that we may have previously thought we could, at least in principle.  And so these discoveries ran counter to the beliefs of many who thought that humans were inherently unstoppable in these endeavors of knowledge accumulation and in our ability to create technology capable of solving any problem whatsoever.  We can’t do this.  We are finite creatures with finite brains; but perhaps more importantly, our capacities are also limited by what knowledge is theoretically available to any conscious system trying to better understand the universe.  The universe is inherently unknowable in at least some respects, which means it is unpredictable in at least some respects.  And unpredictability doesn’t sit well with humans intent on eliminating uncertainty, eliminating the unknown, and maintaining a coherent narrative to explain everything encountered throughout one’s existence.

In the next post in this series, I’ll explore Irrational Man, Part 1, Chapter 3: The Testimony of Modern Art.


The Experientiality of Matter


If there’s one thing that Nietzsche advocated for, it was to eliminate dogmatism and absolutism from one’s thinking.  And although I generally agree with him here, I do think that there is one exception to this rule.  One thing that we can be absolutely certain of is our own conscious experience.  Nietzsche actually had something to say about this (in Beyond Good and Evil, which I explored in a previous post), where he responds to Descartes’ famous adage “cogito ergo sum” (the very adage associated with my blog!), and he basically says that Descartes’ conclusion (“I think therefore I am”) shows a lack of reflection concerning the meaning of “I think”.  He wonders how he (and by extension, Descartes) could possibly know for sure that he was doing the thinking rather than the thought doing the thinking, and he even considers the possibility that what is generally described as thinking may actually be something more like willing or feeling or something else entirely.

Despite Nietzsche’s criticism against Descartes in terms of what thought is exactly or who or what actually does the thinking, we still can’t deny that there is thought.  Perhaps if we replace “I think therefore I am” with something more like “I am conscious, therefore (my) conscious experience exists”, then we can retain some core of Descartes’ insight while throwing out the ambiguities or uncertainties associated with how exactly one interprets that conscious experience.

So I’m in agreement with Nietzsche in the sense that we can’t be certain of any particular interpretation of our conscious experience, including whether or not there is an ego or a self (which Nietzsche actually describes as a childish superstition similar to the idea of a soul), nor can we make any certain inferences about the world’s causal relations stemming from that conscious experience.  Regardless of these limitations, we can still be sure that conscious experience exists, even if it can’t be ascribed to an “I” or a “self” or any particular identity (let alone a persistent identity).

Once we’re cognizant of this certainty, and if we’re able to crawl out of the well of solipsism and eventually build up a theory about reality (for pragmatic reasons at the very least), then we must remain aware of the priority of consciousness in any resultant theory we construct about reality, with regard to its structure or any of its other properties.  Personally, I believe that some form of naturalistic physicalism (a realistic physicalism) is the best candidate for an ontological theory that is the most parsimonious and explanatory for all that we experience in our reality.  However, most people who make the move to adopt some brand of physicalism seem to throw the baby out with the bathwater (so to speak), whereby consciousness gets eliminated by assuming it’s an illusion or that it’s not physical (leaving no room for it in a physicalist theory, aside from its neurophysiological attributes).

Although I used to feel differently about consciousness (and its relationship to physicalism), where I thought it was plausible for it to be some kind of an illusion, upon further reflection I’ve come to realize that this was a rather ridiculous position to hold.  Consciousness can’t be an illusion in the proper sense of the word, because the experience of consciousness is real.  Even if I’m hallucinating, where my perceptions don’t directly correspond with the actual incoming sensory information transduced through my body’s sensory receptors, then we can only say that the perceptions are illusory insofar as they don’t directly map onto that incoming sensory information.  But I still can’t say that having these experiences is itself an illusion.  And this is because consciousness is self-evident and experiential in that it constitutes whatever is experienced, no matter what that experience consists of.

As for my thoughts on physicalism, I came to realize that positing consciousness as an intrinsic property of at least some kinds of physical material (analogous to a property like mass) allows us to avoid having to call consciousness non-physical.  If it is simply an experiential property of matter, that doesn’t negate its being a physical property of that matter.  It may be that we can’t access this property in such a way as to evaluate it with external instrumentation, like we can for all the other properties of matter that we know of such as mass, charge, spin, or what-have-you, but that doesn’t mean an experiential property should be off limits for any physicalist theory.  It’s just that most physicalists assume that everything can or has to be reducible to the externally accessible properties that our instrumentation can measure.  And this suggests that they’ve simply assumed that the physical can only include externally accessible properties of matter, rather than both internally and externally accessible properties of matter.

Now it’s easy to see why science might push philosophy in this direction because its methodology is largely grounded on third-party verification and a form of objectivity involving the ability to accurately quantify everything about a phenomenon with little or no regard for introspection or subjectivity.  And I think that this has caused many a philosopher to paint themselves into a corner by assuming that any ontological theory underlying the totality of our reality must be constrained in the same way that the physical sciences are.  To see why this is an unwarranted assumption, let’s consider a “black box” that can only be evaluated by a certain method externally.  It would be fallacious to conclude that just because we are unable to access the inside of the box, that the box must therefore be empty inside or that there can’t be anything substantially different inside the box compared to what is outside the box.

We can analogize this limitation on studying consciousness with our ability to study black holes in astrophysics, where we've come to realize that accessing any information about their interior (aside from global properties like mass, charge, and angular momentum) is impossible from the outside.  And if we managed to access this information (if there is any) from the inside by crossing the event horizon, it would be impossible for us to escape and share any of it.  The best we can do is learn what we can from the outside behavior of the black hole, its interaction with surrounding matter and light, and infer something about the inside, like how much matter it contains (e.g. we can infer the mass of a black hole from the surface area of its horizon).  And we can learn a little bit more by considering what is needed to create or destroy a black hole, thus creating or destroying any interior qualities that may or may not exist.

A black hole can only form from certain configurations of matter, particularly aggregates above a certain mass and density.  And it can only be destroyed by starving it: deprived of any new matter, it slowly evaporates entirely into Hawking radiation, destroying anything that was on the inside in the process.  So we can infer that any internal qualities it does have, however inaccessible they may be, can be brought into and out of existence by certain physical processes.
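To make the mass-from-area inference concrete, here is a small numerical sketch using the standard Schwarzschild relations for a non-rotating, uncharged black hole (the code and sample values are purely illustrative, my own sketch rather than anything from Barrett or the physics literature beyond the textbook formulas):

```python
import math

G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8        # speed of light (m/s)
hbar = 1.055e-34   # reduced Planck constant (J s)
k_B = 1.381e-23    # Boltzmann constant (J/K)

def schwarzschild_radius(mass):
    """Event-horizon radius of a non-rotating black hole: r_s = 2GM/c^2."""
    return 2 * G * mass / c**2

def horizon_area(mass):
    """Surface area of the event horizon: A = 4 * pi * r_s^2."""
    return 4 * math.pi * schwarzschild_radius(mass)**2

def mass_from_area(area):
    """Invert the area relation: the externally measurable horizon area
    fixes the mass, which is the inference described in the text."""
    r = math.sqrt(area / (4 * math.pi))
    return r * c**2 / (2 * G)

def hawking_temperature(mass):
    """Temperature of the Hawking radiation by which an isolated,
    starved black hole slowly evaporates."""
    return hbar * c**3 / (8 * math.pi * G * k_B * mass)

# A solar-mass hole: its horizon area alone recovers its mass.
M_sun = 1.989e30
recovered = mass_from_area(horizon_area(M_sun))
```

Note how the Hawking temperature for a solar-mass hole comes out absurdly tiny (tens of nanokelvin), which is why such evaporation, and the destruction of any interior qualities, would take an unimaginably long time.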

Similarly, we can infer some things about consciousness by observing one's external behavior, including inferring some conditions that can create, modify, or destroy that type of consciousness, but we are unable to know what it's like to be on the inside of that system once it exists.  We're only able to know the inside of our own conscious system; we are, in a sense, inside our own black hole, with nobody else able to access this perspective.  And I think it is easy enough to imagine that certain configurations of matter simply have an intrinsic, externally inaccessible experiential property, just as certain configurations of matter lead to the creation of a black hole with its own externally inaccessible and qualitatively unknown internal properties.  The fact that we can't determine the black hole's internal properties by strictly external methods doesn't mean we should assume that whatever properties may exist inside it are fundamentally non-physical, just as we wouldn't consider alternate dimensions (such as those predicted in M-theory/string theory) that we can't physically access to be non-physical.  Perhaps one or more of these inaccessible dimensions (if they exist) accounts for an intrinsic experiential property within matter (though this is entirely speculative and need not be true for the previous points to hold, it's an interesting thought nevertheless).

Here’s a relevant quote from the philosopher Galen Strawson, where he outlines what physicalism actually entails:

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an ‘inference to the best explanation’… Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. ‘And were the inmost essence of things laid open to us’ I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism… So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of ‘substance dualism’… Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?
While I don’t believe that all matter has a mind per se, because a mind is generally conceived as being a complex information processing structure, I think it is likely that all matter has an experiential quality of some kind, even if most instantiations of it are entirely unrecognizable as what we’d generally consider to be “consciousness” or “mentality”.  I believe that the intuitive gap here is born from the fact that the only minds we are confident exist are those instantiated by a brain, which has the ability to make an incredibly large number of experiential discriminations between various causal relations, thus giving it a capacity that is incredibly complex and not observed anywhere else in nature.  On the other hand, a particle of matter on its own would be hypothesized to have the capacity to make only one or two of these kinds of discriminations, making it unintelligent and thus incapable of brain-like activity.  Once a person accepts that an experiential quality can come in varying degrees from say one experiential “bit” to billions or trillions of “bits” (or more), then we can plausibly see how matter could give rise to systems that have a vast range in their causal power, from rocks that don’t appear to do much at all, to living organisms that have the ability to store countless representations of causal relations (memory) allowing them to behave in increasingly complex ways.  And perhaps best of all, this approach solves the mind-body problem by eliminating the mystery of how fundamentally non-experiential stuff could possibly give rise to experientiality and consciousness.

It’s Time For Some Philosophical Investigations


Ludwig Wittgenstein’s Philosophical Investigations is a nice piece of work where he attempts to explain his views on language and the consequences of this view on various subjects like logic, semantics, cognition, and psychology.  I’ve mentioned some of his views very briefly in a couple of earlier posts, but I wanted to delve into his work in a little more depth here and comment on what strikes me as most interesting.  Lately, I’ve been looking back at some of the books I’ve read from various philosophers and have been wanting to revisit them so I can explore them in more detail and share how they connect to some of my own thoughts.  Alright…let’s begin.

Language, Meaning, & Their Probabilistic Attributes

He opens his Philosophical Investigations with a quote from St. Augustine's Confessions that describes how a person learns a language.  St. Augustine believed that this process involved simply learning the names of objects (for example, by someone else pointing to the objects so named) and then stringing them together into sentences, and Wittgenstein points out that while this is true to some trivial degree, it overlooks a much more fundamental relationship between language and the world.  For Wittgenstein, the meaning of a word cannot simply be attached to an object the way a name can.  The meaning of a word or concept has a much fuzzier boundary, as it depends on a breadth of context and associations with other concepts.  He analogizes this plurality in the meanings of words with the resemblance between members of a family: while there may be some resemblance between different uses of a word or concept, we aren't able to formulate a strict definition that fully describes it.

One problem, then, especially within philosophy, is that many people assume that the meaning of a word or concept is fixed, with sharp boundaries (just like the fixed structure of words themselves).  Wittgenstein wants to disabuse people of this false notion (much as Nietzsche tried to do before him) so that they can stop misusing language, as he believed that this misuse was the cause of many (if not all) of the major problems that had cropped up in philosophy over the centuries, particularly in metaphysics.  Since meaning is actually somewhat fluid and can't be accounted for by any fixed structure, Wittgenstein thinks that any meaning we can attach to these words is ultimately determined by how those words are used.  Since he ties meaning to use, and since this use occurs in our social forms of life, it has an inextricably external character.  Thus, the only way to determine whether someone else has a particular understanding of a word or concept is through their behavior in response to, or in association with, the use of the word(s) in question.  This is especially important in the case of ambiguous sentences, which Wittgenstein explores to some degree.

Probabilistic Shared Understanding

Some of what Wittgenstein is trying to point out here is what I like to refer to as the inherently probabilistic attributes of language.  And language seems to me to be probabilistic for a few different reasons, beyond what Wittgenstein seems to address.  First, there is no guarantee that what one person means by a word or concept exactly matches the meaning from another person's point of view, but at the very least there is almost always going to be some amount of semantic overlap (possibly 100% in some cases) between the two individuals' intended meanings, and so there is some probability that the speaker and the listener do in fact share a complete understanding.  It seems reasonable to argue that simpler concepts will have a higher probability of complete semantic overlap, whereas more complex concepts are more likely to leave a gap in that shared understanding.  And I think this is true even if we can't actually calculate what any of these probabilities are.
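Purely as an illustration (a toy model of my own, not anything from Wittgenstein, with made-up feature sets), we can make this notion of partial semantic overlap concrete by representing each speaker's understanding of a concept as a set of associated features and measuring their overlap:

```python
def semantic_overlap(features_a, features_b):
    """Jaccard similarity between two speakers' feature sets for a concept:
    1.0 means identical understandings, 0.0 means nothing shared."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b)

# A simple concept may overlap completely between two speakers...
simple = semantic_overlap({"four", "quantity", "number"},
                          {"four", "quantity", "number"})

# ...while a complex concept like "love" leaves a gap in shared understanding.
complex_gap = semantic_overlap({"bond", "care", "trust", "sacrifice"},
                               {"bond", "care", "desire", "longing"})
```

Here `simple` comes out as 1.0 and `complex_gap` as 1/3, mirroring the claim that simpler concepts are more likely to be understood identically, though of course real understandings aren't literally discrete feature sets.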

Now my use of the word meaning here differs from Wittgenstein’s because I am referring to something that is not exclusively shared by all parties involved and I am pointing to something that is internal (a subjective understanding of a word) rather than external (the shared use of a word).  But I think this move is necessary if we are to capture all of the attributes that people explicitly or implicitly refer to with a concept like meaning.  It seems better to compromise with Wittgenstein’s thinking and refer to the meaning of a word as a form of understanding that is intimately connected with its use, but which involves elements that are not exclusively external.

We can justify this version of meaning through an example.  If I help teach you how to ride a bike and explain that this activity is called biking or to bike, then we can use Wittgenstein’s conception of meaning and it will likely account for our shared understanding, and so I have no qualms about that, and I’d agree with Wittgenstein that this is perhaps the most important function of language.  But it may be the case that you have an understanding that biking is an activity that can only happen on Tuesdays, because that happened to be the day that I helped teach you how to ride a bike.  Though I never intended for you to understand biking in this way, there was no immediate way for me to infer that you had this misunderstanding on the day I was teaching you this.  I could only learn of this fact if you explicitly explained to me your understanding of the term with enough detail, or if I had asked you additional questions like whether or not you’d like to bike on a Wednesday (for example), with you answering “I don’t know how to do that as that doesn’t make any sense to me.”  Wittgenstein doesn’t account for this gap in understanding in his conception of meaning and I think this makes for a far less useful conception.

Now I think that Wittgenstein is still right in the sense that the only way to determine someone else’s understanding (or lack thereof) of a word is through their behavior, but due to chance as well as where our attention is directed at any given moment, we may never see the right kinds of behavior to rule out any number of possible misunderstandings, and so we’re apt to just assume that these misunderstandings don’t exist because language hides them to varying degrees.  But they can and in some cases do exist, and this is why I prefer a conception of meaning that takes these misunderstandings into account.  So I think it’s useful to see language as probabilistic in the sense that there is some probability of a complete shared understanding underlying the use of a word, and thus there is a converse probability of some degree of misunderstanding.

Language & Meaning as Probabilistic Associations Between Causal Relations

A second way in which language is probabilistic stems from the fact that the unique meanings associated with any particular use of a word or concept, as understood by an individual, are derived from probabilistic associations between various inferred causal relations.  And I believe that this is the underlying cause of most of the problems Wittgenstein was trying to address in this book.  He may not have been thinking about the problem in this way, but it can account for the fuzzy-boundary problem associated with trying to define the meaning of words, since this probabilistic structure underlies our experiences, our understanding of the world, and our use of language, such that it can't be represented by static, sharp definitions.  When a person is learning a concept, like redness, they experience a number of observations, infer what is common to all of those experiences, and then extract a particular subset of what is common and associate it with a word like red or redness (as opposed to another commonality like objectness, or roundness, or what-have-you).  But in most cases, separate experiences of redness will be different instantiations of redness with different hues, textures, shapes, etc., which means that redness gets associated with a large range of different qualia.

If you come across a new quale that seems to match previous experiences associated with redness more closely than orangeness (for example), I would argue that this is because the brain has assigned a higher probability to that quale being an instance of redness as opposed to, say, orangeness.  And the brain may very well test the hypothesis of the quale matching the concept of redness against the concept of orangeness; depending on your previous experiences of both, and the present context of the experience, your brain may assign a higher probability to orangeness instead.  Perhaps if a red-orange colored object is mixed in with a bunch of unambiguously orange-colored objects, it will be perceived as a shade of orange (to match it with the rest of the set), but if the case were reversed and it were mixed in with a bunch of unambiguously red-colored objects, it would be perceived as a shade of red instead.
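This kind of hypothesis testing can be sketched as a simple Bayesian update (again a toy model of my own, with made-up numbers; nothing here is a claim about actual neural mechanisms), where the surrounding objects set a prior and the patch itself supplies the likelihoods:

```python
def posterior_red(likelihood_red, likelihood_orange, prior_red):
    """Posterior probability that an ambiguous color patch is 'red', given
    how well it matches each concept (likelihoods) and a context-driven prior."""
    prior_orange = 1 - prior_red
    evidence = likelihood_red * prior_red + likelihood_orange * prior_orange
    return likelihood_red * prior_red / evidence

# The same red-orange patch matches both concepts about equally well...
like_red, like_orange = 0.5, 0.5

# ...so context decides: among mostly red objects the prior favors red,
among_reds = posterior_red(like_red, like_orange, prior_red=0.8)
# ...while among mostly orange objects the prior favors orange.
among_oranges = posterior_red(like_red, like_orange, prior_red=0.2)
```

With equal likelihoods, the posterior simply inherits the prior (0.8 versus 0.2 here), which is the point of the example: identical sensory input, different perceived category, purely because of context.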

Since our perception of the world depends on context, the meanings we assign to words or concepts also depend on context, and not only in the sense of choosing a different use of a word (as Wittgenstein argues) in some language game or other, but also by perceiving the same incoming sensory information as conceptually different qualia (as in the aforementioned case of a red-orange colored object).  In that case, we weren't intentionally using red or orange in a different way, but rather were assigning one word or the other to the exact same sensory information (with respect to the red-orange object), depending on what else was happening in the scene surrounding that subset of sensory information.  To me, this highlights how meaning can be fluid in multiple ways, some of that fluidity stemming from our conscious intentions and some of it from unintentional forces at play involving our prior expectations within some context, which directly modify our perceived experience.

This can also be seen through Wittgenstein’s example of what he calls a duckrabbit, an ambiguous image that can be perceived as a duck or a rabbit.  I’ve taken the liberty of inserting this image here along with a copy of it which has been rotated in order to more strongly invoke the perception of a rabbit.  The first image no doubt looks more like a duck and the second image, more like a rabbit.

Now Wittgenstein says that when one is looking at the duckrabbit and sees a rabbit, they aren't interpreting the picture as a rabbit but are simply reporting what they see.  But in the case where a person sees a duck first and later sees a rabbit, Wittgenstein isn't sure what to make of this, though he claims to be sure that, whatever it is, it can't be the case that the external world stays the same while an internal cognitive change takes place.  Wittgenstein was incorrect on this point, because the external world doesn't change (in any relevant sense) whether we see the duck or the rabbit.  Furthermore, he never demonstrates why two different perceptions would require a change in the external world.  The fact of the matter is, you can stare at this static picture and ask yourself to see a duck or to see a rabbit, and it will affect your perception accordingly.  This is partially accomplished by mentally rotating the image in your imagination and seeing whether that changes how well it matches one conception or the other, and since it matches both conceptions to a high degree, you can easily perceive it either way.  Your brain is simply processing competing hypotheses to account for the incoming sensory information, and the top-down predictions of rabbitness or duckness (which you've acquired over past experiences) actually change the way you perceive it, with no change required in the external world (despite Wittgenstein's assertion to the contrary).

To give yet another illustration of the probabilistic nature of language, just imagine the head of a bald man and ask yourself: if you were to add one hair at a time to this bald man's head, at what point does he lose the property of baldness?  If hairs were slowly added at random, and you could simply say "Stop!  Now he's no longer bald!" at some particular time, there's no doubt in my mind that if this procedure were repeated (even if the hairs were added non-randomly), you would say "Stop!  Now he's no longer bald!" at a different point in the transition.  Similarly, if you were watching a light change color from red to orange, and were asked to say when the color had become orange, you would pick a point in the transition that falls within some margin of error, but it wouldn't be an exact, repeatable point.  We could do this thought experiment with all sorts of concepts that are attached to words, like cat and dog: for example, use a computer graphics program to seamlessly morph a picture of a cat into a picture of a dog and ask at what point the cat "turned into" a dog.  The answer is going to be based on a probability of coincident features that you detect, which can vary over time.  Here's a series of pictures showing a chimpanzee morphing into Bill Clinton to better illustrate this point:

At what point do we stop seeing a picture of a chimpanzee and start seeing a picture of something else?  When do we first see Bill Clinton?  What if I expanded this series of 15 images into a series of 1000 images so that this transition happened even more gradually?  It would be highly unlikely to pick the exact same point in the transition two times in a row if the images weren’t numbered or arranged in a grid.  We can analogize this phenomenon with an ongoing problem in science, known as the species problem.  This problem can be described as the inherent difficulty of defining exactly what a species is, which is necessary if one wants to determine if and when one species evolves into another.  This problem occurs because the evolutionary processes giving rise to new species are relatively slow and continuous whereas sorting those organisms into sharply defined categories involves the elimination of that generational continuity and replacing it with discrete steps.

And we can see this effect in the series of images above, where each picture could represent some large number of generations in an evolutionary timeline, with each picture/organism looking roughly like the "parent" or "child" of the picture/organism adjacent to it.  Despite this continuity, if we look at the first picture and the last one, they look like pictures of distinct species.  So if we want to categorize the first and last pictures as distinct species, we create a problem when trying to account for every picture/organism that lies within that transition.  Similarly, words take on an appearance of strict categorization (of meaning) when in actuality any underlying meaning attached to them is probabilistic and dynamic.  And as Wittgenstein pointed out, this makes it more appropriate to consider meaning as use, so that the probabilistic and dynamic attributes of meaning aren't lost.
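The non-repeatability of these "Stop!" judgments can itself be simulated with a toy model (again my own sketch, not a claim about real psychophysics): a judge watches a gradual morph and reports the transition once the stimulus crosses a noisy internal criterion, so repeated runs over the very same morph pick different transition points.

```python
import random

def report_transition(n_steps, threshold=0.5, noise=0.05, seed=None):
    """Simulate a judge watching a gradual morph: at each step the stimulus
    moves a bit further toward the second category, and the judge says 'stop'
    the first time it exceeds a criterion that jitters from moment to moment."""
    rng = random.Random(seed)
    for step in range(n_steps):
        evidence = step / (n_steps - 1)              # 0.0 -> 1.0 across the morph
        criterion = threshold + rng.gauss(0, noise)  # noisy category boundary
        if evidence > criterion:
            return step
    return n_steps - 1

# Ten runs over the same 1000-step morph: the reported transition point varies.
stops = {report_transition(1000, seed=s) for s in range(10)}
```

The judged boundary clusters around the middle of the morph but lands on a different step each run, which is exactly the behavior described above for baldness, the red-to-orange light, and the cat-to-dog morph.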

Now you may think you can get around this problem of fluidity or fuzzy boundaries with concepts that are simpler and more abstract, like the concept of a particular quantity (say, a quantity of four objects) or other concepts in mathematics.  But in order to learn these concepts in the first place, like quantity, and then associate particular instances of it with a word, like four, one had to be presented with a number of experiences and infer what was common to all of those experiences (as was the case with redness mentioned earlier).  And this inference (I would argue) involves a probabilistic process as well, it’s just that the resulting probability of our inferring particular experiences as an instance of four objects is incredibly high and therefore repeatable and relatively unambiguous.  Therefore that kind of inference is likely to be sustained no matter what the context, and it is likely to be shared by two individuals with 100% semantic overlap (i.e. it’s almost certain that what I mean by four is exactly what you mean by four even though this is almost certainly not the case for a concept like love or consciousness).  This makes mathematical concepts qualitatively different from other concepts (especially those that are more complex or that more closely map on to reality), but it doesn’t negate their having a probabilistic attribute or foundation.

Looking at the Big Picture

Though this discussion of language and meaning is not an exhaustive analysis of Wittgenstein's Philosophical Investigations, it covers the main theme present throughout.  His main point was to shed light on the disparity between how we often think of language and how we actually use it.  When we stray from the way language is actually used in our everyday lives, in one form of social life or another, and instead misuse it, as often happens in philosophy, this creates all sorts of problems and unwarranted conclusions.  He also wants his readers to realize that the ultimate goal of philosophy should not be to construct metaphysical theories and deep explanations underlying everyday phenomena, since these are often born out of unwarranted generalizations and other assumptions stemming from how our grammar is structured.  Instead we ought to subdue these temptations to generalize and to be dogmatic, and use philosophy as a kind of therapeutic tool to keep our abstract thinking in check and to better understand ourselves and the world we live in.

Although I disagree with some of Wittgenstein’s claims about cognition (in terms of how intimately it is connected to the external world) and take some issue with his arguably less useful conception of meaning, he makes a lot of sense overall.  Wittgenstein was clearly hitting upon a real difference between the way actual causal relations in our experience are structured and how those relations are represented in language.  Personally, I think that work within philosophy is moving in the right direction if the contributions made therein lead us to make more successful predictions about the causal structure of the world.  And I believe this to be so even if this progress includes generalizations that may not be exactly right.  As long as we can structure our thinking to make more successful predictions, then we’re moving forward as far as I’m concerned.  In any case, I liked the book overall and thought that the interlocutory style gave the reader a nice break from the typical form seen in philosophical argumentation.  I highly recommend it!

Some Thoughts on “Fear & Trembling”


I’ve been meaning to write this post for quite some time, but haven’t had the opportunity until now, so here it goes.  I want to explore some of Kierkegaard’s philosophical claims or themes in his book Fear and Trembling.  Kierkegaard regarded himself as a Christian and so there are a lot of literary themes revolving around faith and the religious life, but he also centers a lot of his philosophy around the subjective individual, all of which I’d like to look at in more detail.

In Fear and Trembling, we hear about the story found in Genesis (Ch. 22) where Abraham attempts to sacrifice his own beloved son Isaac after hearing God command him to do so.  For those unfamiliar with this biblical story: after they journey out to Mount Moriah, Abraham binds Isaac to an altar, and as he draws his knife to slit the throat of his beloved son, an angel appears just in time and tells him to stop, "for now I know that you fear God".  Then Abraham sees a ram nearby and sacrifices it instead of his son.  Kierkegaard uses this story in various ways, and considers four alternative versions of it (and their consequences), to explicate the concept of faith as admirable though fundamentally incomprehensible and unintelligible.

He begins his book by telling us about a man who has deeply admired this story of Abraham ever since he first heard it as a child, with this admiration for it growing stronger over time while understanding the story less and less.  The man considers four alternative versions of the story to try and better understand Abraham and how he did what he did, but never manages to obtain this understanding.

I'd like to point out here that an increase in confusion would be expected if the man underwent moral and intellectual growth during his journey from childhood to adulthood.  We tend to be more impulsive, irrational, and passionate as children, with less regard for any ethical framework to live by.  And sure enough, Kierkegaard even mentions the importance of passion in making a leap of faith.  Nevertheless, as we mature and accumulate life experience, we tend to develop some control over our passions and emotions, we build up our intellect and rationality, and we further develop an ethic, with many ethical behaviors becoming habituated if cultivated over time.  If a person cultivates moral virtues like compassion, honesty, and reasonableness, it would be expected that they'd find Abraham's intended act of murder (filicide, no less) repugnant.  But regardless of the reasons for the man's lack of understanding, he admires the story more and more, likely because it reveres Abraham as the father of faith and portrays faith itself as a most honorable virtue.

Kierkegaard’s main point in Fear and Trembling is that one has to suspend their relation to the ethical (contrary to Kant and Hegel), in order to make any leap of faith, and that there’s no rational decision making process involved.  And so it seems clear that Kierkegaard knows that what Abraham did in this story was entirely unethical (attempting to kill an innocent child) in at least one sense of the word ethical, but he believes nevertheless that this doesn’t matter.

To see where he’s coming from, we need to understand Kierkegaard’s idea that there are basically three ways or stages of living, namely the aesthetic, the ethical, and the religious.  The aesthetic life is that of sensuous or felt experience, infinite potentiality through imagination, hiddenness or privacy, and an overarching egotism focused on the individual.  The ethical life supersedes or transcends this aesthetic way of life by relating one to “the universal”, that is, to the common good of all people, to social contracts, and to the betterment of others over oneself.  The ethical life, according to Kierkegaard, also consists of public disclosure or transparency.  Finally, the religious life supersedes the ethical (and thus also supersedes the aesthetic) but shares some characteristics of both the aesthetic and the ethical.

The religious, like the aesthetic, operates on the level of the individual, but with the added component of the individual having a direct relation to God.  And just like the ethical, the religious appeals to a conception of good and evil behavior, but God is the arbiter in this way of life rather than human beings or their nature.  Thus the sphere of ethics that Abraham might normally commit himself to in other cases is thought to be superseded by the religious sphere, the sphere of faith.  Within this sphere of faith, Abraham assumes that anything God commands is his absolute duty to uphold, and he also has faith that this will lead to the best ends.  This, of course, is a form of divine command theory, which is actually an ethical and meta-ethical theory.  Although Kierkegaard claims that the religious is somehow above the ethical, it is for the most part just another way of living that involves another ethical principle.  In this case, the ethical principle is to do whatever God commands (even if these commands are inconsistent or morally repugnant from a human perspective), rather than abiding by our moral conscience or some other set of moral rules, social mores, or standards based on human judgment, human nature, etc.

It appears that the primary distinction between the ethical and the religious is the leap of faith that is made in the latter stage of living which involves an act performed “in virtue of the absurd”.  For example, Abraham’s faith in God was really a faith that God wouldn’t actually make him kill his son Isaac.  Had Abraham been lacking in this particular faith, Kierkegaard seems to argue that Abraham’s conscience and moral perspective (which includes “the universal”) would never have allowed him to do what he did.  Thus, Abraham’s faith, according to Kierkegaard, allowed him to (at least temporarily) suspend the ethical in virtue of the absurd notion that somehow the ethical would be maintained in the end.  In other words, Abraham thought that he could obey God’s command, even if this command was prima facie immoral, because he had faith that God wouldn’t actually make Abraham perform an unethical act.

I find it interesting that this particular function or instantiation of faith, as outlined by Kierkegaard, makes for an unusual interpretation of divine command theory.  If divine command theory attempts to define good or moral behavior as that which God commands, and if a leap of faith (such as that which Abraham took) can involve a belief that the end result of an unconscionable commandment is actually its negation or retraction, then a leap of faith such as that taken by Abraham would serve to contradict divine command theory to at least some degree.  It would seem that Kierkegaard wants to believe in the basic premise of divine command theory and therefore have an absolute duty to obey whatever God commands, and yet he also wants to believe that if this command goes against a human moral system or the human conscience, it will not end up doing so when one goes to carry out what has actually been commanded of them.  This seems to me to be an unusual pair of beliefs for one to hold simultaneously, for divine command theory allows for Abraham to have actually carried out the murder of his son (with no angel stopping him at the last second), and this heinous act would have been considered a moral one under such an awful theory.  And yet, Abraham had faith that this divine command would somehow be nullified and therefore reconciled with his own conscience and relation to the universal.

Kierkegaard has something to say about beliefs, and how they differ from faith-driven dispositions, and it’s worth noting this since most of us use the term “belief” as including that which one has faith in.  For Kierkegaard, belief implies that one is assured of its truth in some way, whereas faith requires one to accept the possibility that what they have faith in could be proven wrong.  Thus, it wasn’t enough for Abraham to believe in an absolute duty to obey whatever God commanded of him, because that would have simply been a case of obedience, and not faith.  Instead, Abraham also had to have faith that God would let Abraham spare his son Isaac, while accepting the possibility that he may be proven wrong and end up having to kill his son after all.  As such, Kierkegaard wouldn’t accept the way the term “faith” is often used in modern religious parlance.  Religious practitioners often say that they have faith in something and yet “know it to be true”, “know it for certain”, “know it will happen”, etc.  But if Abraham truly believed (let alone knew for certain) that God wouldn’t make him kill Isaac, then God’s command wouldn’t have served as any true test of faith.  So while Abraham may have believed that he had to kill his son, he also had faith that his son wouldn’t die, hence making a leap of faith in virtue of the absurd.

This distinction between belief and faith also seems to highlight Kierkegaard’s belief in some kind of prophetic consequentialist ethical framework.  Whereas most Christians tend to side with a Kantian deontological ethical system, Kierkegaard points out that ethical systems have rules which are meant to promote the well-being of large groups of people.  And since humans lack the ability to see far into the future, it’s possible that some rules made under this kind of ignorance may actually lead to an end that harms twenty people and only helps one.  Kierkegaard believes that faith in God can answer this uncertainty and circumvent the need to predict the outcome of our moral rules by guaranteeing a better end given the vastly superior knowledge that God has access to.  And any ethical system that appeals to the ends as justifying the means is a form of consequentialism (utilitarianism is perhaps the most common type of ethical consequentialism).

Although I disagree with Kierkegaard on a lot of points, such as his endorsement of divine command theory and his appeal to an epistemologically bankrupt behavior like taking a leap of faith, I actually agree with him on his teleological ethical reasoning.  He’s right to appeal to the ends in order to justify the means, and he’s right to want maximal knowledge involved in determining how best to achieve those ends.  It seems clear to me that all moral systems ultimately break down to a form of consequentialism anyway (a set of hypothetical imperatives), and any disagreement between moral systems is really nothing more than a disagreement about what is factual or about which consequences should be taken into account (e.g. the happiness of the majority, the happiness of the least well off, self-contentment for the individual, how we see ourselves as a person, etc.).

It also seems clear that if you are appealing to some set of consequences in determining what is and is not moral behavior, then having maximal knowledge is your best chance of achieving those ends.  But we can only determine the reliability of the knowledge by seeing how well it predicts the future (through inferred causal relations), and that means we can only establish the veracity of any claimed knowledge through empirical means.  Since nobody has yet been able to establish that a God (or gods) exists through any empirical means, it goes without saying that nobody has been able to establish the veracity of any God-knowledge.

Lacking the ability to test this, one would also need to have faith in God’s knowledge, which means they’ve merely replaced one form of uncertainty (the predicted versus actual ends of human moral systems) with another form of uncertainty (the predicted versus actual knowledge of God).  Since the predicted versus actual ends of our moral systems can actually be tested, while the knowledge of God cannot, we have greater uncertainty about God’s knowledge than about the efficacy and accuracy of our own moral systems.  This is a problem for Kierkegaard, because his position seems to be that the leap of faith taken by Abraham was essentially grounded on the assumption that God had superior knowledge to achieve the best telos, an assumption that is entirely unsupportable.

Aside from the problems inherent in Kierkegaard’s beliefs about faith and God, I do like his intense focus on the priority of the individual.  As mentioned already, both the aesthetic and religious ways of life that have been described operate on this individual level.  However, one criticism I have to make about Kierkegaard’s life-stage trichotomy is that morality/ethics actually does operate on the individual level even if it also indirectly involves the community or society at large.  And although it is not egotistic like the aesthetic life is said to be, it is egoistic because rational self-interest is in fact at the heart of all moral systems that are consistent and sufficiently motivating to follow.

If you maximize your personal satisfaction and life fulfillment by committing what you believe to be a moral act over some alternative that you believe will make you less fulfilled and thus less overall satisfied (such as not obeying God), then you are acting for your own self-interest (by obeying God), even if you are not acting in an explicitly selfish way.  A person can certainly be wrong about what will actually make them most satisfied and fulfilled, but this doesn’t negate one’s intention to do so.  Acting for the betterment of others over oneself (i.e. living by or for “the universal”) involves behaviors that lead you to a more fulfilling life, in part based on how those actions affect your view of yourself and your character.  If one believes in gods or a God, then their perspective on their belief of how God sees them will also affect their view of themselves.  In short, a properly formulated ethics is centered around the individual even if it seems otherwise.

Given the fact that Kierkegaard seems to have believed that the ethical life revolved around the universal rather than the individual, perhaps it’s no wonder that he would choose to elevate some kind of individualistic stage of life, namely the religious life, over that of the ethical.  It would be interesting to see how his stages of life may have looked had he believed in a more individualistic theory of ethics.  I find that an egoistic ethical framework actually fits quite nicely with the rest of Kierkegaard’s overtly individualistic philosophy.

He ends this book by pointing out that passion is required in order to have faith, and passion isn’t something that somebody can teach us, unlike the epistemic fruits of rational reflection.  Instead, passion has to be experienced firsthand in order for us to understand it at all.  He contrasts this passion with the disinterested intellectualization involved in reflection, which was the means used in Hegel’s approach to try and understand faith.

Kierkegaard doesn’t think that Hegel’s method will suffice, since it isn’t built upon a fundamentally subjective experiential foundation and instead tries to understand faith and systematize it through an objective analysis based on logic and rational reflection.  Although I see logic and rational reflection as most important for best achieving our overall happiness and life fulfillment, I can still appreciate the significant role of passion and felt experience within the human condition, our attraction to it, and its role in religious belief.  I can also appreciate how our overall satisfaction and life fulfillment are themselves instantiated and evaluated as a subjective felt experience, and one that is entirely individualistic.  And so I can’t help but agree with Kierkegaard in recognizing that there is no substitute for a subjective experience, and no way to adequately account for the essence of those experiences through entirely non-subjective (objective) means.

The individual subject and their conscious experience is of primary importance (it’s the only thing we can be certain exists), and the human need to find meaning in an apparently meaningless world is perhaps the most important facet of that ongoing conscious experience.  Even though I disagree with a lot of what Kierkegaard believed, it wasn’t all bull$#!+.  I think he captured and expressed some very important points about the individual and some of the psychological forces that color the view of our personal identity and our own existence.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)


I’ve been away from writing for a while because I’ve had some health problems relating to my neck.  A few weeks ago I had double-cervical-disc replacement surgery, and so I’ve been unable to write and respond to comments for a little while.  I’m now in the second week following my surgery and have finally been able to get back to writing, which feels very good given that I’m unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing, beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I wrote one post on this topic a little over a year ago (which can be found here), as I’ve been extremely interested in it for the last several years.

The Predictive Processing (PP) theory of perception shows a lot of promise in terms of finding an overarching schema that can account for everything that the brain seems to do.  While its technical application is to account for the acts of perception and active inference in particular, I think it can be used more broadly to account for other descriptions of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships as viewed through a PP lens, because I think it is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let’s begin this post-series by first looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can be best described as simply the set of particular predictions that the brain employs which encompass perception, desires, action, emotion, etc., and which are ultimately mediated and updated in order to reduce prediction errors based on incoming sensory evidence (and which approximates a Bayesian form of inference).  Perception then, which is constituted by a subset of all our beliefs (with many of them being implicit or unconscious beliefs), is more or less a form of controlled hallucination in the sense that what we consciously perceive is not the actual sensory evidence itself (not even after processing it), but rather our brain’s “best guess” of what the causes for the incoming sensory evidence are.

Desires can be best described as another subset of one’s beliefs, and a set of beliefs which has the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states and which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically just modeling some kind of causal relationship (or its negation) which is able to manifest itself into a number of highly-weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies but with a lower weight or precision on the predictions, corresponding to the degree of belief or disbelief (and so new sensory evidence will more easily sway such a belief).  And just like our perceptions, which are mediated by a number of low and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in either updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, secreting hormones in your body, etc.).

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, and these priors changing over time in response to prediction errors arising from new incoming sensory evidence that fails to be “explained away” by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly-weighted predictions) and therefore need much more counterfactual sensory evidence to be overcome or modified than for weak beliefs which have relatively low priors (low-weighted predictions).
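To make the weighting idea concrete, here’s a minimal toy sketch of my own (not anything from the PP literature; the evidence likelihoods are made-up numbers) showing that a highly-weighted belief, i.e. one with a high prior, takes far more disconfirming evidence to overturn than a weakly held one:

```python
def update_belief(prior, p_obs_if_true=0.3, p_obs_if_false=0.6):
    """One step of Bayes' rule for a disconfirming observation
    (the observation is twice as likely if the belief is false)."""
    evidence = p_obs_if_true * prior + p_obs_if_false * (1 - prior)
    return p_obs_if_true * prior / evidence

def observations_to_abandon(prior, threshold=0.5):
    """Count disconfirming observations until belief drops below threshold."""
    belief, count = prior, 0
    while belief >= threshold:
        belief = update_belief(belief)
        count += 1
    return count

print(observations_to_abandon(0.6))   # weakly-weighted prior  -> 1
print(observations_to_abandon(0.99))  # highly-weighted prior  -> 7
```

The weak belief flips after a single observation, while the strong one holds out for seven, which mirrors the claim that strong beliefs need much more counterfactual sensory evidence to be modified.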

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of lower level, highly-weighted predictions such as the prediction that eating a piece of what we call an “apple” will most likely result in qualia that accompany the perception of a particular satisfying taste, the lower level prediction that doing so will also cause my perception of hunger to change, and the higher level prediction that it will “give me energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Another prediction or set of predictions is that these expectations will apply to not just one apple but a number of apples (different instances of one type of apple, or different types of apples altogether), and a host of other predictions.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple which, under a PP lens could be described as the highly weighted prediction of what it would feel like to find an apple, to reach for an apple, to grab it, to bite off a piece of it, to chew it, and to swallow it.  If I merely imagine doing such things, then the resulting predictions will necessarily carry such a small weight that they won’t be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally at least) to trick me into thinking that they are actual perceptions (by “actual”, I simply mean perceptions that correspond to incoming sensory data).  This low-weighting attribute of imagined perceptual predictions thus provides a viable way for us to have an imagination and to distinguish it from perceptions corresponding to incoming sensory data, and to distinguish it from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine that is directly influencing and being influenced by the world that its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian evidence).  In this particular example, the highly-weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, which produces some degree of prediction error until the proper actions have taken place in order to reduce said error.  The only two ways of reducing this prediction error are to change the desire (or eliminate it) to one that no longer involves eating an apple, and/or to perform bodily actions that result in actually eating an apple.

Perhaps if I realize that I don’t have any apples in my house, but I realize that I do have bananas, then my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically, thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, it may mean that my predictions will motivate me enough to avoid eating the banana, and instead to predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  Furthermore, it may motivate me to predict actions that lead me to earn the money such that I can purchase the apple (if I don’t already have the money to do so).  To do this, I would be employing a number of predictions having to do with performing actions that lead to me obtaining money, using money to purchase goods, etc.
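That competition between desires can be sketched very crudely as a choice among weighted predictions, where the highest-weighted prediction that the current environment makes feasible wins out (the item names and weights below are purely illustrative inventions of mine, not part of any PP formalism):

```python
def select_action(desire_weights, available):
    """Among desires treated as precision-weighted predictions, act on the
    highest-weighted one that is currently feasible.  An infeasible
    higher-weighted desire leaves residual prediction error, which could
    instead motivate actions that make it feasible (e.g. going to the
    store for an apple), though that isn't modeled here."""
    feasible = [(weight, item) for item, weight in desire_weights.items()
                if item in available]
    if not feasible:
        return None
    return max(feasible)[1]

desires = {"apple": 0.9, "banana": 0.6}
print(select_action(desires, available={"apple", "banana"}))  # "apple"
print(select_action(desires, available={"banana"}))           # "banana"
```

In the second call, the broader hunger-satisfying prediction is realized through the banana once the apple-specific prediction can’t be fulfilled, i.e. one desire supersedes another.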

This is but a taste of what PP has to offer, and of how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework) are a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and to language as well, and I’d like to explore this relationship in my next post, which can be found here.

Is Death Bad For You? A Response to Shelly Kagan


I’ve enjoyed reading and listening to the philosopher Shelly Kagan, in debates, lectures, and various articles.  One topic he’s well known for is death, specifically the fear of death, and trying to understand the details behind, and justification for, the general attitude people have toward the concept of death.  I wrote a little about the fear of death long ago, but coming across an article of Kagan’s reignited my interest in the topic.  He wrote an article a few years ago in The Chronicle, where he expounds on some of the ontological puzzles related to the concept of death.  I thought I’d briefly summarize the article’s main points and give a response to it here.

Can Death Be Bad For Us?

Kagan begins with the assumption that the death of a person’s body results in the end of that person’s existence.  This is certainly a reasonable assumption, as there’s no evidence to the contrary, that is, no evidence that persons can exist without a living body.  Simple enough.  Then he asks: if death is the end of our existence, then how can being dead be bad for us?  Some would say that death is particularly bad for the survivors of the deceased, since they miss the person who’s died and the relationship they once had with that person.  But it seems more complicated than that, because we could likewise have an experience where a cherished friend or family member leaves us and goes somewhere far away, such that we can be confident we’ll never see that person again.

Both the death of that person, and the alternative of their leaving forever to go somewhere such that we’ll never have contact with them again, result in the same loss of relationship.  Yet most people would say that if we knew about their dying instead of simply leaving forever, there’s more to be sad about in terms of death being bad for them, not simply bad for us.  And this sadness results from more than simply knowing how they died — the process of death itself — which could have been unpleasant, but also could have been entirely benign (such as dying peacefully in one’s sleep).  Similarly, Kagan tells us, the prospect of dying can be unpleasant as well, but he asserts, this only seems to make sense if death itself is bad for us.

Kagan suggests:

Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. That explanation of death’s badness is known as the deprivation account.

While the deprivation account seems plausible, Kagan thinks that accepting it results in a couple of potential problems.  He argues, if something is true, it seems as if there must be some time when it’s true.  So when would it be true that death is bad for us?  Not now, he says.  Because we’re not dead now.  Not after we’re dead either, because then we no longer exist so nothing can be bad for a being that no longer exists.  This seems to lead to the conclusion that either death isn’t bad for anyone after all, or alternatively, that not all facts are datable.  He gives us another possible example of an undatable fact.  If Kagan shoots “John” today such that John slowly bleeds to death after two days, but Kagan dies tomorrow (before John dies) then after John dies, can we say that Kagan killed John?  If Kagan did kill John, when did he kill him?  Kagan no longer existed when John died so how can we say that Kagan killed John?

I think we could agree with this and say that while it’s true that Kagan didn’t technically kill John, a trivial response to this supposed conundrum is to say that Kagan’s actions led to John’s death.  This seems to solve that conundrum by working within the constraints of language, while highlighting the fact that when we say someone killed X what we really mean is that someone’s actions led to the death of X, thus allowing us to be consistent with our conceptions of existence, causality, killing, blame, etc.

Existence Requirement, Non-Existential Asymmetry, & Its Implications

In any case, if all facts are datable (or at least facts like these), then we should be able to say when exactly death is bad for us.  Can things only be bad for us when we exist?  If so, this is what Kagan refers to as the existence requirement.  If we don’t accept such a requirement — that one must exist in order for things to be bad for us — that produces other problems, like being able to say for example that non-existence could be bad for someone who has never existed but that could have possibly existed.  This seems to be a pretty strange claim to hold to.  So if we refuse to accept that it’s a tragedy for possibly existent people to never come into existence, then we’d have to accept the existence requirement, which I would contend is a more plausible assumption to accept.  But if we do so, then it seems that we have to accept that death isn’t in fact bad for us.

Kagan suggests that we may be able to reinterpret the existence requirement, and he does this by distinguishing between two versions, a modest version which asserts that something can be bad for you only if you exist at some time or another, and a bold version which asserts that something can be bad for you only if you exist at the same time as that thing.  Accepting the modest version seems to allow us a way out of the problems posed here, but that it too has some counter-intuitive implications.

He illustrates this with another example:

Suppose that somebody’s got a nice long life. He lives 90 years. Now, imagine that, instead, he lives only 50 years. That’s clearly worse for him. And if we accept the modest existence requirement, we can indeed say that, because, after all, whether you live 50 years or 90 years, you did exist at some time or another. So the fact that you lost the 40 years you otherwise would have had is bad for you. But now imagine that instead of living 50 years, the person lives only 10 years. That’s worse still. Imagine he dies after one year. That’s worse still. An hour? Worse still. Finally, imagine I bring it about that he never exists at all. Oh, that’s fine.

He thinks this must be accepted if we accept the modest version of the existence requirement, but how can this be?  If one’s life is shortened relative to what they would have had, this is bad, and gets progressively worse as the life is hypothetically shortened, until a life span of zero is reached, in which case they no longer meet the modest existence requirement and thus can’t have anything be bad for them.  So it’s as if it gets infinitely worse as the potential life span approaches the limit of zero, and then when zero is reached, becomes benign and is no longer an issue.

I think a reasonable response to this scenario is to reject the claim that hypothetically shrinking the life span to zero is suddenly no longer an issue.  What seems to be glossed over in this example is the fact that this is a set of comparisons of one hypothetical life to another hypothetical life (two lives with different non-zero life spans), resulting in a final comparison between one hypothetical life and no life at all (a life span of zero).  This example illustrates whether or not something is better or worse in comparison, not whether something is good or bad intrinsically speaking.  The fact that somebody lived for as long as 90 years or only for 10 years isn’t necessarily good or bad but only better or worse in comparison to somebody who’s lived for a different length of time.

The Intrinsic Good of Existence & Intuitions On Death

However, I would go further and say that there is an intrinsic good to existing or being alive, and that most people would agree with such a claim (and the strong will to live that most of us possess is evidence of our acknowledging such a good).  That’s not to say that never having lived is bad, but only to say that living is good.  If not living is neither good nor bad but considered a neutral or inconsequential state, then we can hold the position that living is better than not living, even if not living isn’t bad at all (after all, it’s neutral, neither good nor bad).  Thus we can still maintain our modest existence requirement while consistently holding these views.  We can say that not living is neither good nor bad, that living 10 years is good (and better than not living), that living 50 years is even better, and that living 90 years is even better yet (assuming, for the sake of argument, that the quality of life is equivalently good in every year of one’s life).  What’s important to note here is that not having lived in the first place doesn’t involve the loss of a good, because there was never any good to begin with.  On the other hand, extending the life span involves increasing the quantity of the good, by increasing its duration.

Kagan seems to agree overall with the deprivation account of why we believe death is bad for us, but that some puzzles like those he presented still remain.  I think one of the important things to take away from this article is the illustration that we have obvious limitations in the language that we use to describe our ontological conceptions.  These scenarios and our intuitions about them also seem to show that we all generally accept that living or existence is intrinsically good.  It may also highlight the fact that many people intuit that some part of us (such as a soul) continues to exist after death such that death can be bad for us after all (since our post-death “self” would still exist).  While the belief in souls is irrational, it may help to explain some common intuitions about death.

Dying vs. Death, & The Loss of An Intrinsic Value

Remember that Kagan began his article by distinguishing between how one dies, the prospect of dying, and death itself.  He asked us: how can the prospect of dying be bad if death itself (which is only true when we no longer exist) isn’t bad for us?  Well, perhaps we should consider that when people say that death is bad for us, they tend to mean that dying itself is bad for us.  That is to say, the prospect of dying isn’t unpleasant because death is bad for us, but rather because dying itself is bad for us.  If dying occurs while we’re still alive, resulting in one’s eventual loss of life, then dying can be bad for us even if we accepted the bold existence requirement — that something can only be bad for us if we exist at the same time as that thing.  So if the “thing” we’re referring to is our dying rather than our death, this would be consistent with the deprivation account of death, would allow us to put a date (or time interval) on such an event, and would seem to resolve the aforementioned problems.

As for Kagan’s opening question of when death is bad for us: if we accept my previous response that dying is what’s bad for us, rather than death, then it would stand to reason that death itself isn’t ever bad for us (or doesn’t have to be); rather, what is bad for us is the loss of life that occurs as we die.  If I had to identify exactly when the “badness” we’re actually referring to occurs, I suppose I would choose an increment of time before one’s death occurs (with an exclusive upper bound set to the time of death).  If time is quantized (a speculative idea; standard quantum mechanics doesn’t actually require it), then the smallest meaningful interval of time would be the Planck time, roughly 5.4 × 10⁻⁴⁴ seconds.  So I would argue that, at the very least, the last Planck-scale interval of our life (if not a longer one) marks the event or time interval of our dying.

It is this last interval of time ticking away that is bad for us because it leads to our loss of life, which is a loss of an intrinsic good.  So while I would argue that never having received an intrinsic good in the first place isn’t bad (such as never having lived), the loss of (or the process of losing) an intrinsic good is bad.  So I agree with Kagan that the deprivation account is on the right track, but I also think the problems he’s posed are resolvable by thinking more carefully about the terminology we use when describing these concepts.

Conscious Realism & The Interface Theory of Perception


A few months ago I was reading an interesting article in The Atlantic about Donald Hoffman’s Interface Theory of Perception.  As a person highly interested in consciousness studies, cognitive science, and the mind-body problem, I found the basic concepts of his theory quite fascinating.  What was most interesting to me was the counter-intuitive connection between evolution and perception that Hoffman has proposed.  Now it is certainly reasonable and intuitive to assume that evolutionary natural selection would favor perceptions that are closer to “the truth”, that is, closer to the objective reality that exists independent of our minds, simply because perceptions that are more accurate seem more likely to lead to survival than perceptions that are not.  As an example, if I were to perceive lions as inert objects like trees, I would be more likely to be selected against (i.e. eaten by a lion) than someone who perceives lions as mobile predators that could kill them.

While this is intuitive and reasonable to some degree, what Hoffman actually shows, using evolutionary game theory, is that among organisms of comparable complexity, those with perceptions closer to reality are never going to be selected for nearly as much as those with perceptions tuned to fitness instead.  What’s more, truth in this case will be driven to extinction when it is up against perceptual models that are tuned to fitness.  That is to say, evolution will select for organisms that perceive the world in a way that is less accurate (in terms of the underlying reality) as long as the perception is tuned for survival benefits.  The bottom line is that at any given level of complexity, it is more costly to process more information (costing more time and resources), and so if a “heuristic” method for perception can evolve instead, one that “hides” all the complex information underlying reality and instead provides us with a species-specific guide to adaptive behavior, that will always be the preferred choice.

To see this point more clearly, let's consider an example.  Imagine an animal that regularly eats some kind of insect, such as a beetle, but it must eat beetles of a particular size or else it runs a relatively high risk of eating the wrong kind of beetle (and we can assume the "wrong" kind of beetle is deadly to eat).  Now imagine two possible types of evolved perception.  The animal could have very accurate perceptions of the various sizes of beetles it encounters, distinguishing many different sizes from one another and then choosing the proper size range to eat.  Or it could evolve far less accurate perceptions, such that all beetles that are either too small or too large appear indistinguishable from one another (say, all wrong-sized beetles look like indistinguishable red-colored blobs), while all beetles in the ideal size range for eating appear as green-colored blobs (again indistinguishable from one another).  In this latter case, the only discrimination the perception makes is between red and green blobs.

Both types of perception would solve the problem of which beetles to eat or not eat, but the latter type (even if much less accurate) would bestow a fitness advantage over the former, because it lets the animal process much less information about the environment by ignoring relatively useless information (such as specific beetle size).  In this case, with beetle size as the only variable relevant to survival, evolution would select for the organism that knows less total information about beetle size, as long as it knows what matters most: distinguishing the edible beetles from the poisonous ones.  We can imagine that in some cases the fitness function happens to align with the true structure of reality, but this is not what we expect to see generically in the world.  At best there may be some overlap between the two, but there doesn't have to be any, and when there isn't, truth goes extinct.
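The cost argument behind this example can be sketched numerically.  What follows is only a toy model, not Hoffman's actual evolutionary-game-theory setup: both strategies make the exact same eat-or-avoid decision, and the only difference is a hypothetical per-decision information-processing cost (all the names and numbers here are made up for illustration).

```python
import random

random.seed(0)

IDEAL_MIN, IDEAL_MAX = 4.0, 6.0   # hypothetical edible beetle size range

def payoff(size):
    # Eating an edible beetle yields +1; eating a wrong-sized one is deadly.
    return 1.0 if IDEAL_MIN <= size <= IDEAL_MAX else -10.0

def truth_strategy(size):
    # "Truth" perception: resolves the exact size (many distinguishable
    # categories) before deciding.  Accurate, but we charge a higher
    # per-decision cost for processing all that extra information.
    eat = IDEAL_MIN <= size <= IDEAL_MAX
    return eat, 0.5

def interface_strategy(size):
    # "Interface" perception: only two categories, green (edible) vs red
    # (everything else).  Same decision, far less information, lower cost.
    eat = IDEAL_MIN <= size <= IDEAL_MAX
    return eat, 0.1

def expected_fitness(strategy, n=10000):
    # Average payoff minus perceptual cost over many random encounters.
    total = 0.0
    for _ in range(n):
        size = random.uniform(0.0, 10.0)
        eat, cost = strategy(size)
        total += (payoff(size) if eat else 0.0) - cost
    return total / n
```

Since both strategies eat exactly the same beetles, the coarse interface wins purely on its lower information-processing cost, which is the shape of the argument: when accuracy beyond what the decision requires only adds cost, selection favors the cheaper, less truthful perception.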

Perception is Analogous to a Desktop Computer Interface

Hoffman analogizes this concept of a “perception interface” with the desktop interface of a personal computer.  When we see icons of folders on the desktop and drag one of those icons to the trash bin, we shouldn’t take that interface literally, because there isn’t literally a folder being moved to a literal trash bin but rather it is simply an interface that hides most if not all of what is really going on in the background — all those various diodes, resistors and transistors that are manipulated in order to modify stored information that is represented in binary code.

The desktop interface ultimately provides us with an easy and intuitive way of accomplishing these information-processing tasks, because trying to do so in the most "truthful" way, by manually manipulating every diode, resistor, and transistor to accomplish the same task, would be far more cumbersome and less effective than using the interface.  The interface, by hiding this truth from us, allows us to "navigate" through that computational world with more fitness.  In this case, having more fitness simply means being able to accomplish information-processing goals more easily and with fewer resources.

Hoffman goes on to say that even though we shouldn't take the desktop interface literally, we should obviously still take it seriously, because moving that folder to the trash bin can have direct implications for our lives, potentially destroying months' worth of valuable work on a manuscript contained in that folder.  Likewise, we should take our perceptions seriously even if we don't take them literally.  We know that stepping in front of a moving train will likely end our conscious experience, even if it does so for causal reasons that we have no epistemic access to through perception, given the species-specific "desktop interface" that evolution has endowed us with.

Relevance to the Mind-body Problem

The crucial point of this analogy is that if our knowledge were confined to the desktop interface of the computer, we'd never be able to ascertain the underlying reality of the "computer", because all the information about that underlying reality that we don't need to know is hidden from us.  The same applies to our perception: it would be epistemically isolated from the underlying objective reality.  I want to add that even though it appears we have found the underlying guts of our consciousness, namely the findings of neuroscience, it would be a mistake to think this approach will conclusively answer the mind-body problem, because the interface we've used to discover our brains' underlying neurobiology is still the "desktop" interface.

So while we may think we've found the underlying guts of "the computer", this is far from certain, given the support this theory has received.  This may turn out to be the reason why many philosophers claim there is a "hard problem" of consciousness, and one that can't be solved.  It could be that we are simply stuck in the desktop interface, with no way to find out about the underlying reality that gives rise to it.  All we could do is maximize our knowledge of the interface itself, and that would be our epistemic boundary.

Predictions of the Theory

Now if this were just a fancy idea put forward by Hoffman, it would be interesting in its own right, but the fact that it is supported by evolutionary game theory and genetic-algorithm simulations shows that the theory is more than plausible.  Even better, the theory is actually a scientific theory (and not just a hypothesis), because it makes falsifiable predictions.  It predicts that "each species has its own interface (with some similarities between phylogenetically related species), almost surely no interface performs reconstructions (read the second link for more details on this), each interface is tailored to guide adaptive behavior in the relevant niche, much of the competition between and within species exploits strengths and limitations of interfaces, and such competition can lead to arms races between interfaces that critically influence their adaptive evolution."  The theory predicts that interfaces are essential to understanding evolution and the competition between organisms, whereas the reconstruction theory makes such understanding impossible.  Thus, evidence of interfaces should be widespread throughout nature.

In his paper, Hoffman mentions the jewel beetle as a case in point.  The male beetle has a perceptual category, desirable females, which works well in its niche; he uses it to choose larger females, because they are the best mates.  According to the reconstructionist thesis, the male's perception of desirable females should incorporate a statistical estimate of the true sizes of the most fertile females, but it doesn't.  Instead, the male relies on a category based on "bigger is better", and although this yields highly fit behavior in his evolutionary niche, when he encounters a "stubbie" beer bottle he falls into an infinite loop, drawn to this supernormal stimulus because it is smooth, brown, and extremely large.  We can see that the "bigger is better" perceptual category relies on less information about the true nature of reality, choosing an "informational shortcut" instead.  The evidence of supernormal stimuli found in many species further supports the theory and counts against the reconstructionist claim that perceptual categories estimate the statistical structure of the world.

More on Conscious Realism (Consciousness is all there is?)

The last link provided here presents the mathematical formalism of Hoffman's conscious realist theory, as proved by Chetan Prakash.  It contains a thorough explanation of conscious realism (which goes above and beyond the interface theory of perception) and also answers common objections to the theory put forward by other scientists and philosophers.