The Open Mind

Cogito Ergo Sum

The Experientiality of Matter

If there’s one thing that Nietzsche advocated for, it was to eliminate dogmatism and absolutism from one’s thinking.  And although I generally agree with him here, I do think that there is one exception to this rule.  One thing that we can be absolutely certain of is our own conscious experience.  Nietzsche actually had something to say about this (in Beyond Good and Evil, which I explored in a previous post), where he responds to Descartes’ famous adage “cogito ergo sum” (the very adage associated with my blog!), and he basically says that Descartes’ conclusion (“I think therefore I am”) shows a lack of reflection concerning the meaning of “I think”.  He wonders how he (and by extension, Descartes) could possibly know for sure that he was doing the thinking rather than the thought doing the thinking, and he even considers the possibility that what is generally described as thinking may actually be something more like willing or feeling or something else entirely.

Despite Nietzsche’s criticism against Descartes in terms of what thought is exactly or who or what actually does the thinking, we still can’t deny that there is thought.  Perhaps if we replace “I think therefore I am” with something more like “I am conscious, therefore (my) conscious experience exists”, then we can retain some core of Descartes’ insight while throwing out the ambiguities or uncertainties associated with how exactly one interprets that conscious experience.

So I’m in agreement with Nietzsche in the sense that we can’t be certain of any particular interpretation of our conscious experience, including whether or not there is an ego or a self (which Nietzsche actually describes as a childish superstition similar to the idea of a soul), nor can we make any certain inferences about the world’s causal relations stemming from that conscious experience.  Regardless of these limitations, we can still be sure that conscious experience exists, even if it can’t be ascribed to an “I” or a “self” or any particular identity (let alone a persistent identity).

Once we’re cognizant of this certainty, and if we’re able to crawl out of the well of solipsism and eventually build up a theory about reality (for pragmatic reasons at the very least), then we must remain aware of the priority of consciousness in any resultant theory we construct about reality, with regard to its structure or any of its other properties.  Personally, I believe that some form of naturalistic physicalism (a realistic physicalism) is the best candidate for an ontological theory that is the most parsimonious and explanatory for all that we experience in our reality.  However, most people that make the move to adopt some brand of physicalism seem to throw the baby out with the bathwater (so to speak), whereby consciousness gets eliminated by assuming it’s an illusion or that it’s not physical (therefore having no room for it in a physicalist theory, aside from its neurophysiological attributes).

Although I used to feel differently about consciousness (and its relationship to physicalism), in that I thought it was plausible for it to be some kind of an illusion, upon further reflection I’ve come to realize that this was a rather ridiculous position to hold.  Consciousness can’t be an illusion in the proper sense of the word, because the experience of consciousness is real.  Even if I’m hallucinating, such that my perceptions don’t directly correspond with the actual incoming sensory information transduced through my body’s sensory receptors, then we can only say that the perceptions are illusory insofar as they don’t directly map onto that incoming sensory information.  But I still can’t say that having these experiences is itself an illusion.  And this is because consciousness is self-evident and experiential in that it constitutes whatever is experienced no matter what that experience consists of.

As for my thoughts on physicalism, I came to realize that positing consciousness as an intrinsic property of at least some kinds of physical material (analogous to a property like mass) allows us to avoid having to call consciousness non-physical.  If it is simply an experiential property of matter, that doesn’t negate its being a physical property of that matter.  It may be that we can’t access this property in such a way as to evaluate it with external instrumentation, like we can for all the other properties of matter that we know of such as mass, charge, spin, or what-have-you, but that doesn’t mean an experiential property should be off limits for any physicalist theory.  It’s just that most physicalists assume that everything can or has to be reducible to the externally accessible properties that our instrumentation can measure.  And this suggests that they’ve simply assumed that the physical can only include externally accessible properties of matter, rather than both internally and externally accessible properties of matter.

Now it’s easy to see why science might push philosophy in this direction because its methodology is largely grounded on third-party verification and a form of objectivity involving the ability to accurately quantify everything about a phenomenon with little or no regard for introspection or subjectivity.  And I think that this has caused many a philosopher to paint themselves into a corner by assuming that any ontological theory underlying the totality of our reality must be constrained in the same way that the physical sciences are.  To see why this is an unwarranted assumption, let’s consider a “black box” that can only be evaluated by a certain method externally.  It would be fallacious to conclude that just because we are unable to access the inside of the box, that the box must therefore be empty inside or that there can’t be anything substantially different inside the box compared to what is outside the box.

We can analogize this limitation of studying consciousness with our ability to study black holes within the field of astrophysics, where we’ve come to realize that accessing any information about their interior (aside from global properties like mass, charge, and angular momentum) is impossible to do from the outside.  And if we managed to access this information (if there is any) from the inside by leaping past its outer event horizon, it would be impossible for us to escape and share any of that information.  The best we can do is to learn what we can from the outside behavior of the black hole in terms of its interaction with surrounding matter and light and infer something about the inside, like how much matter it contains (e.g. we can infer the mass of a black hole from its outer surface area).  And we can learn a little bit more by considering what is needed to create or destroy a black hole, thus creating or destroying any interior qualities that may or may not exist.

A black hole can only form from certain configurations of matter, particularly aggregates that are above a certain mass and density.  And it can only be destroyed by starving it: deprived of any new matter, it will slowly evaporate entirely into Hawking radiation, destroying anything that was on the inside in the process.  So we can infer that any internal qualities it does have, however inaccessible they may be, can be brought into and out of existence with certain physical processes.
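As an aside, the mass-from-area inference mentioned above is quantitative.  For a Schwarzschild (non-rotating, uncharged) black hole the standard formulas give a horizon radius of r = 2GM/c², a horizon area that grows with the square of the mass, and a Hawking temperature that falls as the mass grows.  The following sketch is purely illustrative, using textbook formulas and rounded SI constants:

```python
import math

# Physical constants (rounded SI values)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
k_B = 1.381e-23    # Boltzmann constant, J/K

M_sun = 1.989e30   # one solar mass, kg

def schwarzschild_radius(M):
    """Horizon radius of a non-rotating black hole of mass M."""
    return 2 * G * M / c**2

def horizon_area(M):
    """Horizon surface area; A grows as M^2, so the area fixes the mass."""
    return 4 * math.pi * schwarzschild_radius(M)**2

def mass_from_area(A):
    """Invert the area relation: the 'outside' measurement that yields the mass."""
    return (c**2 / (2 * G)) * math.sqrt(A / (4 * math.pi))

def hawking_temperature(M):
    """Hawking temperature; smaller black holes are hotter and evaporate faster."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

r = schwarzschild_radius(M_sun)   # roughly 3 km for a solar-mass black hole
T = hawking_temperature(M_sun)    # roughly 6e-8 K, far colder than the CMB
M_inferred = mass_from_area(horizon_area(M_sun))  # recovers M_sun
```

The point of the analogy survives the numbers: everything here is computed from the outside, while the interior itself stays off limits.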

Similarly, we can infer some things about consciousness by observing one’s external behavior, including some of the conditions that can create, modify, or destroy that type of consciousness, but we are unable to know what it’s like to be on the inside of that system once it exists.  We’re only able to know about the inside of our own conscious system, where we are in some sense inside our own black hole with nobody else able to access this perspective.  And I think it is easy enough to imagine that certain configurations of matter simply have an intrinsic, externally inaccessible experiential property, just as certain configurations of matter lead to the creation of a black hole with its own externally inaccessible and qualitatively unknown internal properties.  The fact that we can’t determine the black hole’s internal properties with a strictly external method doesn’t mean we should assume that whatever properties may exist inside it are fundamentally non-physical, just as we wouldn’t consider alternate dimensions (such as those predicted in M-theory/string theory) that we can’t physically access to be non-physical.  Perhaps one or more of these inaccessible dimensions (if they exist) is what accounts for an intrinsic experiential property within matter (though this is entirely speculative and need not be true for the previous points to hold, but it’s an interesting thought nevertheless).

Here’s a relevant quote from the philosopher Galen Strawson, where he outlines what physicalism actually entails:

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an ‘inference to the best explanation’… Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. ‘And were the inmost essence of things laid open to us’ I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism… So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of ‘substance dualism’… Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?

While I don’t believe that all matter has a mind per se, because a mind is generally conceived as being a complex information processing structure, I think it is likely that all matter has an experiential quality of some kind, even if most instantiations of it are entirely unrecognizable as what we’d generally consider to be “consciousness” or “mentality”.  I believe that the intuitive gap here is born from the fact that the only minds we are confident exist are those instantiated by a brain, which has the ability to make an incredibly large number of experiential discriminations between various causal relations, thus giving it a capacity that is incredibly complex and not observed anywhere else in nature.  On the other hand, a particle of matter on its own would be hypothesized to have the capacity to make only one or two of these kinds of discriminations, making it unintelligent and thus incapable of brain-like activity.  Once a person accepts that an experiential quality can come in varying degrees from say one experiential “bit” to billions or trillions of “bits” (or more), then we can plausibly see how matter could give rise to systems that have a vast range in their causal power, from rocks that don’t appear to do much at all, to living organisms that have the ability to store countless representations of causal relations (memory) allowing them to behave in increasingly complex ways.  And perhaps best of all, this approach solves the mind-body problem by eliminating the mystery of how fundamentally non-experiential stuff could possibly give rise to experientiality and consciousness.

Some Thoughts on “Fear & Trembling”

I’ve been meaning to write this post for quite some time, but haven’t had the opportunity until now, so here goes.  I want to explore some of Kierkegaard’s philosophical claims or themes in his book Fear and Trembling.  Kierkegaard regarded himself as a Christian and so there are a lot of literary themes revolving around faith and the religious life, but he also centers a lot of his philosophy around the subjective individual, all of which I’d like to look at in more detail.

In Fear and Trembling, we hear about the story found in Genesis (Ch. 22) where Abraham attempts to sacrifice his own beloved son Isaac, after hearing God command him to do so.  For those unfamiliar with this biblical story, after they journey out to Mount Moriah, Abraham binds Isaac to an altar, and as he draws his knife to slit the throat of his beloved son, an angel appears just in time and tells him to stop, “for now I know that you fear God”.  Then Abraham sees a ram nearby and sacrifices it instead of his son.  Kierkegaard uses this story in various ways, and considers four alternative versions of it (and their consequences), to explicate the concept of faith as admirable though fundamentally incomprehensible and unintelligible.

He begins his book by telling us about a man who has deeply admired this story of Abraham ever since he first heard it as a child, with this admiration for it growing stronger over time while understanding the story less and less.  The man considers four alternative versions of the story to try and better understand Abraham and how he did what he did, but never manages to obtain this understanding.

I’d like to point out here that an increased confusion would be expected if the man has undergone moral and intellectual growth during his journey from childhood to adulthood.  We tend to be more impulsive, irrational and passionate as children, with less regard for any ethical framework to live by.  And sure enough, Kierkegaard even mentions the importance of passion in making a leap of faith.  Nevertheless, as we continue to mature and accumulate life experience, we tend to develop some control over our passions and emotions, we build up our intellect and rationality, and also further develop an ethic with many ethical behaviors becoming habituated if cultivated over time.  If a person cultivates moral virtues like compassion, honesty, and reasonableness, then it would be expected that they’d find Abraham’s intended act of murder (let alone filicide) repugnant.  But, regardless of the reasons for the man’s lack of understanding, he admires the story more and more, likely because it reveres Abraham as the father of faith, and portrays faith itself as a most honorable virtue.

Kierkegaard’s main point in Fear and Trembling is that one has to suspend their relation to the ethical (contrary to Kant and Hegel), in order to make any leap of faith, and that there’s no rational decision making process involved.  And so it seems clear that Kierkegaard knows that what Abraham did in this story was entirely unethical (attempting to kill an innocent child) in at least one sense of the word ethical, but he believes nevertheless that this doesn’t matter.

To see where he’s coming from, we need to understand Kierkegaard’s idea that there are basically three ways or stages of living, namely the aesthetic, the ethical, and the religious.  The aesthetic life is that of sensuous or felt experience, infinite potentiality through imagination, hiddenness or privacy, and an overarching egotism focused on the individual.  The ethical life supersedes or transcends this aesthetic way of life by relating one to “the universal”, that is, to the common good of all people, to social contracts, and to the betterment of others over oneself.  The ethical life, according to Kierkegaard, also consists of public disclosure or transparency.  Finally, the religious life supersedes the ethical (and thus also supersedes the aesthetic) but shares some characteristics of both the aesthetic and the ethical.

The religious, like the aesthetic, operates on the level of the individual, but with the added component of the individual having a direct relation to God.  And just like the ethical, the religious appeals to a conception of good and evil behavior, but God is the arbiter in this way of life rather than human beings or their nature.  Thus the sphere of ethics that Abraham might normally commit himself to in other cases is thought to be superseded by the religious sphere, the sphere of faith.  Within this sphere of faith, Abraham assumes that anything that God commands is Abraham’s absolute duty to uphold, and he also has faith that this will lead to the best ends.  This of course, is known as a form of divine command theory, which is actually an ethical and meta-ethical theory.  Although Kierkegaard claims that the religious is somehow above the ethical, it is for the most part just another way of living that involves another ethical principle.  In this case, the ethical principle is for one to do whatever God commands them to (even if these commands are inconsistent or morally repugnant from a human perspective), and this should be done rather than abiding by our moral conscience or some other set of moral rules, social mores, or any standards based on human judgment, human nature, etc.

It appears that the primary distinction between the ethical and the religious is the leap of faith that is made in the latter stage of living which involves an act performed “in virtue of the absurd”.  For example, Abraham’s faith in God was really a faith that God wouldn’t actually make him kill his son Isaac.  Had Abraham been lacking in this particular faith, Kierkegaard seems to argue that Abraham’s conscience and moral perspective (which includes “the universal”) would never have allowed him to do what he did.  Thus, Abraham’s faith, according to Kierkegaard, allowed him to (at least temporarily) suspend the ethical in virtue of the absurd notion that somehow the ethical would be maintained in the end.  In other words, Abraham thought that he could obey God’s command, even if this command was prima facie immoral, because he had faith that God wouldn’t actually make Abraham perform an unethical act.

I find it interesting that this particular function or instantiation of faith, as outlined by Kierkegaard, makes for an unusual interpretation of divine command theory.  If divine command theory attempts to define good or moral behavior as that which God commands, and if a leap of faith (such as that which Abraham took) can involve a belief that the end result of an unconscionable commandment is actually its negation or retraction, then a leap of faith such as that taken by Abraham would serve to contradict divine command theory to at least some degree.  It would seem that Kierkegaard wants to believe in the basic premise of divine command theory and therefore have an absolute duty to obey whatever God commands, and yet he also wants to believe that if this command goes against a human moral system or the human conscience, it will not end up doing so when one goes to carry out what has actually been commanded of them.  This seems to me to be an unusual pair of beliefs for one to hold simultaneously, for divine command theory allows for Abraham to have actually carried out the murder of his son (with no angel stopping him at the last second), and this heinous act would have been considered a moral one under such an awful theory.  And yet, Abraham had faith that this divine command would somehow be nullified and therefore reconciled with his own conscience and relation to the universal.

Kierkegaard has something to say about beliefs, and how they differ from faith-driven dispositions, and it’s worth noting this since most of us use the term “belief” as including that which one has faith in.  For Kierkegaard, belief implies that one is assured of its truth in some way, whereas faith requires one to accept the possibility that what they have faith in could be proven wrong.  Thus, it wasn’t enough for Abraham to believe in an absolute duty to obey whatever God commanded of him, because that would have simply been a case of obedience, and not faith.  Instead, Abraham also had to have faith that God would let Abraham spare his son Isaac, while accepting the possibility that he may be proven wrong and end up having to kill his son after all.  As such, Kierkegaard wouldn’t accept the way the term “faith” is often used in modern religious parlance.  Religious practitioners often say that they have faith in something and yet “know it to be true”, “know it for certain”, “know it will happen”, etc.  But if Abraham truly believed (let alone knew for certain) that God wouldn’t make him kill Isaac, then God’s command wouldn’t have served as any true test of faith.  So while Abraham may have believed that he had to kill his son, he also had faith that his son wouldn’t die, hence making a leap of faith in virtue of the absurd.

This distinction between belief and faith also seems to highlight Kierkegaard’s belief in some kind of prophetic consequentialist ethical framework.  Whereas most Christians tend to side with a Kantian deontological ethical system, Kierkegaard points out that ethical systems have rules which are meant to promote the well-being of large groups of people.  And since humans lack the ability to see far into the future, it’s possible that some rules made under this kind of ignorance may actually lead to an end that harms twenty people and only helps one.  Kierkegaard believes that faith in God can answer this uncertainty and circumvent the need to predict the outcome of our moral rules by guaranteeing a better end given the vastly superior knowledge that God has access to.  And any ethical system that appeals to the ends as justifying the means is a form of consequentialism (utilitarianism is perhaps the most common type of ethical consequentialism).

Although I disagree with Kierkegaard on a lot of points, such as his endorsement of divine command theory, and his appeal to an epistemologically bankrupt behavior like taking a leap of faith, I actually agree with Kierkegaard on his teleological ethical reasoning.  He’s right to appeal to the ends in order to justify the means, and he’s right to want maximal knowledge involved in determining how best to achieve those ends.  It seems clear to me that all moral systems ultimately break down to a form of consequentialism anyway (a set of hypothetical imperatives), and any disagreement between moral systems is really nothing more than a disagreement about what is factual or a disagreement about which consequences should be taken into account (e.g. happiness of the majority, happiness of the least well off, self-contentment for the individual, how we see ourselves as a person, etc.).

It also seems clear that if you are appealing to some set of consequences in determining what is and is not moral behavior, then having maximal knowledge is your best chance of achieving those ends.  But we can only determine the reliability of the knowledge by seeing how well it predicts the future (through inferred causal relations), and that means we can only establish the veracity of any claimed knowledge through empirical means.  Since nobody has yet been able to establish that a God (or gods) exists through any empirical means, it goes without saying that nobody has been able to establish the veracity of any God-knowledge.

Lacking the ability to test this, one would also need to have faith in God’s knowledge, which means they’ve merely replaced one form of uncertainty (the predicted versus actual ends of human moral systems) with another form of uncertainty (the predicted versus actual knowledge of God).  Since the predicted versus actual ends of our moral systems can actually be tested, while the knowledge of God cannot, then we have a greater uncertainty in God’s knowledge than in the efficacy and accuracy of our own moral systems.  This is a problem for Kierkegaard, because his position seems to be that the leap of faith taken by Abraham was essentially grounded on the assumption that God had superior knowledge to achieve the best telos, and thus his position is entirely unsupportable.

Aside from the problems inherent in Kierkegaard’s beliefs about faith and God, I do like his intense focus on the priority of the individual.  As mentioned already, both the aesthetic and religious ways of life that have been described operate on this individual level.  However, one criticism I have to make about Kierkegaard’s life-stage trichotomy is that morality/ethics actually does operate on the individual level even if it also indirectly involves the community or society at large.  And although it is not egotistic like the aesthetic life is said to be, it is egoistic because rational self-interest is in fact at the heart of all moral systems that are consistent and sufficiently motivating to follow.

If you maximize your personal satisfaction and life fulfillment by committing what you believe to be a moral act over some alternative that you believe will make you less fulfilled and thus less overall satisfied (such as not obeying God), then you are acting for your own self-interest (by obeying God), even if you are not acting in an explicitly selfish way.  A person can certainly be wrong about what will actually make them most satisfied and fulfilled, but this doesn’t negate one’s intention to do so.  Acting for the betterment of others over oneself (i.e. living by or for “the universal”) involves behaviors that lead you to a more fulfilling life, in part based on how those actions affect your view of yourself and your character.  If one believes in gods or a God, then their perspective on their belief of how God sees them will also affect their view of themselves.  In short, a properly formulated ethics is centered around the individual even if it seems otherwise.

Given the fact that Kierkegaard seems to have believed that the ethical life revolved around the universal rather than the individual, perhaps it’s no wonder that he would choose to elevate some kind of individualistic stage of life, namely the religious life, over that of the ethical.  It would be interesting to see how his stages of life may have looked had he believed in a more individualistic theory of ethics.  I find that an egoistic ethical framework actually fits quite nicely with the rest of Kierkegaard’s overtly individualistic philosophy.

He ends this book by pointing out that passion is required in order to have faith, and passion isn’t something that somebody can teach us, unlike the epistemic fruits of rational reflection.  Instead, passion has to be experienced firsthand in order for us to understand it at all.  He contrasts this passion with the disinterested intellectualization involved in reflection, which was the means used in Hegel’s approach to try and understand faith.

Kierkegaard doesn’t think that Hegel’s method will suffice since it isn’t built upon a fundamentally subjective experiential foundation and instead tries to understand faith and systematize it through an objective analysis based on logic and rational reflection.  Although I see logic and rational reflection as most important for best achieving our overall happiness and life fulfillment, I can still appreciate the significant role of passion and felt experience within the human condition, our attraction to it, and its role in religious belief.  I can also appreciate how our overall satisfaction and life fulfillment are themselves instantiated and evaluated as a subjective felt experience, and one that is entirely individualistic.  And so I can’t help but agree with Kierkegaard, in recognizing that there is no substitute for a subjective experience, and no way to adequately account for the essence of those experiences through entirely non-subjective (objective) means.

The individual subject and their conscious experience is of primary importance (it’s the only thing we can be certain exists), and the human need to find meaning in an apparently meaningless world is perhaps the most important facet of that ongoing conscious experience.  Even though I disagree with a lot of what Kierkegaard believed, it wasn’t all bull$#!+.  I think he captured and expressed some very important points about the individual and some of the psychological forces that color the view of our personal identity and our own existence.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part VI)

This is the last post I’m going to write for this particular post-series on Predictive Processing (PP).  Here are the links to parts 1, 2, 3, 4, and 5.  I’ve already explored a bit on how a PP framework can account for folk psychological concepts like beliefs, desires, and emotions, how it accounts for action, language and ontology, knowledge, and also perception, imagination, and reasoning.  In this final post for this series, I’m going to explore consciousness itself and how some theories of consciousness fit very nicely within a PP framework.

Consciousness as Prediction (Predicting The Self)

Earlier in this post-series I explored how PP treats perception (whether online or an offline form like imagination) as simply predictions pertaining to incoming sensory information, with varying degrees of precision weighting assigned to the resulting prediction error.  In a sense then, consciousness just is prediction.  At the very least, it is a subset of the predictions, likely those that are higher up in the predictive hierarchy.  This is all going to depend on which aspect or level of consciousness we are trying to explain, and as philosophers and cognitive scientists well know, consciousness is difficult to pin down and define in any precise way.
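To make the idea of precision-weighted prediction error a bit more concrete, here is a minimal toy sketch (my own illustration, not any particular model from the PP literature): a prediction is nudged toward each new observation by an amount scaled by the precision assigned to the error, so that highly trusted errors revise the model quickly while noisy, low-precision errors are largely discounted.

```python
def update_prediction(prediction, observation, precision, learning_rate=0.5):
    """One step of a toy predictive-processing update.

    The prediction error (observation - prediction) is weighted by the
    precision assigned to it before it is allowed to revise the prediction.
    """
    error = observation - prediction
    return prediction + learning_rate * precision * error

# A trusted (high-precision) signal revises the model quickly...
p = 0.0
for _ in range(10):
    p = update_prediction(p, observation=1.0, precision=0.9)

# ...while a noisy (low-precision) signal barely moves it.
q = 0.0
for _ in range(10):
    q = update_prediction(q, observation=1.0, precision=0.05)
```

After ten steps the high-precision prediction has nearly converged on the observation while the low-precision one still sits close to its prior, which is the basic sense in which precision weighting decides whether the incoming signal or the existing model dominates.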

By consciousness, we could mean any kind of awareness at all, or we could limit this term to only apply to a system that is aware of itself.  Either way we have to be careful here if we’re looking to distinguish between consciousness generally speaking (consciousness in any form, which may be unrecognizable to us) and the unique kind of consciousness that we as human beings experience.  If an ant is conscious, it doesn’t likely have any of the richness that we have in our experience nor is it likely to have self-awareness like we do (even though a dolphin, which has a large neocortex and prefrontal cortex, is far more likely to).  So we have to keep these different levels of consciousness in mind in order to properly assess their being explained by any cognitive framework.

Looking through a PP lens, we can see that what we come to know about the world and our own bodily states is a matter of predictive models pertaining to various inferred causal relations.  These inferred causal relations ultimately stem from bottom-up sensory input.  But when this information is abstracted at higher and higher levels, eventually one can (in principle) get to a point where those higher level models begin to predict the existence of a unified prediction engine.  In other words, a subset of the highest-level predictive models may eventually predict itself as a self.  We might describe this process as the emergence of some level of self-awareness, even if higher levels of self-awareness aren’t possible unless particular kinds of higher level models have been generated.

What kinds of predictions might be involved with this kind of emergence?  Well, we might expect that predictions pertaining to our own autobiographical history, which is largely composed of episodic memories of our past experience, would contribute to this process (e.g. “I remember when I went to that amusement park with my friend Mary, and we both vomited!”).  If we begin to infer what is common or continuous between those memories of past experiences (even if only 1 second in the past), we may discover that there is a form of psychological continuity or identity present.  And if this psychological continuity (this type of causal relation) is coincident with an incredibly stable set of predictions pertaining to (especially internal) bodily states, then an embodied subject or self can plausibly emerge from it.

This emergence of an embodied self is also likely fueled by predictions pertaining to other objects that we infer to be subjects.  For instance, in order to develop a theory of mind about other people, that is, in order to predict how other people will behave, we can’t simply model their external behavior as we can for something like a rock falling down a hill.  This can work up to a point, but eventually it’s just not good enough as behaviors become more complex.  Animal behavior, most especially that of humans, is far more complex than that of inanimate objects, and as such it is going to be far more effective to infer some internal hidden causes for that behavior.  Our predictions work better when they involve some kind of internal intentionality and goal-directedness operating within any animate object, thereby transforming that object into a subject.  This should be no less true for how we model our own behavior.

If this object that we’ve now inferred to be a subject seems to behave in ways that we see ourselves as behaving (especially if it’s another human being), then we can begin to infer some kind of equivalence despite being separate subjects.  We can begin to infer that they too have beliefs, desires, and emotions, and thus that they have an internal perspective that we can’t directly access just as they aren’t able to access ours.  And we can also see ourselves from a different perspective based on how we see those other subjects from an external perspective.  Since I can’t easily see myself from an external perspective, when I look at others and infer that we are similar kinds of beings, then I can begin to see myself as having both an internal and external side.  I can begin to infer that others see me similar to the way that I see them, thus further adding to my concept of self and the boundaries that define that self.

Multiple meta-cognitive predictions can be inferred based on all of these interactions with others, and from the introspective interactions with our brain’s own models.  Once this happens, a cognitive agent like ourselves may begin to think about thinking and think about being a thinking being, and so on and so forth.  All of these cognitive moves would seem to provide varying degrees of self-hood or self-awareness.  And these can all be thought of as the brain’s best guesses that account for its own behavior.  Either way, it seems that the level of consciousness that is intrinsic to an agent’s experience is going to be dependent on what kinds of higher level models and meta-models are operating within the agent.

Consciousness as Integrated Information

One prominent theory of consciousness is the Integrated Information Theory of Consciousness, otherwise known as IIT.  This theory, initially formulated by Giulio Tononi back in 2004 and which has undergone development ever since, posits that consciousness is ultimately dependent on the degree of information integration that is inherent in the causal properties of some system.  Another way of saying this is that the causal system specified is unified such that every part of the system must be able to affect and be affected by the rest of the system.  If you were to physically isolate one part of a system from the rest of it (and if this part was the least significant to the rest of the system), then the resulting change in the cause-effect structure of the system would quantify the degree of integration.  A large change in the cause-effect structure based on this part’s isolation from the system (that is, by having introduced what is called a minimum partition to the system) would imply a high degree of information integration and vice versa.  And again, a high degree of integration implies a high degree of consciousness.
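To make the idea of a minimum partition concrete, here is a toy sketch, emphatically not IIT’s actual Φ measure (which is far more involved): a three-node network where each node computes the XOR of the other two, and a crude “partition” severs one node’s inputs.  Counting how many states have their dynamics altered by the cut serves as a rough proxy for how much the partition changes the cause-effect structure; both function names are hypothetical:

```python
from itertools import product

def step(state, cut_c=False):
    """One update of a three-node toy network in which each node
    computes the XOR of the other two.  cut_c severs node C's
    inputs (a crude 'partition'), so C merely holds its state."""
    a, b, c = state
    return (b ^ c, a ^ c, c if cut_c else a ^ b)

def integration_proxy():
    """Count the initial states whose one-step successor changes
    when the partition is introduced: a rough proxy for how much
    the cut alters the system's cause-effect structure."""
    return sum(step(s) != step(s, cut_c=True)
               for s in product([0, 1], repeat=3))
```

In a fully feed-forward network the analogous count for a downstream cut would be zero, which is the intuition behind IIT’s claim (discussed below) that feed-forward systems aren’t conscious.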

Notice how this information integration axiom in IIT posits that a cognitive system that is entirely feed-forward will not be conscious.  So if our brain processed incoming sensory information from the bottom up and there was no top-down generative model feeding downward through the system, then IIT would predict that our brain wouldn’t be able to produce consciousness.  PP on the other hand, posits a feedback system (as opposed to feed-forward) where the bottom-up sensory information that flows upward is met with a downward flow of top-down predictions trying to explain away that sensory information.  The brain’s predictions cause a change in the resulting prediction error, and this prediction error serves as feedback to modify the brain’s predictions.  Thus, a cognitive architecture like that suggested by PP is predicted to produce consciousness according to the most fundamental axiom of IIT.

Additionally, PP posits cross-modal sensory integration, where the brain combines (especially lower level) predictions spanning various spatio-temporal scales and different sensory modalities into a unified whole.  For example, if I am looking at and petting a black cat lying on my lap and hearing it purr, PP posits that my perceptual experience and contextual understanding of that experience are based on having integrated the visual, tactile, and auditory expectations that I’ve associated with a “black cat”.  A unified experience is contingent on such a conjunction of predictions occurring simultaneously; otherwise we would face a barrage of millions or billions of separate causal relations (including those stemming from different sensory modalities), or millions or billions of separate conscious experiences, which would seem to necessitate separate consciousnesses all happening at the same time.

Evolution of Consciousness

Since IIT identifies consciousness with integrated information, it can plausibly account for why it evolved in the first place.  The basic idea here is that a brain that is capable of integrating information is more likely to exploit and understand an environment that has a complex causal structure on multiple time scales than a brain that has informationally isolated modules.  This idea has been tested and confirmed to some degree by artificial life simulations (animats) where adaptation and integration are both simulated.  The organism in these simulations was a Braitenberg-like vehicle that had to move through a maze.  After 60,000 generations of simulated brains evolving through natural selection, it was found that there was a monotonic relationship between their ability to get through the maze and the amount of simulated information integration in their brains.

This increase in adaptation was the result of an effective increase in the number of concepts that the organism could make use of given the limited number of elements and connections possible in its cognitive architecture.  In other words, given a limited number of connections in a causal system (such as a group of neurons), you can pack more functions per element if the level of integration with respect to those connections is high, thus giving an evolutionary advantage to those with higher integration.  Therefore, when all else is equal in terms of neural economy and resources, higher integration gives an organism the ability to take advantage of more regularities in its environment.

From a PP perspective, this makes perfect sense, because the complex causal structure of the environment is described as being modeled at many different levels of abstraction and at many different spatio-temporal scales.  All of these modeled causal relations are also described as having a hierarchical structure, with models contained within models, and with many associations existing between various models.  These associations between models can be accounted for by a cognitive architecture that re-uses certain sets of neurons in multiple models, so the association is effectively instantiated by some literal degree of neuronal overlap.  And of course, these associations between multiply-leveled predictions allow the brain to exploit (and create!) as many regularities in the environment as possible.  In short, both PP and IIT make a lot of practical sense from an evolutionary perspective.

That’s All Folks!

And this concludes my post-series on the Predictive Processing (PP) framework and how I see it as being applicable to a far broader account of mentality and brain function than is generally assumed.  If there are any takeaways from this post-series, I hope you can at least appreciate the parsimony and explanatory scope of predictive processing, and the value of viewing the brain as a creative and highly capable prediction engine.

Conscious Realism & The Interface Theory of Perception

A few months ago I was reading an interesting article in The Atlantic about Donald Hoffman’s Interface Theory of Perception.  As a person highly interested in consciousness studies, cognitive science, and the mind-body problem, I found the basic concepts of his theory quite fascinating.  What was most interesting to me was the counter-intuitive connection between evolution and perception that Hoffman has proposed.  Now it is certainly reasonable and intuitive to assume that evolutionary natural selection would favor perceptions that are closer to “the truth” or closer to the objective reality that exists independent of our minds, simply because perceptions that are more accurate seem more likely to promote survival than perceptions that are not.  As an example, if I were to perceive lions as inert objects like trees, I would be more likely to be eaten (and thus naturally selected against) than someone who perceives lions as mobile predators that could kill them.

While this is intuitive and reasonable to some degree, what Hoffman actually shows, using evolutionary game theory, is that among organisms of comparable complexity, those with perceptions that are closer to reality will never be selected for nearly as much as those with perceptions tuned to fitness instead.  What’s more, truth in this case will be driven to extinction when it is pitted against perceptual models tuned to fitness.  That is to say, evolution will select for organisms that perceive the world in a way that is less accurate (in terms of the underlying reality) as long as the perception is tuned for survival benefits.  The bottom line is that at any given level of complexity, it is more costly to process more information (costing more time and resources), and so if a “heuristic” method of perception can evolve instead, one that “hides” all the complex information underlying reality and provides a species-specific guide to adaptive behavior, that will always be the preferred choice.

To see this point more clearly, let’s consider an example.  Let’s imagine there’s an animal that regularly eats some kind of insect, such as a beetle, but it needs to eat a particular sized beetle or else it has a relatively high probability of eating the wrong kind of beetle (and we can assume that the “wrong” kind of beetle would be deadly to eat).  Now let’s imagine two possible types of evolved perception: it could have really accurate perceptions about the various sizes of beetles that it encounters so it can distinguish many different sizes from one another (and then choose the proper size range to eat), or it could evolve less accurate perceptions such that all beetles that are either too small or too large appear as indistinguishable from one another (maybe all the wrong-sized beetles whether too large or too small look like indistinguishable red-colored blobs) and perhaps all the beetles that are in the ideal size range for eating appear as green-colored blobs (that are again, indistinguishable from one another).  So the only discrimination in this latter case of perception is between red and green colored blobs.

Both types of perception would solve the problem of which beetles to eat, but the latter type (even if much less accurate) would bestow a fitness advantage over the former, by allowing the animal to process much less information about the environment and to ignore relatively useless detail (like specific beetle size).  In this case, with beetle size as the only variable under consideration for survival, evolution would select for the organism that knows less total information about beetle size, as long as it knows what matters for distinguishing the edible beetles from the poisonous ones.  Now we can imagine that in some cases the fitness function could align with the true structure of reality, but this is not what we expect to see generically in the world.  At best we may see some overlap between the two, but since there needn’t be any, truth-tracking perception will tend to go extinct.
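The logic of the beetle example can be captured with simple replicator dynamics.  In this hedged sketch, both strategies forage equally well, but the “truth” strategy pays a processing cost for maintaining ten size categories while the interface strategy maintains only two; the payoff numbers and the cost-per-category parameter are illustrative assumptions of mine, not values from Hoffman’s actual simulations:

```python
def fitness(correct_rate, categories, cost_per_category=0.05):
    """Toy payoff: foraging success minus a processing cost that
    grows with the number of perceptual categories maintained."""
    return correct_rate - cost_per_category * categories

def replicator(shares, fitnesses, generations=100):
    """Discrete replicator dynamics: each strategy's population
    share grows in proportion to its relative fitness."""
    truth, interface = shares
    f_t, f_i = fitnesses
    for _ in range(generations):
        mean = truth * f_t + interface * f_i
        truth *= f_t / mean
        interface *= f_i / mean
    return truth, interface

# Equal foraging success; the truth strategy tracks ten size
# categories, the interface strategy only two (edible or not).
f_truth = fitness(0.9, categories=10)     # pays a heavy information cost
f_interface = fitness(0.9, categories=2)  # cheap species-specific guide
truth_share, interface_share = replicator((0.5, 0.5),
                                          (f_truth, f_interface))
```

Run over many generations, the interface strategy’s share approaches one and the truth strategy’s share collapses toward zero, which is exactly the “truth goes extinct” result described above, here driven purely by the cost asymmetry.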

Perception is Analogous to a Desktop Computer Interface

Hoffman analogizes this concept of a “perception interface” with the desktop interface of a personal computer.  When we see icons of folders on the desktop and drag one of those icons to the trash bin, we shouldn’t take that interface literally, because there isn’t literally a folder being moved to a literal trash bin but rather it is simply an interface that hides most if not all of what is really going on in the background — all those various diodes, resistors and transistors that are manipulated in order to modify stored information that is represented in binary code.

The desktop interface ultimately provides us with an easy and intuitive way of accomplishing these various information processing tasks because trying to do so in the most “truthful” way — by literally manually manipulating every diode, resistor, and transistor to accomplish the same task — would be far more cumbersome and less effective than using the interface.  Therefore the interface, by hiding this truth from us, allows us to “navigate” through that computational world with more fitness.  In this case, having more fitness simply means being able to accomplish information processing goals more easily, with less resources, etc.

Hoffman goes on to say that even though we shouldn’t take the desktop interface literally, obviously we should still take it seriously, because moving that folder to the trash bin can have direct implications on our lives, by potentially destroying months worth of valuable work on a manuscript that is contained in that folder.  Likewise we should take our perceptions seriously, even if we don’t take them literally.  We know that stepping in front of a moving train will likely end our conscious experience even if it is for causal reasons that we have no epistemic access to via our perception, given the species-specific “desktop interface” that evolution has endowed us with.

Relevance to the Mind-body Problem

The crucial point with this analogy is the fact that if our knowledge was confined to the desktop interface of the computer, we’d never be able to ascertain the underlying reality of the “computer”, because all that information that we don’t need to know about that underlying reality is hidden from us.  The same would apply to our perception, where it would be epistemically isolated from the underlying objective reality that exists.  I want to add to this point that even though it appears that we have found the underlying guts of our consciousness, i.e., the findings in neuroscience, it would be mistaken to think that this approach will conclusively answer the mind-body problem because the interface that we’ve used to discover our brains’ underlying neurobiology is still the “desktop” interface.

So while we may think we’ve found the underlying guts of “the computer”, this is far from certain, given the plausibility of, and support for, this theory.  This may end up being the reason why many philosophers claim there is a “hard problem” of consciousness, and one that can’t be solved.  It could be that we are simply stuck in the desktop interface and there’s no way to find out about the underlying reality that gives rise to that interface.  All we can do is maximize our knowledge of the interface itself, and that would be our epistemic boundary.

Predictions of the Theory

Now if this was just a fancy idea put forward by Hoffman, that would be interesting in its own right, but the fact that it is supported by evolutionary game theory and genetic algorithm simulations shows that the theory is more than plausible.  Even better, the theory is actually a scientific theory (and not just a hypothesis), because it has made falsifiable predictions as well.  It predicts that “each species has its own interface (with some similarities between phylogenetically related species), almost surely no interface performs reconstructions (read the second link for more details on this), each interface is tailored to guide adaptive behavior in the relevant niche, much of the competition between and within species exploits strengths and limitations of interfaces, and such competition can lead to arms races between interfaces that critically influence their adaptive evolution.”  The theory predicts that interfaces are essential to understanding evolution and the competition between organisms, whereas the reconstruction theory makes such understanding impossible.  Thus, evidence of interfaces should be widespread throughout nature.

In his paper, he mentions the jewel beetle as a case in point.  This beetle has a perceptual category, desirable females, which works well in its niche, and it uses it to choose larger females because they are the best mates.  According to the reconstructionist thesis, the male’s perception of desirable females should incorporate a statistical estimate of the true sizes of the most fertile females, but it doesn’t do this.  Instead, it has a category based on “bigger is better”, and although this produces high-fitness behavior for the male beetle in its evolutionary niche, if it comes into contact with a “stubbie” beer bottle it falls into a kind of infinite loop, drawn to this supernormal stimulus because the bottle is smooth, brown, and extremely large.  We can see that the “bigger is better” perceptual category relies on less information about the true nature of reality and instead takes an “informational shortcut”.  The supernormal stimuli that have been found in many species further support the theory and count as evidence against the reconstructionist claim that perceptual categories estimate the statistical structure of the world.

More on Conscious Realism (Consciousness is all there is?)

This last link provided here shows the mathematical formalism of Hoffman’s conscious realist theory as proved by Chetan Prakash.  It contains a thorough explanation of the conscious realist theory (which goes above and beyond the interface theory of perception) and it also provides answers to common objections put forward by other scientists and philosophers on this theory.

Darwin’s Big Idea May Be The Biggest Yet

Back in 1859, Charles Darwin published his famous theory of evolution by natural selection, whereby inherent variations among the individual members of a population of organisms produce a differential in survival and reproductive success, leading to the natural selection of some subset of organisms within that population and eventually to speciation events.  As Darwin explained in his On The Origin of Species:

If during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and I think this cannot be disputed; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, I think it would be a most extraordinary fact if no variation ever had occurred useful to each being’s own welfare, in the same way as so many variations have occurred useful to man. But if variations useful to any organic being do occur, assuredly individuals thus characterised will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance they will tend to produce offspring similarly characterised. This principle of preservation, I have called, for the sake of brevity, Natural Selection.

While Darwin’s big idea completely transformed biology in terms of it providing (for the first time in history) an incredibly robust explanation for the origin of the diversity of life on this planet, his idea has since inspired other theories pertaining to perhaps the three largest mysteries that humans have ever explored: the origin of life itself (not just the diversity of life after it had begun, which was the intended scope of Darwin’s theory), the origin of the universe (most notably, why the universe is the way it is and not some other way), and also the origin of consciousness.

Origin of Life

In order to solve the first mystery (the origin of life itself), geologists, biologists, and biochemists are searching for plausible models of abiogenesis, whereby the general scheme of these models involves geological chemical reactions that would have begun to incorporate certain kinds of energetically favorable organic chemistries, such that organic, self-replicating molecules eventually resulted.  Where Darwin’s idea of natural selection comes into play with life’s origin is in regard to the origin and evolution of these self-replicating molecules.  First of all, in order for any molecule at all to build up in concentration, the conditions must be such that the reaction producing the molecule in question is more favorable than the reverse reaction, in which the product transforms back into the initial starting materials.  If merely one chemical reaction (out of a countless number occurring on the early earth) led to a self-replicating product, this would increasingly favor the production of that product, and thus self-replicating molecules themselves would be naturally selected for.  Once one of them was produced, there would have been a cascade of exponential growth, at least up to the limit set by the availability of the starting materials and energy sources present.

Now if we assume that at least some subset of these self-replicating molecules (if not all of them) had an imperfect fidelity in the copying process (which is highly likely) and/or underwent even a slight change after replication by reacting with other neighboring molecules (also likely), this would provide them with a means of mutation.  Mutations would inevitably lead to some molecules becoming more effective self-replicators than others, and then evolution through natural selection would take off, eventually leading to modern RNA/DNA.  So not only does Darwin’s big idea account for the evolution of diversity of life on this planet, but the basic underlying principle of natural selection would also account for the origin of self-replicating molecules in the first place, and subsequently the origin of RNA and DNA.
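The selection dynamics just described can be illustrated with a toy model: two self-replicating “molecules” with different replication rates, grown geometrically each step and then rescaled to a fixed resource-limited capacity (mimicking the limit set by available starting materials).  The rates, capacity, and molecule names are arbitrary illustrative values:

```python
def evolve_replicators(pop, rates, steps=50, capacity=1000.0):
    """pop maps molecule -> count; rates maps molecule -> copies
    made per step.  Each step every type grows geometrically, then
    the total is rescaled to a resource-limited carrying capacity."""
    for _ in range(steps):
        pop = {m: n * rates[m] for m, n in pop.items()}
        total = sum(pop.values())
        pop = {m: n * capacity / total for m, n in pop.items()}
    return pop

# Two replicators start at equal abundance; the slightly faster
# copier comes to dominate essentially the whole population.
pop = evolve_replicators({'slow': 500.0, 'fast': 500.0},
                         {'slow': 1.1, 'fast': 1.3})
```

Because growth is geometric, even a modest difference in replication rate compounds relentlessly, which is why a single lucky self-replicating chemistry could plausibly have taken over the available materials on the early earth.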

Origin of the Universe

Another grand idea that is gaining heavy traction in cosmology is inflationary cosmology, which posits that the early universe underwent a period of rapid expansion, and that due to quantum mechanical fluctuations in the microscopically sized inflationary region, seed universes would have resulted, each having slightly different properties, one of which expanded to become the universe that we live in.  Inflationary cosmology is currently heavily supported because it has led to a number of predictions, many of which have already been confirmed by observation (it explains many large-scale features of our universe such as its homogeneity, isotropy, and flatness).  What I find most interesting with inflationary theory is that it predicts the existence of a multiverse, whereby we are but one of an extremely large number of universes (predicted to be on the order of 10^500, if not infinite), each having slightly different constants and so forth.

Once again, Darwin’s big idea, when applied to inflationary cosmology, would lead to the conclusion that our universe is the way it is because it was naturally selected to be that way.  The fact that its constants are within a very narrow range such that matter can even form, would make perfect sense, because even if an infinite number of universes exist with different constants, we would only expect to find ourselves in one that has the constants within the necessary range in order for matter, let alone life to exist.  So any universe that harbors matter, let alone life, would be naturally selected for against all the other universes that didn’t have the right properties to do so, including for example, universes that had too high or too low of a cosmological constant (such as those that would have instantly collapsed into a Big Crunch or expanded into a heat death far too quickly for any matter or life to have formed), or even universes that didn’t have the proper strong nuclear force to hold atomic nuclei together, or any other number of combinations that wouldn’t work.  So any universe that contains intelligent life capable of even asking the question of their origins, must necessarily have its properties within the required range (often referred to as the anthropic principle).

After our universe formed, the same principle would also apply to each galaxy and each solar system within those galaxies: because variations exist in each galaxy and within each constituent solar system (differential properties analogous to different genes in a gene pool), only those that have an acceptable range of conditions are capable of harboring life.  With over 10^22 stars in the observable universe (an unfathomably large number), and billions of years for different conditions to evolve within each solar system surrounding those many stars, it isn’t surprising that the temperature and other conditions would eventually become acceptable for liquid water and organic chemistries in many of those solar systems.  Even if there was only one life-permitting planet per galaxy (on average), that would add up to over 100 billion life-permitting planets in the observable universe alone (with many orders of magnitude more in the non-observable universe).  So given enough time, and given some mechanism of variation (in this case, differences in star composition and dynamics), natural selection in a sense can also account for the evolution of solar systems that do in fact have life-permitting conditions in a universe such as our own.
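The anthropic reasoning here is easy to demonstrate with a small Monte Carlo sketch: draw “universes” with a uniformly random constant, and note that once we condition on the narrow life-permitting window, every observed constant looks fine-tuned even though nothing tuned it.  The window boundaries and function name are arbitrary stand-ins, not real cosmological values:

```python
import random

def sample_life_permitting(n, low=0.49, high=0.51, seed=1):
    """Draw n 'universes' with a constant uniform on [0, 1) and
    keep only those inside a narrow life-permitting window."""
    rng = random.Random(seed)
    return [c for c in (rng.random() for _ in range(n))
            if low <= c <= high]

# Conditioning on the existence of observers guarantees that every
# observed constant falls inside the narrow window, even though the
# underlying draws were entirely untuned.
observed = sample_life_permitting(100_000)
```

The observers-only sample is biased by construction; that selection effect, rather than any tuning of the draws, is what the anthropic principle points to.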

Origin of Consciousness

The last significant mystery I’d like to discuss involves the origin of consciousness.  While there are many current theories pertaining to different aspects of consciousness, and while there has been much research performed in the neurosciences, cognitive sciences, psychology, etc., pertaining to how the brain works and how it correlates to various aspects of the mind and consciousness, the brain sciences (though neuroscience in particular) are in their relative infancy and so there are still many questions that haven’t been answered yet.  One promising theory that has already been shown to account for many aspects of consciousness is Gerald Edelman’s theory of neuronal group selection (NGS) otherwise known as neural Darwinism (ND), which is a large scale theory of brain function.  As one might expect from the name, the mechanism of natural selection is integral to this theory.  In ND, the basic idea consists of three parts as read on the Wiki:

  1. Anatomical connectivity in the brain occurs via selective mechanochemical events that take place epigenetically during development.  This creates a diverse primary neurological repertoire by differential reproduction.
  2. Once structural diversity is established anatomically, a second selective process occurs during postnatal behavioral experience through epigenetic modifications in the strength of synaptic connections between neuronal groups.  This creates a diverse secondary repertoire by differential amplification.
  3. Re-entrant signaling between neuronal groups allows for spatiotemporal continuity in response to real-world interactions.  Edelman argues that thalamocortical and corticocortical re-entrant signaling are critical to generating and maintaining conscious states in mammals.

In a nutshell, the basic differentiated structure of the brain that forms in early development is accomplished through cellular proliferation, migration, distribution, and branching processes that involve selection processes operating on random differences in the adhesion molecules that these processes use to bind one neuronal cell to another.  These crude selection processes result in a rough initial configuration that is for the most part fixed.  However, because there are a diverse number of sets of different hierarchical arrangements of neurons in various neuronal groups, there are bound to be functionally equivalent groups of neurons that are not equivalent in structure, but are all capable of responding to the same types of sensory input.  Because some of these groups should in theory be better than others at responding to some particular type of sensory stimuli, this creates a form of neuronal/synaptic competition in the brain, whereby those groups of neurons that happen to have the best synaptic efficiency for the stimuli in question are naturally selected over the others.  This in turn leads to an increased probability that the same network will respond to similar or identical signals in the future.  Each time this occurs, synaptic strengths increase in the most efficient networks for each particular type of stimuli, and this would account for a relatively quick level of neural plasticity in the brain.
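The winner-take-all character of this synaptic competition can be sketched in a few lines: functionally equivalent groups with nearly identical initial “synaptic strengths” compete for a stimulus, the best responder is amplified on each trial, and an initially slight advantage compounds.  This is a toy illustration of the selection principle only, not Edelman’s actual model; the function name and parameter values are mine:

```python
def select_neuronal_groups(strengths, trials=20, boost=1.2):
    """On each stimulus presentation the group with the highest
    synaptic strength 'wins' and is amplified, so a slight initial
    advantage compounds into dominance (winner-take-all selection)."""
    strengths = list(strengths)
    for _ in range(trials):
        winner = max(range(len(strengths)), key=lambda i: strengths[i])
        strengths[winner] *= boost
    return strengths

# Three functionally equivalent groups; the marginally strongest
# one is repeatedly selected and ends up dominating.
groups = select_neuronal_groups([1.00, 1.01, 0.99])
```

The compounding here is the same mechanism invoked in the text: each win increases the probability (here, certainty, for simplicity) that the same network responds to similar stimuli in the future.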

The last aspect of the theory involves what Edelman called re-entrant signaling, whereby a sampling of the stimuli from functionally different groups of neurons occurring at the same time leads to a form of self-organizing intelligence.  This would provide a means for explaining how we experience spatiotemporal consistency in our sensory experience.  Basically, functionally different parts of the brain, such as the visual maps pertaining to color versus those pertaining to orientation or shape, would be linked by re-entrant connections such that the (previously segregated) regions can function in parallel and correlate with one another, producing an amalgamation of the two types of neural maps.  Once this re-entrant signaling is accomplished between higher order or higher complexity maps in the brain, such as those pertaining to value-dependent memory storage centers, language centers, and perhaps back to various sensory cortical regions, this would create an even richer level of synchronization, possibly leading to consciousness (according to the theory).  In all aspects of the theory, the natural selection of differentiated neuronal structures, of synaptic connections and strengths, and eventually of larger re-entrant connections would be responsible for creating the parallel and correlated processes in the brain believed to be required for consciousness.  There’s been an increasing amount of support for this theory, and more evidence continues to accumulate.  In any case, it is a brilliant idea and one with a lot of promise in potentially explaining one of the most fundamental aspects of our existence.

Darwin’s Big Idea May Be the Biggest Yet

In my opinion, Darwin’s theory of evolution through natural selection was perhaps the most profound theory ever discovered.  I’d even say that it beats Einstein’s theory of relativity because of its massive explanatory scope and carryover to other disciplines, such as cosmology, neuroscience, and even immunology (see Edelman’s Nobel work on the immune system, where he showed that it too works through natural selection, as opposed to some type of re-programming/learning).  Based on the basic idea of natural selection, we have been able to provide a number of robust explanations pertaining to why the universe is likely to be the way it is, how life likely began, how it evolved afterward, and possibly how life eventually evolved brains capable of consciousness.  It is truly one of the most fascinating principles I’ve ever learned about, and I’m honestly awestruck by its beauty, simplicity, and explanatory power.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 3 of 3)


This is part 3 of 3 of this post.  Click here to read part 2.

Reactive Devaluation

Studies have shown that if a claim is believed to have come from a friend or ally, the person receiving it will be more likely to think that it has merit and is truthful.  Likewise, if it is believed to have come from an enemy or some member of the opposition, it is significantly devalued.  This bias is known as reactive devaluation.  A few psychologists determined that the likeability of the source of a message is one of the key factors in this effect: people value information coming from a likeable source more than from a source they dislike, to roughly the same degree that they value information coming from an expert over a non-expert.  This is quite troubling.  We’ve often heard how political campaigns and certain debates are seen more as popularity contests than anything else, and this bias is likely a primary reason why this is so often the case.

Unfortunately, a large part of society has painted a nasty picture of atheism, skepticism, and the like.  As it turns out, people who do not believe in a god (mainly the Christian god) are the least trusted minority in America, according to a sociological study performed a few years ago (Johanna Olexy and Lee Herring, “Atheists Are Distrusted: Atheists Identified as America’s Most Distrusted Minority, According to Sociological Study,” American Sociological Association News, May 3, 2006).  Regarding the fight against dogmatism, the rational thinker needs to carefully consider how to improve their likeability to the recipients of their message, if nothing else by going above and beyond to remain kind and respectful and to maintain a positive, uplifting attitude while presenting their argument.  In any case, the person arguing will always be up against whatever societal expectations and preferences may reduce their likeability, so there’s also a need to remind others as well as ourselves of what matters most when discussing an issue: the message rather than the messenger.

The Backfire Effect & Psychological Reactance

There have already been several cognitive biases mentioned that interfere with people’s abilities to accept new evidence that contradicts their beliefs.  To make matters worse, psychologists have discovered that people often react to disconfirming evidence by actually strengthening their beliefs.  This is referred to as the backfire effect.  This can make refutations futile most of the time, but there are some strategies that help to combat this bias.  The bias is largely exacerbated by a person’s emotional involvement and the fact that people don’t want to appear to be unintelligent or incorrect in front of others.  If the debating parties let tempers cool down before resuming the debate, this can be quite helpful regarding the emotional element.  Additionally, if a person can show their opponent how accepting the disconfirming evidence will benefit them, they’ll be more likely to accept it.  There are times when some of the disconfirming evidence mentioned is at least partially absorbed by the person, and they just need some more time in a less tense environment to think things over a bit.  If the situation gets too heated, or if a person risks losing face with peers around them, the ability to persuade that person only decreases.

Another similar cognitive bias is referred to as reactance, which is basically a motivational reaction to a perceived infringement of one’s behavioral freedoms.  That is, if a person feels that someone is taking away or limiting their choices, such as when they are being heavily pressured into accepting a different point of view, they will often respond defensively.  They may feel that freedoms are being taken away from them, and it is only natural for a person to defend whatever freedoms they see themselves having or potentially losing.  Whenever one uses reverse psychology, one is in fact playing on at least an implicit knowledge of this reactance effect.  Does this mean that rational thinkers should use reverse psychology to persuade dogmatists to accept reason and evidence?  I personally don’t think that this is the right approach, because it could also backfire, and I would prefer to be honest and straightforward even if this results in less efficacy.  At the very least, however, one needs to be aware of this bias, to be careful with how one phrases arguments and presents evidence, and to maintain a calm and respectful attitude during these discussions, so that the other person doesn’t feel the need to defend themselves at the cost of ignoring evidence.

Pareidolia & Other Perceptual Illusions

Have you ever seen faces of animals or people while looking at clouds, shadows, or other similar things?  The brain has many pattern-recognition modules and will often respond erroneously to vague or even random stimuli, such that we perceive something familiar even when it isn’t actually there.  This is called pareidolia, and it is a type of apophenia (i.e. seeing patterns in random data).  Not surprisingly, many people have reported seeing various religious imagery, most notably the faces of prominent religious figures, in ordinary phenomena.  People have also erroneously heard “hidden messages” in records played in reverse, among other illusory perceptions.  This results from the fact that the brain often fills in gaps, and if one expects to see a pattern (even if the expectation is unconsciously driven by some emotional significance), the brain’s pattern-recognition modules can over-detect, producing false positives through excess sensitivity and by conflating unconscious imagery with one’s actual sensory experience, leading to distorted perceptions.  It goes without saying that this cognitive bias is perfect for reinforcing superstitious or supernatural beliefs, because what people tend to see or experience from this effect are those things that are most emotionally significant to them.  After it occurs, people believe that what they saw or heard must have been real and therefore significant.

Most people are aware that hallucinations occur in some people from time to time, under certain neurological conditions (generally caused by an imbalance of neurotransmitters).  A bias like pareidolia can effectively produce similar experiences, but without requiring the more specific neurological conditions that a textbook hallucination requires.  That is, the brain doesn’t need to be in any drug-induced state, nor does it need to be physically abnormal in any way, for this to occur, making it much more common than hallucination.  Since pareidolia is also compounded by one’s confirmation bias, and since the brain is constantly running cognitive-dissonance-reduction mechanisms in the background (as mentioned earlier), this can also result in groups of people having the same illusory experience, specifically if the group shares the same unconscious emotional motivations for “seeing” or experiencing something in particular.  These circumstances, along with the power of suggestion from even just one member of the group, can lead to what are known as collective hallucinations.  Since collective hallucinations like these inevitably lead to a feeling of corroboration and confirmation among the various people experiencing them (say, seeing a “spirit” or “ghost” of someone significant), they can reinforce one another’s beliefs, despite the fact that the experience was based on a cognitive illusion largely induced by powerful human emotions.  It’s likely no coincidence that this type of group phenomenon seems to occur only with members of a religion or cult, specifically those in a highly emotional and synchronized psychological state.  There have been many cases of large groups of people from various religions around the world claiming to see some prominent religious figure or other supernatural being, illustrating just how universal this phenomenon is within a highly emotional group context.

Hyperactive Agency Detection

There is a cognitive bias that is somewhat related to the aforementioned perceptual illusion biases which is known as hyperactive agency detection.  This bias refers to our tendency to erroneously assign agency to events that happen without an agent causing them.  Human beings all have a theory of mind that is based in part on detecting agency, where we assume that other people are causal agents in the sense that they have intentions and goals just like we do.  From an evolutionary perspective, we can see that our ability to recognize that others are intentional agents allows us to better predict another person’s behavior which is extremely beneficial to our survival (notably in cases where we suspect others are intending to harm us).

Unfortunately, our brain’s agency detection is hyperactive in that it errs on the side of caution by over-detecting possible agency rather than under-detecting it (which could negatively affect our survival prospects).  As a result, people will often ascribe agency to things and events in nature that have no underlying intentional agent.  For example, people often anthropomorphize machines (implicitly ascribing agency to them) and get angry when they don’t work properly (such as an automobile that breaks down on the way to work), often with the feeling that the machine is “out to get us”, is “evil”, etc.  Most of the time, if we have feelings like this, they are immediately checked and balanced by our rational minds, which remind us that “it’s only a car”, or “it’s not conscious”, etc.  Similarly, when natural disasters or other such events occur, our hyperactive agency detection snaps into gear, trying to find “someone to blame”.  This cognitive bias explains quite well, at least as a contributing factor, why so many ancient cultures assumed that there were gods that caused earthquakes, lightning, thunder, volcanoes, hurricanes, etc.  Quite simply, they needed the events to have been caused by someone.

Likewise, if we are walking in a forest and hear some strange sound in the bushes, we may assume that some intentional agent is present (whether another person or an animal), even though it may simply be the wind causing the noise.  Once again, we can see the evolutionary benefit of this agency detection even with the occasional “false positive”: the noise in the bushes could very well be a predator or some other source of danger, so it is usually better for our brains to assume that the noise is coming from an intentional agent.  It’s also entirely possible that even in cases where the source of the sound was determined to be the wind, early cultures ascribed an intentional stance to the wind itself (e.g. a “wind” god).  In all of these cases, we can see how our hyperactive agency detection can lead to irrational, theistic, and other supernatural belief systems, and we need to be aware of this bias when evaluating our intuitions about how the world works.

Risk Compensation

It is only natural that the more a person feels protected from various harms, the less carefully they tend to behave, and vice versa.  This tendency, sometimes referred to as risk compensation, is often harmless, because generally speaking, one who is well protected from harm does have less to worry about than one who is not, and can thus take greater relative risks as a result of that larger cushion of safety.  Where this causes a problem, however, is when the perceived level of protection is far higher than the actual one.  The worst-case scenario that comes to mind is a person who appeals to the supernatural and thinks that no harm can come to them because of some supernatural source of protection, perhaps even one they believe provides the greatest protection possible.  In this scenario, their risk compensation is biased to a negative extreme, and they will be more likely to behave irresponsibly because they don’t have realistic expectations of the consequences of their actions.  In a post-enlightenment society, we’ve acquired a higher responsibility to heed the knowledge gained from scientific discoveries so that we can behave more rationally as we learn more about the consequences of our actions.

It’s not at all difficult to see how the effects of various religious dogma on one’s level of risk compensation can inhibit that society from making rational, responsible decisions.  We have several serious issues plaguing modern society including those related to environmental sustainability, climate change, pollution, war, etc.  If believers in the supernatural have an attitude that everything is “in God’s hands” or that “everything will be okay” because of their belief in a protective god, they are far less likely to take these kinds of issues seriously because they fail to acknowledge the obvious harm on everyone in society.  What’s worse is that not only are dogmatists reducing their own safety, but they’re also reducing the safety of their children, and of everyone else in society that is trying to behave rationally with realistic expectations.  If we had two planets, one for rational thinkers, and one for dogmatists, this would no longer be a problem for everyone.  The reality is that we share one planet and thus we all suffer the consequences of any one person’s irresponsible actions.  If we want to make more rational decisions, especially those related to the safety of our planet’s environment, society, and posterity, people need to adjust their risk compensation such that it is based on empirical evidence using a rational, proven scientific methodology.  This point clearly illustrates the need to phase out supernatural beliefs since false beliefs that don’t correspond with reality can be quite dangerous to everybody regardless of who carries the beliefs.  In any case, when one is faced with the choice between knowledge versus ignorance, history has demonstrated which choice is best.

Final thoughts

The fact that we are living in an information age with increasing global connectivity, and in a society that is becoming increasingly reliant on technology, will likely help to combat the ill effects of these cognitive biases.  These factors are making it increasingly difficult to hide oneself from the evidence and from the proven efficacy of using the scientific method to form accurate beliefs about the world around us.  Societal expectations are also changing as a result, and it is becoming less and less socially acceptable to be irrational and dogmatic, or to not take scientific methodology seriously.  In all of these cases, having knowledge about these cognitive biases will help people to combat them.  Even if we can’t eliminate the biases, we can safeguard ourselves by preparing for any ill effects that those biases may produce.  This is where reason and science come into play, providing us with the means to counter our cognitive biases by applying rational skepticism to all of our beliefs, and by using a logical and reliable methodology based on empirical evidence to arrive at beliefs that we can be justifiably confident (though never certain) in.  As Darwin once said, “Ignorance more frequently begets confidence than does knowledge; it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.”  Science has constantly been ratcheting our way toward a better understanding of the universe we live in, despite the fact that the results we obtain from science are often at odds with our intuitions about how we think the world is.  Only relatively recently has science started to provide us with information regarding why our intuitions are so often at odds with the scientific evidence.

We’ve made huge leaps within neuroscience, cognitive science, and psychology, and these discoveries are giving us the antidote that we need in order to reduce or eliminate irrational and dogmatic thinking.  Furthermore, those that study logic have an additional advantage in that they are more likely to spot fallacious arguments that are used to support this or that belief, and thus they are less likely to be tricked or persuaded by arguments that often appear to be sound and strong, but actually aren’t.  People fall prey to fallacious arguments all the time, and it is mostly because they haven’t learned how to spot them (which courses in logic and other reasoning can help to mitigate).  Thus, overall I believe it is imperative that we integrate knowledge concerning our cognitive biases into our children’s educational curriculum, starting at a young age (in a format that is understandable and engaging of course).  My hope is that if we start to integrate cognitive education (and complementary courses in logic and reasoning) as a part of our foundational education curriculum, we will see a cascade effect of positive benefits that will guide our society more safely into the future.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 2 of 3)


This is part 2 of 3 of this post.  Click here to read part 1.

The Feeling of Certainty & The Overconfidence Effect

There are a lot of factors that affect how certain we feel that one belief or another is in fact true.  One factor is what I like to call the time equals truth fallacy.  With this cognitive bias, we tend to feel more certain about our beliefs as time progresses, despite not gaining any new evidence to support them.  Since time spent holding a particular belief is the only factor involved here, the effect is greatest on those who are relatively isolated or sheltered from new information.  So if a person has been indoctrinated with a particular set of beliefs (let alone irrational beliefs), and they are effectively “cut off” from any sources of information that could serve to refute those beliefs, it will become increasingly difficult to persuade them later on.  This situation sounds all too familiar, as dogmatic groups often isolate themselves and shelter their members from outside influences for this very reason.  If the group can isolate its members for long enough (or the members actively isolate themselves), the dogma effectively gets burned into them, and the brainwashing is far more successful.  That this cognitive bias has been exploited by various dogmatic and religious leaders throughout history is hardly surprising, considering how effective it is.

Though I haven’t researched or come across any proposed evolutionary reasons behind the development of this cognitive bias, I do have at least one hypothesis pertaining to its origin.  I’d say that this bias may have resulted from the fact that the beliefs that matter most (from an evolutionary perspective) are those that are related to our survival goals (e.g. finding food sources or evading a predator).  So naturally, any beliefs that didn’t cause noticeable harm to an organism weren’t correlated with harm, and thus were more likely to be correct (i.e. keep using whatever works and don’t fix what ain’t broke).  However, once we started adding many more beliefs to our repertoire, specifically those that didn’t directly affect our survival (including many beliefs in the supernatural), the same cognitive rules and heuristics were still being applied as before, although these new types of beliefs (when false) haven’t been naturally selected against, because they haven’t been detrimental enough to our survival (at least not yet, or not enough to force a significant evolutionary change).  So once again, evolution may have produced this bias for very advantageous reasons, but it is a sub-optimal heuristic (as always) and one that has become a significant liability after we started evolving culturally as well.

Another related cognitive bias, and one that is quite well established, is the overconfidence effect, whereby a person’s subjective feeling of confidence in his or her judgements or beliefs is predictably higher than the actual objective accuracy of those judgements and beliefs.  In a nutshell, people tend to have far more confidence in their beliefs and decisions than is warranted.  A common example cited to illustrate the intensity of this bias involves people taking certain quizzes, claiming to be “99% certain” about their answers, and then finding out afterwards that they were wrong about half of the time.  In other cases, the overconfidence effect can be even worse.
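
The quiz example is really a statement about calibration: stated confidence should match observed accuracy.  A minimal sketch, with invented quiz data, of how that gap is measured:

```python
# Hypothetical quiz results: (stated confidence, whether the answer was correct).
# A well-calibrated respondent who says "99% certain" should be right ~99% of
# the time; here, as in the studies described, they are right only half the time.
results = [
    (0.99, True), (0.99, False), (0.99, True), (0.99, False),
    (0.99, False), (0.99, True), (0.99, True), (0.99, False),
]

stated = sum(conf for conf, _ in results) / len(results)
actual = sum(1 for _, correct in results if correct) / len(results)

print(f"mean stated confidence: {stated:.2f}")  # 0.99
print(f"observed accuracy:      {actual:.2f}")  # 0.50
```

The difference between the two numbers is the overconfidence being measured; for a well-calibrated person it would be near zero.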

In a sense, the feeling of certainty is like an emotion which, like our other emotions, occurs as a natural reflex, regardless of the cause and independently of reason.  Just like anger, pleasure, or fear, the feeling of certainty can be produced by a seizure, by certain drugs, and even by electrical stimulation of certain regions of the brain.  In all of these cases, even when no particular beliefs are being thought about, the brain can produce a feeling of certainty nevertheless.  Research has shown that the feeling of knowing or certainty can also be induced through various brainwashing and trance-inducing techniques such as high repetition of words, rhythmic music, sleep deprivation (or fasting), and other types of social/emotional manipulation.  It is hardly a coincidence that many religions employ a number of these techniques within their repertoire.

The most important point to take away from learning about these confidence and certainty biases is that the feeling of certainty is not a reliable way of determining the accuracy of one’s beliefs.  Furthermore, one must realize that no matter how certain we may feel about a particular belief, we could be wrong, and oftentimes are.  Dogmatic belief systems such as those found in many religions are often propagated by a misleading and mistaken feeling of certainty, even if that feeling is more potent than any experienced before.  Often, when people ascribing to these dogmatic belief systems are questioned about the lack of empirical evidence supporting their beliefs, even after all of their arguments have been refuted, they simply reply by saying “I just know.”  Irrational, entirely unjustified responses like these illustrate the dire need for people to become aware of just how fallible their feeling of certainty can be.

Escalation of Commitment

If a person has invested their whole life in some belief system, then even if they encounter undeniable evidence that their beliefs were wrong, they are more likely to ignore it or rationalize it away than to modify their beliefs, and thus they will likely continue investing more time and energy in those false beliefs.  This is due to an effect known as escalation of commitment.  Basically, the higher the cumulative investment in a particular course of action, the more likely someone will feel justified in continuing to increase that investment, despite new evidence showing them that they’d be better off abandoning it and cutting their losses.  When it comes to trying to “convert” a dogmatic believer into a more rational free thinker, this irrational tendency severely impedes any chance of success, all the more so when that person has been investing themselves in the dogma for a long period of time, since they ultimately have far more to lose.  To put it another way, a person’s religion is often a huge part of their personal identity, so in order to accept evidence that refutes their beliefs, they would have to abandon a large part of themselves, which obviously makes that acceptance and intellectual honesty quite difficult.  Furthermore, if a person’s family or friends have all invested in the same false beliefs, then even if that person discovers that those beliefs are wrong, they risk being rejected by their family and friends and forever losing the relationships dearest to them.  We can also see how this escalation of commitment is further reinforced by the time equals truth fallacy mentioned earlier.

Negativity Bias

When I’ve heard various Christians proselytizing to myself or others, one tactic that I’ve seen used over and over again is the use of fear-mongering with theologically based threats of eternal punishment and torture.  If we don’t convert, we’re told, we’re doomed to burn in hell for eternity.  Their incessant use of this tactic suggests that it was likely effective on themselves contributing to their own conversion (it was in fact one of the reasons for my former conversion to Christianity).  Similar tactics have been used in some political campaigns in order to persuade voters by deliberately scaring them into taking one position over another.  Though this strategy is more effective on some than others, there is an underlying cognitive bias in all of us that contributes to its efficacy.  This is known as the negativity bias.  With this bias, information or experiences that are of a more negative nature will tend to have a greater effect on our psychological states and resulting behavior when compared to positive information or experiences that are equally intense.

People will remember threats to their well-being far more than they remember pleasurable experiences.  This looks like another simple survival strategy implemented in our brains.  Similar to my earlier hypothesis regarding the time equals truth heuristic, it is far more important to remember and avoid dangerous or life-threatening experiences than it is to remember and seek out pleasurable ones, all else being equal.  It only takes one bad experience to end a person’s life, whereas missing out on any single pleasurable experience is rarely critical.  Therefore, it makes sense as an evolutionary strategy to allocate more cognitive resources and memory to avoiding dangerous and negative experiences, and the negativity bias helps us to accomplish that.

Unfortunately, just as with the time equals truth bias, since our cultural evolution has involved us adopting certain beliefs that no longer pertain directly to our survival, the heuristic is often being executed improperly or in the wrong context.  This increases the chances that we will fall prey to adopting irrational, dogmatic belief systems when they are presented to us in a way that utilizes fear-mongering and various forms of threats to our well-being.  When it comes to conceiving of an overtly negative threat, can anyone imagine one more significant than the threat of eternal torture?  It is, by definition, supposed to be the worst scenario imaginable, and thus it is the most effective kind of threat to play on our negativity bias, and lead to irrational beliefs.  If the dogmatic believer is also convinced that their god can hear their thoughts, they’re also far less likely to think about their dogmatic beliefs critically, for they have no mental privacy to do so.

This bias in particular reminds me of Blaise Pascal’s famous wager, which basically asserts that it is better to believe in God than to risk the consequences of not doing so.  If God doesn’t exist, we suffer “only” a finite loss (according to Pascal).  If God does exist, then belief leads to an eternal reward, and disbelief leads to an eternal punishment.  Therefore, Pascal asserts, it is only logical to believe in God, since it is the safest position to adopt.  Unfortunately, Pascal’s premises are not sound, and therefore the argument fails.  For one, is belief in God sufficient to avoid the supposed eternal punishment, or does it require something more, such as other religious tenets, declarations, or rituals?  Second, which god should one believe in?  Thousands of gods have been proposed by various believers over several millennia, and there were obviously many more than we currently have records of, so Pascal’s wager merely narrows the choice down to several thousand known gods.  Third, even setting aside the problem of choosing the correct god, if it turned out that there was no god, would a life of belief not have carried a significant cost?  If the belief also involved a host of dogmatic moral prescriptions, rituals, and other specific ways to live one’s life, perhaps including the requirement to abstain from many pleasurable human experiences, this cost could be quite great.  Furthermore, if one applied Pascal’s wager to any theoretical possibility that posits an infinite loss over a finite one, one would be obliged to apply the same principle to all such possibilities, which is obviously irrational.  It is clear that Pascal hadn’t thought this one through very well, and I wouldn’t doubt that his judgement during this apologetic formulation was highly clouded by his own negativity bias (since the fear of punishment seems to be the primary focus of his “wager”).  As was mentioned earlier, we can see how effective this bias has been on a number of religious converts, and we need to be diligent about watching out for information presented to us with threatening strings attached, because it can easily cloud our judgement and lead to the adoption of irrational beliefs and dogma.
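
The many-gods objection can even be expressed as a toy expected-value calculation.  All of the numbers below are invented stand-ins (literal infinite payoffs make expected-value arithmetic degenerate, so large finite values are used instead, and the priors are arbitrary); the point is the symmetry of the result, not the particular values:

```python
# Invented finite stand-ins for Pascal's payoffs.
REWARD, PUNISHMENT, BELIEF_COST = 1_000_000, -1_000_000, -10

# Several mutually exclusive candidate gods, each punishing belief in a rival
# (or in no god at all), plus the possibility that no god exists.
candidates = ["god_A", "god_B", "god_C"]
p_each = 0.01                       # assumed prior for each candidate god
p_none = 1 - p_each * len(candidates)

def expected_value(belief):
    ev = sum(p_each * (REWARD if belief == god else PUNISHMENT)
             for god in candidates)
    if belief is not None:
        ev += p_none * BELIEF_COST  # finite cost of practicing a belief in vain
    return ev

# By symmetry, every candidate belief has the same expected value, so the
# wager by itself cannot choose among them.
for belief in candidates:
    print(belief, expected_value(belief))
```

Under these assumptions each candidate belief comes out identical, which is exactly the sense in which the wager "merely narrows it down" to the full field of proposed gods rather than selecting one.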

Belief Bias & Argument Proximity Effects

When we analyze arguments, we often judge their strength based on how plausible their conclusion is, rather than on how well the premises support that conclusion.  In other words, people tend to focus their attention on the conclusion of an argument, and if they think that the conclusion is likely to be true (based on their prior beliefs), this colors their perception of how strong or weak the argument itself is.  This belief bias is an incorrect way to analyze arguments: in logic, only the argument itself can be used to determine whether the conclusion follows, not the other way around.  This implies that what is intuitive to us is often incorrect, and thus our cognitive biases are constantly at odds with logic and rationality.  This is why it takes a lot of practice to learn how to apply logic effectively in one’s thinking.  It just doesn’t come naturally, even though we often think it does.

Another cognitive deficit in how people analyze arguments is what I like to call the argument proximity effect.  Basically, when strong arguments are presented alongside weak arguments supporting the same conclusion, the strong arguments will often appear weaker or less persuasive because of their proximity to, and association with, the weaker ones.  This is partly because if a person thinks that they can defeat the moderate or weak arguments, they will often believe that they could also defeat the stronger argument, if only given enough time.  It is as if the strong arguments become “tainted” by the weaker ones, even though the opposite should hold, since the arguments are independent of one another.  That is, when a person has a number of arguments to support their position, a combination of strong and weak arguments is always better than the strong arguments alone, because there are simply more arguments that need to be addressed and rebutted in the former case than in the latter.  Another reason for this effect is that the presence of weak arguments weakens the credibility of the person presenting them, and so the stronger arguments aren’t taken as seriously by the recipient.  Just as with our belief bias, this cognitive deficit is ultimately caused by not employing logic properly, if at all.  Making people aware of this cognitive flaw is only half the battle; once again, we also need to learn about logic and how to apply it effectively in our thinking.

To read part 3 of 3, click here.