The Open Mind

Cogito Ergo Sum


The illusion of Persistent Identity & the Role of Information in Identity


After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.

Personal Identity Concepts & Criteria

I think when most people talk about personal identity, they are referring to how they see themselves and how they see others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits.  Basically, they see it as what makes a person unique and in some way distinguishable from another person.  Even this rudimentary concept can be broken down into at least two parts, namely, how we see ourselves (self-ascribed identity) and how others see us (which we could call the inferred identity of someone else), since the two are likely to differ.  While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person.  Roughly speaking, this amounts to whatever conditions or properties are both necessary and sufficient for a person at one point in time and a person at another point in time to be considered the same person, with a temporal continuity between those points in time.

Usually the criterion put forward for this personal identity is some form of spatiotemporal and/or psychological continuity.  I certainly wouldn’t be the first person to point out that the question of which criterion is correct has already framed the debate with two assumptions: that a personal (numerical) identity exists in the first place, and that, if it does exist, its criterion is something that could be determined in some way.  While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common with all persons we’ve interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity, let alone a determinable criterion for it.  At best, I think that if one finds any kind of numerical identity for persons that persists over time, it is not going to be compatible with our intuitions, nor is it going to be applicable in any pragmatic way.

As I mention pragmatism, I am sympathetic to Parfit’s views in the sense that regardless of what one finds the criterion for numerical personal identity to be (if it exists), the only thing that really matters to us is psychological continuity anyway.  So even though Locke’s view, that psychological continuity (via memory) is the criterion for personal identity, was shown to rest on circular arguments (per Butler, Reid, and others), I still applaud his basic idea.  Locke seemed to be on the right track, in that psychological continuity (in some sense involving memory and consciousness) is really the essence of what we care about when defining persons, even if it can’t be used as a valid criterion in the way he proposed.

(Non) Persistence & Pragmatic Use of a Personal Identity Concept

I think that the search for, and long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity.  The pragmatic considerations behind the common and intuitive conceptions of personal identity have no doubt steered the debate over any possible criteria for helping to define it, and so we can still value those considerations even if a numerical personal identity doesn’t really exist (that is, even if it is nothing more than a pseudo-identity), and even if a diachronic numerical personal identity does exist but isn’t useful in any way.

If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment of time both physically and psychologically, then I tend to think that the self (that many people ascribe as the “agent” of our personal identity) is an illusion of some sort.  I tend to side more with Hume on this point (or at least James Giles’ fair interpretation of Hume) in that my views seem to be some version of a no-self or eliminativist theory of personal identity.  As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination.  This illusion results from our perceptions flowing from one to the next, with a barrage of changes taking place with this “self” over time that we simply don’t notice taking place — at least not without critical reflection on our past experiences of this ever-changing “self”.  The psychological continuity that Locke described seems to be the main driving force behind this illusory self since there is an overlap in the memories of the succession of persons.

I think one could say that if there is any numerical identity associated with the term “I” or “me”, it only exists for a short moment of time in one specific spatio-temporal slice, and then, as the next perceivable moment elapses, what used to be “I” becomes someone else, even if the new person that comes into being is still referred to as “I” or “me” by a person that possesses roughly the same configuration of matter in its body and brain as the previous person.  Since the neighboring identities have an overlap in accessible memory, including autobiographical memories, memories of past experiences generally, and memories pertaining to the evolving desires that motivate behavior, we shouldn’t expect this succession of persons to be noticed or perceived by the illusory self, because each identity has access to a set of memories that is sufficiently similar to the set of memories accessible to the previous or successive identity.  And this sufficient degree of similarity in those identities’ memories allows for a seemingly persistent autobiographical “self” with goals.

As for the pragmatic reasons for considering all of these “I”s and “me”s to be the same person and some singular identity over time, we can see that there is a causal dependency between each member of this “chain of spatio-temporal identities” that I think exists, and so treating that chain of interconnected identities as one being is extremely intuitive and also incredibly useful for accomplishing goals (which is likely the reason why evolution would favor brains that can intuit this concept of a persistent “self” and the near uni-directional behavior that results from it).  There is a continuity of memory and behaviors (even though both change over time, both in terms of the number of memories and their accuracy), and this continuity allows for a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences.  We behave as if we are a single person moving through time and space (and as if we are surrounded by other temporally extended single persons behaving in similar ways), and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent.  Quite simply, having those different identities referenced under one label and physically attached to or instantiated by something localized allows that pragmatic pseudo-identity to persist over time so that various goals (whether personal or interpersonal/societal) can be accomplished.

“The Persons Problem” and a “Speciation” Analogy

I came up with an analogy that I thought was very fitting to this concept.  One could analogize this succession of identities that get clumped into one bulk pragmatic pseudo-identity with the evolutionary concept of speciation.  For example, a sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species, due to the high degree of similarity between them all.  The apparent difficulty lies in the fact that, at some point after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly, to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations transpiring eventually leads to what we call a new species.  It’s difficult to define exactly when that speciation event happens (hence the species problem), and I think we have a similar problem with personal identity.  Where does it begin and end?  If personal identity changes over the course of a lifetime, when does one person become another?  I could think of “me” as the same “me” that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that “I” am a completely different person now when compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.).  There seems to have been an identity “speciation” event of some sort, even though it is hard to define exactly when it occurred.

Biologists have tried to solve their species problem by coming up with various criteria, at least for taxonomic purposes, but what they’ve wound up with at this point is several different criteria for defining a species that are each effective for different purposes (e.g. the biological species concept, the morphological species concept, the phylogenetic species concept, etc.), without any single “correct” answer, since they are all situationally more or less useful.  Similarly, some philosophers have had a persons problem that they’ve been trying to solve, and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).

The Role of Information in a Personal Identity Concept

Anyway, rather than attempt to solve the numerical personal identity problem, I think that philosophers need to focus more on the importance of the concept of information and how it can be used to arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically).  I think this is especially true once we consider some of the concerns that James DiGiovanna brought up regarding the integration of future AI into our society.

If all of the beliefs, behaviors, and causal driving forces in a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework, since they will be based more on the information content of the person’s identity, without putting as much importance on numerical identity or on our intuitions of persisting people (since those intuitions will be challenged by several kinds of foreseeable future AI scenarios).

To illustrate this point, I’ll address one of James DiGiovanna’s conundrums.  James asks us:

To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?

In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI — other than to appease humans that may be angry that they couldn’t themselves avoid punishment in this way, due to having a much slower and less effective means of reprogramming themselves.  I would say that the reprogrammed AI is guilty of the crime, but only if its reprogrammed memory still included information pertaining to having performed those past criminal behaviors.  However, if those “criminal memories” are now gone via the reprogramming then I’d say that the AI is not guilty of the crime because the information constituting its identity doesn’t match that of the criminal AI.  It would have no recollection of having committed the crime and so “it” would not have committed the crime since that “it” was lost in the reprogramming process due to the dramatic change in information that took place.

In the latter scenario, if the information constituting the identity of the destroyed AI was re-instantiated elsewhere, then I would say that it is in fact guilty of the crime, though not numerically guilty but rather qualitatively guilty (to differentiate between the numerical and qualitative personal identity concepts that are embedded in the concept of guilt).  If twelve duplicates of this information were instantiated into new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime.  What actions should be taken based on qualitative guilt?  I think it means that the AI should be punished, or more specifically that the judicial system should perform the reconditioning required to modify its behavior as if it had committed the crime (especially if the AI believes/remembers that it has committed the crime), for the better of society.  If this can be accomplished through reprogramming, then that would be the most rational thing to do, without any need for traditional forms of punishment.

We can analogize this with another thought experiment involving human beings.  If we imagine a human that has had its memories changed so that it believes it is Charles Manson and has all of Charles Manson’s memories and intentions, then that person should be treated as if they are Charles Manson, and thus incarcerated/punished accordingly, to rehabilitate them or to protect the other members of society.  This is assuming, of course, that we had reliable access to that kind of mind-reading knowledge.  If we did, the information constituting the identity of that person would be what is most important, not what the actual previous actions of the person were, because the “previous person” was someone else, due to that gross change in information.

Darwin’s Big Idea May Be The Biggest Yet


Back in 1859, Charles Darwin published his famous theory of evolution by natural selection, whereby inherent variations in the individual members of some population of organisms would eventually lead to speciation events, because those variations produce a differential in survival and reproductive success and thus lead to the natural selection of some subset of organisms within that population.  As Darwin explained in his On The Origin of Species:

If during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and I think this cannot be disputed; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, I think it would be a most extraordinary fact if no variation ever had occurred useful to each being’s own welfare, in the same way as so many variations have occurred useful to man. But if variations useful to any organic being do occur, assuredly individuals thus characterised will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance they will tend to produce offspring similarly characterised. This principle of preservation, I have called, for the sake of brevity, Natural Selection.

While Darwin’s big idea completely transformed biology by providing (for the first time in history) an incredibly robust explanation for the origin of the diversity of life on this planet, it has since inspired other theories pertaining to perhaps the three largest mysteries that humans have ever explored: the origin of life itself (not just the diversity of life after it had begun, which was the intended scope of Darwin’s theory), the origin of the universe (most notably, why the universe is the way it is and not some other way), and the origin of consciousness.

Origin of Life

In order to solve the first mystery (the origin of life itself), geologists, biologists, and biochemists are searching for plausible models of abiogenesis.  The general scheme of these models involves geologically driven chemical reactions that would have begun to incorporate certain kinds of energetically favorable organic chemistries, such that organic, self-replicating molecules eventually resulted.  Where Darwin’s idea of natural selection comes into play with life’s origin is in the origin and evolution of these self-replicating molecules.  First of all, in order for any molecule at all to build up in concentration, the conditions must be such that the reaction producing the molecule in question is more favorable than the reverse reaction, in which the product transforms back into the initial starting materials.  If merely one chemical reaction (out of a countless number of reactions occurring on the early earth) led to a self-replicating product, this would increasingly favor the production of that product, and thus self-replicating molecules themselves would be naturally selected for.  Once one of them was produced, there would have been a cascade of exponential growth, at least up to the limit set by the availability of the starting materials and energy sources present.

Now if we assume that at least some subset of these self-replicating molecules (if not all of them) had an imperfect fidelity in the copying process (which is highly likely) and/or underwent even a slight change after replication by reacting with other neighboring molecules (also likely), this would provide them with a means of mutation.  Mutations would inevitably lead to some molecules becoming more effective self-replicators than others, and then evolution through natural selection would take off, eventually leading to modern RNA/DNA.  So not only does Darwin’s big idea account for the evolution of diversity of life on this planet, but the basic underlying principle of natural selection would also account for the origin of self-replicating molecules in the first place, and subsequently the origin of RNA and DNA.
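This selection dynamic can be made concrete with a minimal toy simulation (a hypothetical sketch of my own, not a chemical kinetics model): each “replicator” is reduced to a single replication rate, copies inherit that rate with small random errors, and a resource cap culls all but the fastest replicators each generation.

```python
import random

def simulate_replicators(generations=50, cap=100, seed=1):
    """Toy model of imperfect self-replicators under selection.

    Each replicator is reduced to a single replication rate.  Copies
    inherit the parent's rate plus a small random 'mutation', and when
    the population exceeds the resource cap, only the fastest
    replicators persist -- a minimal stand-in for natural selection.
    """
    rng = random.Random(seed)
    population = [1.0]  # one ancestral replicator with rate 1.0
    for _ in range(generations):
        offspring = []
        for rate in population:
            # faster replicators leave more (imperfect) copies
            for _ in range(max(1, round(rate))):
                offspring.append(rate + rng.gauss(0, 0.05))
        # resource cap: only the fastest replicators survive
        population = sorted(population + offspring, reverse=True)[:cap]
    return population

final = simulate_replicators()
mean_rate = sum(final) / len(final)  # selection pushes this above the ancestral 1.0
```

Even though the copying errors are unbiased on average, the truncation at the resource cap ratchets the mean replication rate upward generation after generation, which is the core of the argument above.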

Origin of the Universe

Another grand idea that is gaining heavy traction in cosmology is inflationary cosmology, which posits that the early universe underwent a period of rapid expansion, and that, due to quantum mechanical fluctuations in the microscopically sized inflationary region, seed universes would have resulted, each one having slightly different properties, one of which would have expanded to become the universe that we live in.  Inflationary cosmology is currently heavily supported because it has led to a number of predictions, many of which have already been confirmed by observation (it explains many large-scale features of our universe, such as its homogeneity, isotropy, flatness, and other features).  What I find most interesting with inflationary theory is that it predicts the existence of a multiverse, whereby we are but one of an extremely large number of other universes (predicted to be on the order of 10^500, if not an infinite number), with each one having slightly different constants and so forth.

Once again, Darwin’s big idea, when applied to inflationary cosmology, would lead to the conclusion that our universe is the way it is because it was naturally selected to be that way.  The fact that its constants are within a very narrow range such that matter can even form, would make perfect sense, because even if an infinite number of universes exist with different constants, we would only expect to find ourselves in one that has the constants within the necessary range in order for matter, let alone life to exist.  So any universe that harbors matter, let alone life, would be naturally selected for against all the other universes that didn’t have the right properties to do so, including for example, universes that had too high or too low of a cosmological constant (such as those that would have instantly collapsed into a Big Crunch or expanded into a heat death far too quickly for any matter or life to have formed), or even universes that didn’t have the proper strong nuclear force to hold atomic nuclei together, or any other number of combinations that wouldn’t work.  So any universe that contains intelligent life capable of even asking the question of their origins, must necessarily have its properties within the required range (often referred to as the anthropic principle).

After our universe formed, the same principle would also apply to each galaxy and each solar system within those galaxies: because variations exist in each galaxy and within each constituent solar system (differential properties analogous to different genes in a gene pool), only those that have an acceptable range of conditions are capable of harboring life.  With over 10^22 stars in the observable universe (an unfathomably large number), and billions of years for different conditions to evolve within each solar system surrounding those many stars, it isn’t surprising that eventually the temperature and other conditions would be acceptable for liquid water and organic chemistries to occur in many of those solar systems.  Even if there was only one life-permitting planet per galaxy (on average), that would add up to over 100 billion life-permitting planets in the observable universe alone (with many orders of magnitude more in the non-observable universe).  So given enough time, and given some mechanism of variation (in this case, differences in star composition and dynamics), natural selection in a sense can also account for the evolution of some solar systems that do in fact have life-permitting conditions in a universe such as our own.
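The arithmetic here is easy to check.  Using the commonly cited round figures (all assumptions, and only order-of-magnitude estimates):

```python
# Order-of-magnitude estimates only (assumed round figures)
STARS_OBSERVABLE = 1e22        # ~10^22 stars in the observable universe
GALAXIES_OBSERVABLE = 1e11     # ~100 billion galaxies
LIFE_PLANETS_PER_GALAXY = 1    # pessimistic: one life-permitting planet each

stars_per_galaxy = STARS_OBSERVABLE / GALAXIES_OBSERVABLE  # ~10^11 stars
life_permitting_planets = GALAXIES_OBSERVABLE * LIFE_PLANETS_PER_GALAXY
# even the pessimistic assumption yields ~100 billion candidate planets
```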

Origin of Consciousness

The last significant mystery I’d like to discuss involves the origin of consciousness.  While there are many current theories pertaining to different aspects of consciousness, and while much research has been performed in the neurosciences, cognitive sciences, psychology, etc., on how the brain works and how it correlates to various aspects of the mind and consciousness, the brain sciences (neuroscience in particular) are in their relative infancy, and so there are still many questions that haven’t been answered yet.  One promising theory that has already been shown to account for many aspects of consciousness is Gerald Edelman’s theory of neuronal group selection (NGS), otherwise known as neural Darwinism (ND), which is a large-scale theory of brain function.  As one might expect from the name, the mechanism of natural selection is integral to this theory.  In ND, the basic idea consists of three parts, as summarized on Wikipedia:

  1. Anatomical connectivity in the brain occurs via selective mechanochemical events that take place epigenetically during development.  This creates a diverse primary neurological repertoire by differential reproduction.
  2. Once structural diversity is established anatomically, a second selective process occurs during postnatal behavioral experience through epigenetic modifications in the strength of synaptic connections between neuronal groups.  This creates a diverse secondary repertoire by differential amplification.
  3. Re-entrant signaling between neuronal groups allows for spatiotemporal continuity in response to real-world interactions.  Edelman argues that thalamocortical and corticocortical re-entrant signaling are critical to generating and maintaining conscious states in mammals.

In a nutshell, the basic differentiated structure of the brain that forms in early development is accomplished through cellular proliferation, migration, distribution, and branching processes that involve selection processes operating on random differences in the adhesion molecules that these processes use to bind one neuronal cell to another.  These crude selection processes result in a rough initial configuration that is for the most part fixed.  However, because there are a diverse number of sets of different hierarchical arrangements of neurons in various neuronal groups, there are bound to be functionally equivalent groups of neurons that are not equivalent in structure, but are all capable of responding to the same types of sensory input.  Because some of these groups should in theory be better than others at responding to some particular type of sensory stimuli, this creates a form of neuronal/synaptic competition in the brain, whereby those groups of neurons that happen to have the best synaptic efficiency for the stimuli in question are naturally selected over the others.  This in turn leads to an increased probability that the same network will respond to similar or identical signals in the future.  Each time this occurs, synaptic strengths increase in the most efficient networks for each particular type of stimuli, and this would account for a relatively quick level of neural plasticity in the brain.
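This “differential amplification” step can be caricatured in a few lines of code.  What follows is a hypothetical competitive-learning sketch, not Edelman’s actual model: several randomly wired “neuronal groups” respond to incoming stimuli, the best responder wins, and its “synaptic weights” are strengthened toward that stimulus, raising the chance it wins again.

```python
import random

def differential_amplification(stimuli, n_groups=5, rounds=200, lr=0.1, seed=0):
    """Winner-take-all caricature of neuronal group selection.

    Groups start with random synaptic weights; for each stimulus the
    group with the strongest response 'wins' and its weights are
    amplified toward that stimulus, making it more likely to respond
    to similar signals in the future.
    """
    rng = random.Random(seed)
    dim = len(stimuli[0])
    groups = [[rng.random() for _ in range(dim)] for _ in range(n_groups)]
    wins = [0] * n_groups
    for step in range(rounds):
        s = stimuli[step % len(stimuli)]
        responses = [sum(w * x for w, x in zip(g, s)) for g in groups]
        best = responses.index(max(responses))
        wins[best] += 1
        # strengthen the winning group's 'synapses' toward the stimulus
        groups[best] = [w + lr * (x - w) for w, x in zip(groups[best], s)]
    return wins

# two orthogonal 'sensory inputs'; a few groups come to dominate the rest
win_counts = differential_amplification([[1.0, 0.0], [0.0, 1.0]])
```

The point of the sketch is that nothing instructs any group to handle a given stimulus; repeated competition plus amplification alone concentrates responsibility for each stimulus in whichever group happened to respond best early on.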

The last aspect of the theory involves what Edelman called re-entrant signaling whereby a sampling of the stimuli from functionally different groups of neurons occurring at the same time leads to a form of self-organizing intelligence.  This would provide a means for explaining how we experience spatiotemporal consistency in our experience of sensory stimuli.  Basically, we would have functionally different parts of the brain, such as various maps in the visual centers that pertain to color versus others that pertain to orientation or shape, that would effectively amalgamate the two (previously segregated) regions such that they can function in parallel and thus correlate with one another producing an amalgamation of the two types of neural maps.  Once this re-entrant signaling is accomplished between higher order or higher complexity maps in the brain, such as those pertaining to value-dependent memory storage centers, language centers, and perhaps back to various sensory cortical regions, this would create an even richer level of synchronization, possibly leading to consciousness (according to the theory).  In all of the aspects of the theory, the natural selection of differentiated neuronal structures, synaptic connections and strengths and eventually that of larger re-entrant connections would be responsible for creating the parallel and correlated processes in the brain believed to be required for consciousness.  There’s been an increasing amount of support for this theory, and more evidence continues to accumulate in support of it.  In any case, it is a brilliant idea and one with a lot of promise in potentially explaining one of the most fundamental aspects of our existence.

Darwin’s Big Idea May Be the Biggest Yet

In my opinion, Darwin’s theory of evolution through natural selection was perhaps the most profound theory ever discovered.  I’d even say that it beats Einstein’s theory of relativity, because of its massive explanatory scope and carryover to other disciplines, such as cosmology, neuroscience, and even immunology (see Edelman’s Nobel-winning work on the immune system, where he showed that the immune system also works through natural selection, as opposed to some type of re-programming/learning).  Based on the basic idea of natural selection, we have been able to provide a number of robust explanations pertaining to why the universe is likely to be the way it is, how life likely began, how it evolved afterward, and possibly how life eventually evolved brains capable of being conscious.  It is truly one of the most fascinating principles I’ve ever learned about, and I’m honestly awestruck by its beauty, simplicity, and explanatory power.

The Origin and Evolution of Life: Part I


In the past, various people have argued that life originating at all, let alone evolving higher complexity over time, was thermodynamically unfavorable, due to the decrease in entropy involved in both circumstances, and thus was believed to violate the second law of thermodynamics.  For those unfamiliar with the second law, it basically asserts that the amount of entropy (often referred to as disorder) in a closed system tends to increase over time; to put it another way, the amount of energy available to do useful work in a closed system tends to decrease over time.  So it has been argued that since the origin of life and the evolution of greater biological complexity would entail decreases in entropy, these events are at best unfavorable (and therefore the result of highly improbable chance), or worse yet, altogether impossible.

We’ve known for quite some time now that these thermodynamic arguments aren’t valid, because earth isn’t a thermodynamically closed or isolated system; it receives a constant supply of energy from the sun.  Since the entropy increase produced by the sun far outweighs the decrease in entropy produced by all biological systems on earth, the net entropy of the entire system increases, which fits right in line with the second law, as we would expect.

However, even though the emergence and evolution of life on earth do not violate the second law and are thus physically possible, that still doesn’t show that they are probable processes.  What we need to know is how favorable the reactions are that are required for initiating and then sustaining these processes.  Several very important advancements have been made in abiogenesis over the last ten to fifteen years, with the collaboration of geologists and biochemists, and it appears that they are in fact not only possible but actually probable processes for a few reasons.

One reason is that the chemical reactions that living systems undergo produce a net increase in entropy as well, despite the drop in entropy associated with every cell and/or its arrangement with respect to other cells.  This is because all living systems give off heat with every favorable chemical reaction that drives the metabolism and perpetuation of those living systems.  This gain in entropy caused by heat loss more than compensates for the loss in entropy that results from the production and maintenance of all the biological components, whether lipids, sugars, nucleic acids, amino acids, or more complex proteins.  Beyond this, as more complexity arises during the evolution of cells and living systems, the entropy that those systems produce tends to increase even more, and so living systems with a higher level of complexity appear to produce a greater net entropy (on average) than less complex living systems.  Furthermore, once photosynthetic organisms evolved in particular, any entropy (heat) that they give off in the form of radiation ends up being of lower energy (infrared) than the photons given off by the sun to power those reactions in the first place.  Thus, we can see that living systems effectively dissipate the incoming energy from the sun, and energy dissipation is energetically favorable.
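The entropy bookkeeping behind this argument is simple to sketch.  Sunlight arrives as high-temperature radiation and the biosphere re-radiates the same energy as low-temperature infrared; since the entropy carried by a quantity of heat scales as Q/T, the outflow carries far more entropy than the inflow (the temperatures below are assumed round numbers):

```python
# Entropy balance for energy flowing through the biosphere.
# Assumed round numbers: sunlight at ~5800 K, re-emission at ~290 K.
T_SUN = 5800.0    # K, effective temperature of solar radiation
T_EARTH = 290.0   # K, effective re-emission temperature of earth
Q = 1.0           # J of solar energy absorbed and later re-radiated

entropy_in = Q / T_SUN      # J/K carried in by sunlight
entropy_out = Q / T_EARTH   # J/K carried out as infrared
net_entropy_exported = entropy_out - entropy_in  # > 0: second law satisfied
```

Per joule processed, roughly twenty times more entropy leaves than arrives, so any local decrease in entropy inside living systems is easily paid for by the export.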

Living systems seem to serve as a controllable channel of energy flow for that energy dissipation, just like lightning, the eye of a hurricane, or a tornado, where high energy states in the form of charge gradients or pressure or temperature gradients end up falling to a lower energy state by dissipating that energy through specific focused channels that spontaneously form (e.g. individual concentrated lightning bolts, the eye of a hurricane, vortices, etc.).  These channels for energy flow are favorable and form because they allow the energy to be dissipated faster, since the channels are initiated by some direction of energy flow that is able to self-amplify into a path of decreasing resistance for that energy dissipation.  Life, and the metabolic processes involved with it, seems to direct energy flow in ways very similar to these other naturally arising processes in non-living physical systems.  Interestingly enough, a relevant hypothesis has been proposed for why consciousness and eventually self-awareness would have evolved (beyond the traditional reasons proposed by natural selection).  If organisms can evolve the ability to predict where energy is going to flow and where an energy dissipation channel will form (or can form more effective channels themselves), then conscious organisms can behave in ways that dissipate energy even faster (and also catalyze more entropy production), which shows why certain forms of biological complexity such as consciousness, memory, etc., would have also been favored from a thermodynamic perspective.

Thus, the origin of life as well as the evolution of biological complexity appears to be increasingly favored by the second law, which suggests a possible fundamental physical driving force behind the origin and evolution of life.  Basically, living systems appear to be entropy engines and catalytic energy dissipation channels, and these engines and channels produce entropy at a greater rate than the planet otherwise would in the absence of that life; this points to at least one possible driving force behind life, namely, the second law of thermodynamics.  So ironically, not only does the origin and evolution of life not violate the second law of thermodynamics, but it actually seems to be an inevitable (or at least favorable) result of it.  Some of these concepts are still being developed in various theories and require further testing to better validate them, but they are supported by well-established physics and by consistent and sound mathematical models.

Perhaps the most poetic concept I’ve recognized in these findings is that life is effectively speeding up the heat death of the universe.  That is, the second law of thermodynamics suggests that the universe will eventually lose all of its useful energy when all the stars burn out and all matter eventually spreads out and decays into lower and lower energy photons, and thus the universe is destined to undergo a heat death.  Life, because it is producing entropy faster than the universe otherwise would in the absence of that life, is actually speeding up this inevitable death of the universe, which is quite fascinating when you think about it.  At the very least, it should give a new perspective to those who ask the question “what is the meaning or purpose of life?”  Even if we don’t think it is proper to think of life as having any kind of objective purpose in the universe, what life is in fact doing is accelerating the death of not only itself, but of the universe as a whole.  Personally, this further reinforces the idea that we should all ascribe our own meaning and purpose to our lives, because we should be enjoying the finite amount of time that we have, not only as individuals, but as a part of the entire collective life that exists in our universe.

To read about the newest and most promising discoveries that may explain how life got started in the first place, read part two here.

Knowledge: An Expansion of the Platonic Definition

with 12 comments

In the first post I ever wrote on this blog, titled: Knowledge and the “Brain in a Vat” scenario, I discussed some elements concerning the Platonic definition of knowledge, that is, that knowledge is ultimately defined as “justified true belief”.  I further refined the Platonic definition (in order to account for the well-known Gettier Problem) such that knowledge could be better described as “justified non-coincidentally-true belief”.  Beyond that, I also discussed how one’s conception of knowledge (or how it should be defined) should consider the possibility that our reality may be nothing more than the product of a mad scientist feeding us illusory sensations/perceptions with our brain in a vat, and thus, that how we define things and adhere to those definitions plays a crucial role in our conception and mutual understanding of any kind of knowledge.  My concluding remarks in that post were:

“While I’m aware that anything discussed about the metaphysical is seen by some philosophers to be completely and utterly pointless, my goal in making the definition of knowledge compatible with the BIV scenario is merely to illustrate that if knowledge exists in both “worlds” (and our world is nothing but a simulation), then the only knowledge we can prove has to be based on definitions — which is a human construct based on hierarchical patterns observed in our reality.”

While my views on what knowledge is or how it should be defined have changed somewhat in the three years or so since I wrote that first blog post, in this post I’d like to elaborate on this key sentence, specifically with regard to how knowledge is ultimately dependent on the recall and use of previously observed patterns in our reality, as I believe this is the most important aspect of how to define knowledge.  After making a few related comments on another blog, I decided to elaborate on some of those comments accordingly.

I’ve elsewhere mentioned how there is a plethora of evidence that suggests that intelligence is ultimately a product of pattern recognition (1, 2, 3).  That is, if we recognize patterns in nature and then commit them to memory, we can later use those remembered patterns to our advantage in order to accomplish goals effectively.  The more patterns that we can recognize and remember, specifically those that do in fact correlate with reality (as opposed to erroneously “recognized” patterns that are actually non-existent), the better our chances of predicting the consequences of our actions accurately, and thus the better chances we have at obtaining our goals.  In short, the more patterns that we can recognize and remember, the greater our intelligence.  It is therefore no coincidence that intelligence tests are primarily based on gauging one’s ability to recognize patterns (e.g. solving Raven’s Progressive Matrices, puzzles, etc.).

To emphasize the role of pattern recognition as it applies to knowledge, if we use my previously modified Platonic definition of knowledge, that is,  that knowledge is defined as “justified, non-coincidentally-true belief”, then I must break down the individual terms of this definition as follows, starting with “belief”:

  • Belief = Recognized patterns of causality that are stored into memory for later recall and use.
  • Non-Coincidentally-True = The belief positively and consistently correlates with reality, and thus not just through luck or chance.
  • Justified = Empirical evidence exists to support said belief.

So in summary, I have defined knowledge (more specifically) as:

“Recognized patterns of causality that are stored into memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by empirical evidence (e.g. successful predictions made and/or goals accomplished through the use of said recalled patterns)”.

This means that if we believe something to be true that is unfalsifiable (such as religious beliefs that rely on faith), it has not met the justification criterion and thus fails to count as knowledge (even if it is still considered a “belief”).  Also, if we are able to make a successful prediction with the patterns we’ve recognized, yet are only able to do so once, then due to the lack of consistency we likely just got lucky and didn’t actually identify a pattern that correlates with reality, so this too would fail to count as knowledge.  Finally, one should also note that the recognized patterns were not specifically defined as “consciously” recognized/remembered, nor was it specified that the patterns couldn’t be innately acquired/stored in memory (through DNA-coded or other pre-sensory neural developmental mechanisms).  Thus, even procedural knowledge like learning to ride a bike or other forms of “muscle memory” used to complete a task, or any innate form of knowledge (acquired before/without sensory input), would be an example of unconscious or implicit knowledge that still fulfills the definition I’ve given above.  In the case of unconscious/implicit knowledge, we have to accept that “beliefs” can also be unconscious/implicit (in order to remain consistent with the definition I’ve chosen), and I don’t see this as being a problem at all.  One just has to keep in mind that when people use the term “belief”, they are likely referring only to those in our consciousness (a subset of all the beliefs that exist), which is still consistent with the definition laid out here.
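To make these failure cases concrete, here is a toy formalization in Python.  The `Belief` class, the `min_trials` and `min_accuracy` thresholds, and the example beliefs are all my own illustrative inventions, not a standard model from epistemology; a real account of justification obviously can’t be reduced to an accuracy threshold, but the sketch shows how each of the three criteria can independently disqualify a belief from counting as knowledge.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A remembered pattern of causality (toy model, hypothetical fields)."""
    pattern: str
    falsifiable: bool          # can empirical evidence bear on it at all?
    predictions_made: int      # times the pattern was used to predict
    predictions_correct: int   # times those predictions succeeded

def is_knowledge(b: Belief, min_trials: int = 3, min_accuracy: float = 0.9) -> bool:
    # Justified: empirical evidence must be able to support the belief.
    if not b.falsifiable or b.predictions_made == 0:
        return False
    # Non-coincidentally true: success must be consistent, not one lucky hit.
    if b.predictions_made < min_trials:
        return False
    return b.predictions_correct / b.predictions_made >= min_accuracy

faith_claim = Belief("unfalsifiable doctrine", falsifiable=False,
                     predictions_made=0, predictions_correct=0)
lucky_guess = Belief("one-off success", falsifiable=True,
                     predictions_made=1, predictions_correct=1)
tested_pattern = Belief("sunrise follows night", falsifiable=True,
                        predictions_made=100, predictions_correct=100)

print(is_knowledge(faith_claim))     # unjustified -> False
print(is_knowledge(lucky_guess))     # one lucky hit, inconsistent -> False
print(is_knowledge(tested_pattern))  # justified and consistent -> True
```

The point of the sketch is just that “belief” is the data structure, while justification and non-coincidental truth are checks applied to it; a belief can exist, and even be true, without passing them.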

This is how I prefer to define “knowledge”, and I think it is a robust definition that successfully solves many (though certainly not all) of the philosophical problems that one tends to encounter in epistemology.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 3 of 3)

leave a comment »

This is part 3 of 3 of this post.  Click here to read part 2.

Reactive Devaluation

Studies have shown that if a claim is believed to have come from a friend or ally, the person receiving it will be more likely to think that it has merit and is truthful.  Likewise, if it is believed to have come from an enemy or some member of the opposition, it is significantly devalued.  This bias is known as reactive devaluation.  A few psychologists determined that the likeability of the source of a message is one of the key factors involved with this effect: people will value information coming from a likeable source more than from a source they don’t like, to roughly the same degree that they would value information coming from an expert versus a non-expert.  This is quite troubling.  We’ve often heard about how political campaigns and certain debates are seen more as popularity contests than anything else, and this bias is likely a primary reason why this is often the case.

Unfortunately, a large part of society has painted a nasty picture of atheism, skepticism, and the like.  As it turns out, people who do not believe in a god (mainly the Christian god) are the least trusted minority in America, according to a sociological study performed a few years ago (Johanna Olexy and Lee Herring, “Atheists Are Distrusted: Atheists Identified as America’s Most Distrusted Minority, According to Sociological Study,” American Sociological Association News, May 3, 2006).  Regarding the fight against dogmatism, the rational thinker needs to carefully consider how they can improve their likeability to the recipients of their message, if nothing else by going above and beyond to remain kind and respectful and to maintain a positive and uplifting attitude while presenting their argument.  In any case, the person arguing will always be up against any societal expectations and preferences that may reduce their likeability, so there’s also a need to remind others, as well as ourselves, about what is most important when discussing an issue: the message rather than the messenger.

The Backfire Effect & Psychological Reactance

There have already been several cognitive biases mentioned that interfere with people’s abilities to accept new evidence that contradicts their beliefs.  To make matters worse, psychologists have discovered that people often react to disconfirming evidence by actually strengthening their beliefs.  This is referred to as the backfire effect.  This can make refutations futile most of the time, but there are some strategies that help to combat this bias.  The bias is largely exacerbated by a person’s emotional involvement and the fact that people don’t want to appear to be unintelligent or incorrect in front of others.  If the debating parties let tempers cool down before resuming the debate, this can be quite helpful regarding the emotional element.  Additionally, if a person can show their opponent how accepting the disconfirming evidence will benefit them, they’ll be more likely to accept it.  There are times when some of the disconfirming evidence mentioned is at least partially absorbed by the person, and they just need some more time in a less tense environment to think things over a bit.  If the situation gets too heated, or if a person risks losing face with peers around them, the ability to persuade that person only decreases.

Another similar cognitive bias is referred to as reactance, which is basically a motivational reaction to instances of perceived infringement of behavioral freedoms.  That is, if a person feels that someone is taking away their choices or limiting the number of them, such as in cases where they are being heavily pressured into accepting a different point of view, they will often respond defensively.  They may feel that freedoms are being taken away from them, and it is only natural for a person to defend whatever freedoms they see themselves having or potentially losing.  Whenever one uses reverse psychology, they are in fact playing on at least an implicit knowledge of this reactance effect.  Does this mean that rational thinkers should use reverse psychology to persuade dogmatists to accept reason and evidence?  I personally don’t think that this is the right approach, because it could also backfire, and I would prefer to be honest and straightforward, even if this results in less efficacy.  At the very least, however, one needs to be aware of this bias and needs to be careful with how they phrase their arguments and how they present the evidence, and to maintain a calm and respectful attitude during these discussions, so that the other person doesn’t feel the need to defend themselves at the cost of ignoring evidence.

Pareidolia & Other Perceptual Illusions

Have you ever seen faces of animals or people while looking at clouds, shadows, or other similar situations?  The brain has many pattern recognition modules and will often respond erroneously to vague or even random stimuli, such that we perceive something familiar, even when it isn’t actually there.  This is called pareidolia, and it is a type of apophenia (i.e. seeing patterns in random data).  Not surprisingly, many people have reported instances of seeing various religious imagery and so forth, most notably the faces of prominent religious figures, in ordinary phenomena.  People have also erroneously heard “hidden messages” in records when playing them in reverse, and similar illusory perceptions.  This results from the fact that the brain often fills in gaps, and if one expects to see a pattern (even if unconsciously driven by some emotional significance), the brain’s pattern recognition modules can “over-detect”, producing false positives through excessive sensitivity and by conflating unconscious imagery with one’s actual sensory experiences, leading to distorted perceptions.  It goes without saying that this cognitive bias is perfect for reinforcing superstitious or supernatural beliefs, because what people tend to see or experience from this effect are those things that are most emotionally significant to them.  After it occurs, people believe that what they saw or heard must have been real and therefore significant.

Most people are aware that hallucinations occur in some people from time to time, under certain neurological conditions (generally caused by an imbalance of neurotransmitters).  A bias like pareidolia can effectively produce similar experiences, but without requiring the more specific neurological conditions that a textbook hallucination requires.  That is, the brain doesn’t need to be in any drug-induced state, nor does it need to be physically abnormal in any way, for this to occur, making it much more common than hallucinations.  Since pareidolia is also compounded with one’s confirmation bias, and since the brain is constantly implementing cognitive dissonance reduction mechanisms in the background (as mentioned earlier), this can also result in groups of people having the same illusory experience, specifically if the group shares the same unconscious emotional motivations for “seeing” or experiencing something in particular.  These circumstances, along with the power of suggestion from even just one member of the group, can lead to what are known as collective hallucinations.  Since collective hallucinations like these inevitably lead to a feeling of corroboration and confirmation between the various people experiencing them (say, seeing a “spirit” or “ghost” of someone significant), they can reinforce one another’s beliefs, despite the fact that the experience was based on a cognitive illusion largely induced by powerful human emotions.  It’s likely no coincidence that this type of group phenomenon seems to occur only with members of a religion or cult, specifically those that are in a highly emotional and synchronized psychological state.  There have been many cases of large groups of people from various religions around the world claiming to see some prominent religious figure or other supernatural being, illustrating just how universal this phenomenon is within a highly emotional group context.

Hyperactive Agency Detection

There is a cognitive bias that is somewhat related to the aforementioned perceptual illusion biases which is known as hyperactive agency detection.  This bias refers to our tendency to erroneously assign agency to events that happen without an agent causing them.  Human beings all have a theory of mind that is based in part on detecting agency, where we assume that other people are causal agents in the sense that they have intentions and goals just like we do.  From an evolutionary perspective, we can see that our ability to recognize that others are intentional agents allows us to better predict another person’s behavior which is extremely beneficial to our survival (notably in cases where we suspect others are intending to harm us).

Unfortunately, our brain’s ability to detect agency is hyperactive in that it errs on the side of caution by over-detecting possible agency, as opposed to under-detecting (which could negatively affect our survival prospects).  As a result, people will often ascribe agency to things and events in nature that have no underlying intentional agent.  For example, people often anthropomorphize (or implicitly ascribe agency to) machines and get angry when they don’t work properly (such as an automobile that breaks down while driving to work), often with the feeling that the machine is “out to get us”, is “evil”, etc.  Most of the time, if we have feelings like this, they are immediately checked and balanced by our rational minds, which remind us that “it’s only a car”, or “it’s not conscious”, etc.  Similarly, when natural disasters or other similar events occur, our hyperactive agency detection snaps into gear, trying to find “someone to blame”.  This cognitive bias explains quite well, at least as a contributing factor, why so many ancient cultures assumed that there were gods that caused earthquakes, lightning, thunder, volcanoes, hurricanes, etc.  Quite simply, they needed to ascribe the events as having been caused by someone.

Likewise, if we are walking in a forest and hear some strange sounds in the bushes, we may often assume that some intentional agent is present (whether another person or other animal), even though it may simply be the wind that is causing the noise.  Once again, we can see in this case the evolutionary benefit of this agency detection, even with the occasional “false positive”: the noise in the bushes could very well be a predator or other source of danger, so it is usually better for our brains to assume that the noise comes from an intentional agent.  It’s also entirely possible that even in cases where the source of the sound was determined to be the wind, early cultures may have adopted an intentional stance toward the wind itself (e.g. a “wind” god, etc.).  In all of these cases, we can see how our hyperactive agency detection can lead to irrational, theistic and other supernatural belief systems, and we need to be aware of such a bias when evaluating our intuitions of how the world works.
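This asymmetry between cheap false alarms and potentially fatal misses can be sketched with a toy expected-cost comparison.  The probability and cost numbers below are purely illustrative assumptions, chosen only to show the shape of the trade-off:

```python
# Toy error-management sketch: why over-detecting agents can be the rational
# default when misses are far costlier than false alarms. All numbers are
# illustrative assumptions, not measured quantities.

P_PREDATOR = 0.05        # assumed prior probability the rustle is a predator
COST_MISS = 1000.0       # assumed cost of ignoring a real predator (possibly fatal)
COST_FALSE_ALARM = 1.0   # assumed cost of fleeing from mere wind (wasted energy)

def expected_cost(assume_agent: bool) -> float:
    if assume_agent:
        # Always flee: pay the false-alarm cost whenever it was just wind.
        return (1 - P_PREDATOR) * COST_FALSE_ALARM
    # Always ignore: pay the miss cost whenever it really is a predator.
    return P_PREDATOR * COST_MISS

print(expected_cost(assume_agent=True))   # average cost of hyperactive detection
print(expected_cost(assume_agent=False))  # average cost of under-detection
# Even at a small predator prior, assuming an agent is far cheaper on average,
# so selection plausibly favors a hair-trigger agency detector.
```

Under these assumed numbers the hyperactive strategy costs far less on average, which is the evolutionary logic behind erring on the side of detecting agents.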

Risk Compensation

It is only natural that the more a person feels protected from various harms, the less carefully they tend to behave, and vice versa.  This tendency, sometimes referred to as risk compensation, is often harmless because, generally speaking, if one is well protected from harm, they do have less to worry about than one who is not, and thus they can take greater relative risks as a result of that larger cushion of safety.  Where this tends to cause a problem, however, is in cases where the perceived level of protection is far higher than it actually is.  The worst-case scenario that comes to mind is a person who appeals to the supernatural and thinks that no harm can come to them because of some belief in a supernatural source of protection (let alone one that they believe provides the most protection possible).  In this scenario, their risk compensation is biased to a negative extreme, and they will be more likely to behave irresponsibly because they don’t have realistic expectations of the consequences of their actions.  In a post-enlightenment society, we’ve acquired a higher responsibility to heed the knowledge gained from scientific discoveries so that we can behave more rationally as we learn more about the consequences of our actions.

It’s not at all difficult to see how the effects of various religious dogma on one’s level of risk compensation can inhibit a society from making rational, responsible decisions.  We have several serious issues plaguing modern society, including those related to environmental sustainability, climate change, pollution, war, etc.  If believers in the supernatural have an attitude that everything is “in God’s hands” or that “everything will be okay” because of their belief in a protective god, they are far less likely to take these kinds of issues seriously, because they fail to acknowledge the obvious harm to everyone in society.  What’s worse is that not only are dogmatists reducing their own safety, but they’re also reducing the safety of their children, and of everyone else in society who is trying to behave rationally with realistic expectations.  If we had two planets, one for rational thinkers and one for dogmatists, this would no longer be everyone’s problem.  The reality is that we share one planet, and thus we all suffer the consequences of any one person’s irresponsible actions.  If we want to make more rational decisions, especially those related to the safety of our planet’s environment, society, and posterity, people need to adjust their risk compensation such that it is based on empirical evidence using a rational, proven scientific methodology.  This point clearly illustrates the need to phase out supernatural beliefs, since false beliefs that don’t correspond with reality can be quite dangerous to everybody, regardless of who holds them.  In any case, when one is faced with the choice between knowledge and ignorance, history has demonstrated which choice is best.

Final thoughts

The fact that we are living in an information age with increasing global connectivity, and the fact that we are living in a society that is becoming increasingly reliant on technology, will likely help to combat the ill effects of these cognitive biases.  These factors are making it increasingly difficult to hide oneself from the evidence and the proven efficacy of using the scientific method in order to form accurate beliefs regarding the world around us.  Societal expectations are also changing as a result of these factors, and it is becoming less and less socially acceptable to be irrational and dogmatic, and to not take scientific methodology seriously.  In all of these cases, having knowledge about these cognitive biases will help people to combat them.  Even if we can’t eliminate the biases, we can safeguard ourselves by preparing for any potential ill effects that those biases may produce.  This is where reason and science come into play, providing us with the means to counter our cognitive biases by applying a rational skepticism to all of our beliefs, and by using a logical and reliable methodology based on empirical evidence to arrive at beliefs that we can be justifiably confident (though never certain) in.  As Darwin once said, “Ignorance more frequently begets confidence than does knowledge; it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.”  Science has constantly been ratcheting our way toward a better understanding of the universe we live in, despite the fact that the results we obtain from science are often at odds with our own intuitions about how we think the world is.  Only relatively recently has science started to provide us with information regarding why our intuitions are so often at odds with the scientific evidence.

We’ve made huge leaps within neuroscience, cognitive science, and psychology, and these discoveries are giving us the antidote that we need in order to reduce or eliminate irrational and dogmatic thinking.  Furthermore, those who study logic have an additional advantage in that they are more likely to spot fallacious arguments that are used to support this or that belief, and thus they are less likely to be tricked or persuaded by arguments that often appear to be sound and strong, but actually aren’t.  People fall prey to fallacious arguments all the time, and it is mostly because they haven’t learned how to spot them (which courses in logic and other reasoning can help to mitigate).  Thus, overall I believe it is imperative that we integrate knowledge concerning our cognitive biases into our children’s educational curriculum, starting at a young age (in a format that is understandable and engaging, of course).  My hope is that if we start to integrate cognitive education (and complementary courses in logic and reasoning) as a part of our foundational education curriculum, we will see a cascade effect of positive benefits that will guide our society more safely into the future.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 2 of 3)

leave a comment »

This is part 2 of 3 of this post.  Click here to read part 1.

The Feeling of Certainty & The Overconfidence Effect

There are a lot of factors that affect how certain we feel that one belief or another is in fact true.  One factor that affects this feeling of certainty is what I like to call the time equals truth fallacy.  With this cognitive bias, we tend to feel more certain about our beliefs as time progresses, despite not gaining any new evidence to support those beliefs.  Since time spent with a particular belief is the only factor involved here, it will have greater effects on those who are relatively isolated or sheltered from new information.  So if a person has been indoctrinated with a particular set of beliefs (let alone irrational beliefs), and they are effectively “cut off” from any sources of information that could serve to refute those beliefs, it will become increasingly difficult to persuade them later on.  This situation sounds all too familiar, as it is often the case that dogmatic groups will isolate themselves and shelter their members from outside influences for this very reason.  If the group can isolate their members for long enough (or the members actively isolate themselves), the dogma effectively gets burned into them, and the brainwashing is far more successful.  That this cognitive bias has been exploited by various dogmatic/religious leaders throughout history is hardly surprising, considering how effective it is.

Though I haven’t researched or come across any proposed evolutionary reasons behind the development of this cognitive bias, I do have at least one hypothesis pertaining to its origin.  I’d say that this bias may have resulted from the fact that the beliefs that matter most (from an evolutionary perspective) are those that are related to our survival goals (e.g. finding food sources or evading a predator).  So naturally, any beliefs that didn’t cause noticeable harm to an organism weren’t correlated with harm, and thus were more likely to be correct (i.e. keep using whatever works and don’t fix what ain’t broke).  However, once we started adding many more beliefs to our repertoire, specifically those that didn’t directly affect our survival (including many beliefs in the supernatural), the same cognitive rules and heuristics were still being applied as before, although these new types of beliefs (when false) haven’t been naturally selected against, because they haven’t been detrimental enough to our survival (at least not yet, or not enough to force a significant evolutionary change).  So once again, evolution may have produced this bias for very advantageous reasons, but it is a sub-optimal heuristic (as always) and one that has become a significant liability after we started evolving culturally as well.

Another related cognitive bias, and one that is quite well established, is the overconfidence effect, whereby a person’s subjective feeling of confidence in his or her judgements or beliefs is predictably higher than the actual objective accuracy of those judgements and beliefs.  In a nutshell, people tend to have far more confidence in their beliefs and decisions than is warranted.  A common example cited to illustrate the intensity of this bias pertains to people taking certain quizzes, claiming to be “99% certain” about their answers to certain questions, and then finding out afterwards that they were wrong about half of the time.  In other cases, this overconfidence effect can be even worse.
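The quiz example can be expressed as a simple calibration check, comparing stated confidence against measured accuracy.  The confidence value and answer data below are invented to mirror the “99% certain but right about half the time” pattern, not taken from any actual study:

```python
# Toy calibration check: compare stated confidence against actual accuracy.
# The data are invented to mirror the "99% certain, right ~half the time"
# quiz example; a real calibration study would use many subjects and items.

stated_confidence = 0.99
answers_correct = [True, False, True, False, True, False, True, False]

accuracy = sum(answers_correct) / len(answers_correct)
overconfidence_gap = stated_confidence - accuracy

print(f"Stated confidence:  {stated_confidence:.0%}")
print(f"Actual accuracy:    {accuracy:.0%}")
print(f"Overconfidence gap: {overconfidence_gap:.0%}")
# A positive gap means confidence outruns accuracy, i.e. overconfidence.
```

A well-calibrated person would show a gap near zero: among claims they are 99% sure of, about 99% would turn out to be true.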

In a sense, the feeling of certainty is like an emotion, which, like our other emotions, occurs as a natural reflex, regardless of the cause, and independently of reason.  Just like other emotions such as anger, pleasure, or fear, the feeling of certainty can be produced by a seizure, certain drugs, and even electrical stimulation of certain regions of the brain.  In all of these cases, even when no particular beliefs are being thought about, the brain can produce a feeling of certainty nevertheless.  Research has shown that the feeling of knowing or certainty can also be induced through various brainwashing and trance-inducing techniques such as high repetition of words, rhythmic music, sleep deprivation (or fasting), and other types of social/emotional manipulation.  It is hardly a coincidence that many religions often employ a number of these techniques within their repertoire.

The most important point to take away from learning about these “confidence/certainty” biases is that the feeling of certainty is not a reliable way of determining the accuracy of one’s beliefs.  Furthermore, one must realize that no matter how certain we may feel about a particular belief, we could be wrong, and oftentimes are.  Dogmatic belief systems such as those found in many religions are often propagated by a misleading and mistaken feeling of certainty, even if that feeling is more potent than any ever experienced before.  Oftentimes, when people subscribing to these dogmatic belief systems are questioned about the lack of empirical evidence supporting their beliefs, even if all of their arguments have been refuted, they simply reply by saying “I just know.”  Irrational, entirely unjustified responses like these illustrate the dire need for people to become aware of just how fallible their feeling of certainty can be.

Escalation of Commitment

If a person has invested their whole life in some belief system, then even if they encounter undeniable evidence that their beliefs are wrong, they are more likely to ignore it or rationalize it away than to modify their beliefs, and thus they will likely continue investing more time and energy in those false beliefs.  This is due to an effect known as escalation of commitment.  Basically, the higher the cumulative investment in a particular course of action, the more likely someone will feel justified in continuing to increase that investment, despite new evidence showing them that they’d be better off abandoning that investment and cutting their losses.  When it comes to trying to “convert” a dogmatic believer into a more rational free thinker, this irrational tendency severely impedes any chance of success, all the more so when that person has been invested in the dogma for a long period of time, since they ultimately have a lot more to lose.  To put it another way, a person’s religion is often a huge part of their personal identity, so regardless of any undeniable evidence presented that refutes their beliefs, accepting that evidence means abandoning a large part of themselves, which obviously makes such acceptance and intellectual honesty quite difficult.  Furthermore, if a person’s family or friends have all invested in the same false beliefs, then even if that person discovers that those beliefs are wrong, they risk their entire family and/or friends rejecting them, and thus forever losing the relationships dearest to them.  We can also see how this escalation of commitment is further reinforced by the time equals truth fallacy mentioned earlier.

Negativity Bias

When I’ve heard various Christians proselytizing to me or others, one tactic that I’ve seen used over and over again is fear-mongering with theologically based threats of eternal punishment and torture.  If we don’t convert, we’re told, we’re doomed to burn in hell for eternity.  Their incessant use of this tactic suggests that it was likely effective on them, contributing to their own conversion (it was in fact one of the reasons for my own former conversion to Christianity).  Similar tactics have been used in some political campaigns to persuade voters by deliberately scaring them into taking one position over another.  Though this strategy is more effective on some than others, there is an underlying cognitive bias in all of us that contributes to its efficacy.  This is known as the negativity bias.  With this bias, information or experiences of a more negative nature tend to have a greater effect on our psychological states and resulting behavior than positive information or experiences of equal intensity.

People will remember threats to their well-being far more readily than they remember pleasurable experiences.  This looks like another example of a simple survival strategy implemented in our brains.  Similar to my earlier hypothesis regarding the time equals truth heuristic, it is far more important to remember and avoid dangerous or life-threatening experiences than it is to remember and seek out pleasurable ones, all else being equal.  It only takes one bad experience to end a person’s life, whereas missing out on any particular pleasurable experience is rarely so costly.  Therefore, it makes sense as an evolutionary strategy to allocate more cognitive resources and memory toward avoiding dangerous and negative experiences, and a negativity bias helps us to accomplish that.

Unfortunately, just as with the time equals truth bias, since our cultural evolution has led us to adopt certain beliefs that no longer pertain directly to our survival, this heuristic is often executed improperly or in the wrong context.  This increases the chances that we will fall prey to adopting irrational, dogmatic belief systems when they are presented to us in a way that utilizes fear-mongering and various forms of threats to our well-being.  When it comes to conceiving of an overtly negative threat, can anyone imagine one more significant than the threat of eternal torture?  It is, by definition, supposed to be the worst scenario imaginable, and thus it is the most effective kind of threat to play on our negativity bias, leading to irrational beliefs.  If the dogmatic believer is also convinced that their god can hear their thoughts, they’re also far less likely to think about their dogmatic beliefs critically, for they have no mental privacy in which to do so.

This bias in particular reminds me of Blaise Pascal’s famous Wager, which basically asserts that it is better to believe in God than to risk the consequences of not doing so.  If God doesn’t exist, we suffer “only” a finite loss (according to Pascal).  If God does exist, then one’s belief leads to an eternal reward, and one’s disbelief leads to an eternal punishment.  Therefore, Pascal asserts, it is only logical to believe in God, since it is the safest position to adopt.  Unfortunately, Pascal’s premises are questionable, and so the argument is unsound.  For one, is belief in God sufficient to avoid the supposed eternal punishment, or does it require more than that, such as adherence to other religious tenets, declarations, or rituals?  Second, which god should one believe in?  Thousands of gods have been proposed by various believers over several millennia, and there were obviously many more than we currently have records of, so Pascal’s Wager merely narrows the choice down to several thousand known gods.  Third, even if we didn’t have to worry about choosing the correct god, if it turned out that there was no god, would a life spent with that belief not have carried a significant cost?  If the belief also involved a host of dogmatic moral prescriptions, rituals, and other specific ways to live one’s life, perhaps including the requirement to abstain from many pleasurable human experiences, this cost could be quite great.  Furthermore, if one applies Pascal’s Wager to any theoretical possibility that posits an infinite loss over a finite loss, one would be inclined to apply the same principle to all such possibilities, which is obviously irrational.
It is clear that Pascal hadn’t thought this one through very well, and I wouldn’t doubt that his judgement during this apologetic formulation was highly clouded by his own negativity bias (since the fear of punishment seems to be the primary focus of his “wager”).  As was mentioned earlier, we can see how effective this bias has been on a number of religious converts, and we need to be diligent about watching out for information that is presented to us with threatening strings attached, because it can easily cloud our judgement and lead to the adoption of irrational beliefs and dogma.
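Pascal’s decision-matrix reasoning, and the way infinite payoffs break it, can be made explicit with a small expected-value sketch (the probabilities and payoff values here are hypothetical placeholders, chosen only to expose the structure of the argument):

```python
def expected_value(p_god, payoff_if_god, payoff_if_no_god):
    """Expected payoff of a choice, given a probability that the god in question exists."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_no_god

INF = float("inf")

# Pascal's framing: belief risks only a finite cost (-1 here) but offers an
# infinite reward, so ANY nonzero probability makes the infinity dominate.
ev_believe = expected_value(0.001, INF, -1)
ev_disbelieve = expected_value(0.001, -INF, 1)
print(ev_believe)     # inf
print(ev_disbelieve)  # -inf

# The "which god?" objection: every rival hypothesis that also promises an
# infinite payoff yields the same infinite expected value, no matter how
# improbable it is, so the wager cannot choose among them.
rival_evs = [expected_value(p, INF, -1) for p in (0.5, 0.01, 1e-9)]
print(rival_evs)  # [inf, inf, inf]
```

This is also why the wager licenses absurd variants: any imagined threat with infinite stakes, however far-fetched, produces exactly the same conclusion.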

Belief Bias & Argument Proximity Effects

When we analyze arguments, we often judge their strength based on how plausible their conclusion is, rather than how well the premises actually support that conclusion.  In other words, people tend to focus their attention on the conclusion of an argument, and if they think that the conclusion is likely to be true (based on their prior beliefs), this affects their perception of how strong or weak the arguments themselves appear to be.  This is known as belief bias, and it is obviously a backwards way to analyze arguments: in logic, it is the premises and the reasoning that determine whether a conclusion is justified, not the other way around.  This implies that what is intuitive to us is often incorrect, and thus our cognitive biases are constantly at odds with logic and rationality.  This is why it takes a lot of practice to learn how to apply logic effectively in one’s thinking.  It just doesn’t come naturally, even though we often think it does.

Another cognitive deficit in how people analyze arguments is what I like to call the argument proximity effect.  Basically, when strong arguments are presented alongside weak arguments supporting a particular conclusion, the strong arguments will often appear weaker or less persuasive because of their proximity or association with the weaker ones.  This is partly because if a person thinks they can defeat the moderate or weak arguments, they will often believe that they could also defeat the stronger arguments, if only they were given enough time to do so.  It is as if the strong arguments become “tainted” by the weaker ones, even though the arguments are independent of one another.  Indeed, when a person has a number of arguments to support their position, a combination of strong and weak arguments is arguably better than the strong arguments alone, because there are simply more arguments that need to be addressed and rebutted.  Another reason for this effect is that the presence of weak arguments weakens the credibility of the person presenting them, and so the stronger arguments aren’t taken as seriously by the recipient.  Just as with belief bias, this cognitive deficit is ultimately caused by not employing logic properly, if at all.  Making people aware of this cognitive flaw is only half the battle; once again, we also need to learn about logic and how to apply it effectively in our thinking.

To read part 3 of 3, click here.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 1 of 3)


It is often surprising to consider the large number of people that still subscribe to dogmatic beliefs, despite our living in a post-Enlightenment age of science, reason, and skeptical inquiry.  We have a plethora of scientific evidence and an enormous amount of acquired data illustrating just how fallacious various dogmatic belief systems are, as well as how dogmatism in general is utterly useless for gaining knowledge or making responsible decisions throughout one’s life.  We also have ample means of distributing this valuable information to the public through academic institutions, educational television programming, online scientific resources, books, etc.  Yet we still have an incredibly large number of people who prefer to believe various dogmatic claims (and with a feeling of “certainty”) over those supported by empirical evidence and reason.  Furthermore, when some of these people are presented with scientific evidence that refutes their dogmatic beliefs, they outright deny that the evidence exists or simply rationalize it away.  This is a very troubling problem in our society, as this kind of muddled thinking often prevents many people from making responsible, rational decisions.  Irrational thinking in general has caused many people to vote for political candidates for all the wrong reasons.  It has caused many people to raise their own children in ways that promote intolerance and prejudice, and that undervalue if not entirely abhor invaluable intellectual virtues such as rational skepticism and critical thought.
Some may reasonably assume that (at least) many of these irrational belief systems and behaviors are simply a result of low self-esteem, inadequate education, and/or low intelligence, and it turns out that several dozen sociological studies have found evidence supporting this line of reasoning.  That is, a person with lower self-esteem, less education, and/or lower intelligence is more likely to be religious and dogmatic.  However, there is clearly much more to it than that, especially since there are still a lot of well-educated and intelligent people who hold these irrational belief systems.  Furthermore, every human being is quite often irrational in their thinking.  So what else could be contributing to this irrationality (and dogmatism)?  The answer seems to lie in the realms of cognitive science and psychology.  I’ve decided to split this post into three parts, because there is a lot of information I’d like to cover.  Here’s the link to part 2 of 3, which can also be found at the end of this post.

Cognitive Biases

Human beings are very intelligent relative to most other species on this planet; however, we are still riddled with various flaws in our brains which drastically reduce our ability to think logically or rationally.  This isn’t surprising once one recognizes that our brains are the product of evolution, and thus they weren’t “designed” in any way at all, let alone to operate logically or rationally.  Instead, what we see in human beings is a highly capable and adaptable brain, yet one with a number of cognitive biases and shortcomings.  Though a large number (if not all) of these biases developed as a sub-optimal yet fairly useful survival strategy, specifically within the context of our evolutionary past, most of them have become a liability in our modern civilization, and they often impede our intellectual development as individuals and as a society.  A lot of these cognitive biases serve (or once served) as an effective way of making certain decisions rather quickly; that is, the biases have effectively served as heuristics for simplifying the decision-making process for many everyday problems.  While heuristics are valuable and often make our decision-making faculties more efficient, they are often far from optimal, because the increased efficiency is typically purchased with a reduction in accuracy.  Again, this is exactly what we expect to find in the products of evolution — a suboptimal strategy for solving some problem or accomplishing some goal, but one that has worked well enough to give the organism (in this case, human beings) an advantage over the competition within some environmental niche.

So what kinds of cognitive biases do we have exactly, and how many of them have cognitive scientists discovered?  A good list of them can be found here.  I’m going to mention a few of them in this post, specifically those that promote or reinforce dogmatic thinking, those that promote or reinforce an appeal to a supernatural world view, and ultimately those that seem to most hinder overall intellectual development and progress.  To begin, I’d like to briefly discuss Cognitive Dissonance Theory and how it pertains to the automated management of our beliefs.

Cognitive Dissonance Theory

The human mind doesn’t tolerate internal conflicts very well, so when we hold beliefs that contradict one another, or when we are exposed to new information that conflicts with an existing belief, some degree of cognitive dissonance results and we are effectively pushed out of our comfort zone.  The mind attempts to rid itself of this cognitive dissonance through a few different methods: a person may alter the importance of the original belief or of the new information, they may change the original belief, or they may seek evidence that is critical of the new information.  Generally, the easiest path for the mind to take is the first or last method mentioned here, as it is far more difficult for a person to change their existing beliefs, partly because a person’s existing set of beliefs is their only frame of reference when encountering new information, and there is an innate drive to maintain a feeling of familiarity and certainty about our beliefs.

The primary problem with these automated cognitive dissonance reduction methods is that they tend to cause people to defend their beliefs in one way or another rather than to question them and try to analyze them objectively.  Unfortunately, this means that we often fail to consider new evidence and information using a rational approach, and this in turn can cause people to acquire quite a large number of irrational, false beliefs, despite having a feeling of certainty regarding the truth of those beliefs.  It is also important to note that many of the cognitive biases that I’m about to mention result from these cognitive dissonance reduction methods, and they can become compounded to create severe lapses in judgement.

Confirmation Bias, Semmelweis Reflex, and The Frequency Illusion

One of the most significant cognitive biases we have is what is commonly referred to as confirmation bias.  Basically, this bias refers to our tendency to seek out, interpret, or remember information in a particular way that serves to confirm our existing beliefs.  To put it more simply, we often see only what we want to see.  It is a method that our brain uses to sift through large amounts of information and piece it together with what we already believe into a meaningful storyline or explanation.  Just as we would expect, it ultimately optimizes for efficiency over accuracy, and therein lies the problem.  Even though this bias applies to everyone, the dogmatist in particular has a larger cognitive barrier to overcome because they’re also using a fallacious epistemological methodology right from the start (i.e. appealing to some authority rather than to reason and evidence).  So while everyone is affected by this bias to some degree, the dogmatic believer has an epistemological flaw that compounds the issue and makes matters worse.  They will, to a much higher degree, live their life unconsciously ignoring any evidence encountered that refutes their beliefs (or fallaciously reinterpreting it in their favor), and they will rarely if ever actively seek out such evidence to try to disprove their beliefs.  A more rational thinker, on the other hand, has a belief system primarily based on a reliable epistemological methodology supported by empirical evidence, one that has been proven to provide increasingly accurate knowledge better than any other method (i.e. the scientific method).  Nevertheless, everyone is affected by this bias, and because it operates outside the conscious mind, we all fail to notice it as it actively modifies our perception of reality.

One prominent example in our modern society, illustrating how this cognitive bias can reinforce dogmatic thinking (even among large groups of people), is the ongoing debate between Young Earth Creationists and the scientific consensus regarding evolutionary theory.  Even though the theory of evolution is a scientific fact (much like many other scientific theories, such as the theory of gravity or the germ theory of disease), and even though there are several hundred thousand scientists that can attest to its validity, as well as a plethora of evidence within a large number of scientific fields supporting it, Creationists seem to be in complete and utter denial of this actuality.  Not only do they ignore the undeniable wealth of evidence that is presented to them, but they also misinterpret (or cherry-pick) evidence to suit their position.  This problem is compounded by the fact that their beliefs are supported only by fallacious tautologies [e.g. “It’s true because (my interpretation of) a book called the Bible says so…”] and other irrational arguments based on nothing more than a particular religious dogma and a lot of intellectual dishonesty.

I’ve had numerous debates with these kinds of people (and I used to BE one of them many years ago, so I understand how many of them arrived at these beliefs), and I’ll often quote several scientific sources and various logical arguments to support my claims (or to refute theirs), and it is often mind-boggling to witness their response, with the new information seemingly going in one ear and out the other without undergoing any mental processing or critical consideration.  It is as if they didn’t hear a single argument that was pointed out to them and merely executed some kind of rehearsed reflex.  The futility of the communication is often confirmed when one finds oneself having to repeat the same arguments and refutations over and over again, seemingly falling on deaf ears, with no logical rebuttal of the points made.  On a side note, this reflex-like response, where a person quickly rejects new evidence because it contradicts their established belief system, is known as the “Semmelweis reflex/effect”, though it appears to be just another form of confirmation bias.  Some of these cases of denial and irrationality are nothing short of ironic, since these people are living in a society that owes its technological advancements exclusively to rational, scientific methodologies, and they are indeed enjoying and utilizing many of the benefits of science every day of their lives (even if they don’t realize it).  Yet we still find many of them hypocritically abhorring or ignoring science and reason whenever it is convenient to do so, such as when they try to defend their dogmatic beliefs.

Another prominent example of confirmation bias reinforcing dogmatic thinking relates to various other beliefs in the supernatural.  People that believe in the efficacy of prayer, for example, will tend to remember prayers that were “answered” and forget or rationalize away any prayers that weren’t “answered”, since their beliefs reinforce such a selective memory.  Similarly, if a plane crash or other disaster occurs, some people with certain religious beliefs will often mention or draw attention to the idea that their god or some supernatural force must have intervened or played a role in preventing the death of any survivors (if there are any), while they seem to downplay or ignore the facts pertaining to the many others that did not survive (if there are any survivors at all).  In the case of prayer, numerous studies have shown that prayer for another person (e.g. to heal them from an illness) is ineffective when that person doesn’t know that they’re being prayed for, thus any efficacy of prayer has been shown to be a result of the placebo effect and nothing more.  It may be the case that prayer is one of the most effective placebos, largely due to the incredibly high level of belief in its efficacy, and the fact that there is no expected time frame for it to “take effect”.  That is, since nobody knows when a prayer will be “answered”, then even if the prayer isn’t “answered” for several months or even several years (and time ranges like this are not unheard of for many people that have claimed to have prayers answered), then confirmation bias will be even more effective than a traditional medical placebo.  After all, a typical placebo pill or treatment is expected to work within a reasonable time frame comparable to other medicine or treatments taken in the past, but there’s no deadline for a prayer to be answered by.  
It’s reasonable to assume that even if these people were shown the evidence that prayer is no more effective than a placebo (even if it is the most effective placebo available), they would reject it or rationalize it away, once again, because their beliefs require it to be so.  The same bias applies to numerous other purportedly paranormal phenomena, where people see significance in some event because of how their memory or perception is operating in order to promote or sustain their beliefs.  As mentioned earlier, we often see what we want to see.

There is also a closely related bias known as congruence bias, which occurs because people rely too heavily on directly testing a given hypothesis, and often neglect to indirectly test that hypothesis.  For example, suppose that you were introduced to a new lighting device with two buttons on it, a green button and a red button.  You are told that the device will only light up by pressing the green button.  A direct test of this hypothesis would be to press the green button, and see if it lights up.  An indirect test of this hypothesis would be to press the red button, and see if it doesn’t light up.  Congruence bias illustrates that we tend to avoid the indirect testing of hypotheses, and thus we can start to form irrational beliefs by mistaking correlation with causation, or by forming incomplete explanations of causation.  In the case of the aforementioned lighting device, it could be the case that both buttons cause it to light up.  Think about how this congruence bias affects our general decision making process, where when we combine it with our confirmation bias, we are inclined to not only reaffirm our beliefs, but to avoid trying to disprove them (since we tend to avoid indirect testing of those beliefs).  This attitude and predisposition reinforces dogmatism by assuming the truth of one’s beliefs and not trying to verify them in any rational, critical way.
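The two-button device above can be sketched in a few lines of code (a toy model; the hidden behavior is my own assumption for illustration): suppose that, unknown to the tester, both buttons actually light the device.

```python
def device(button):
    """Hidden ground truth: the device lights up for EITHER button."""
    return button in ("green", "red")

# Hypothesis under test: "the device lights up ONLY when the green button is pressed."

# Direct test (the one congruence bias favors): press green, expect a light.
direct_result = device("green")    # True, so the hypothesis looks "confirmed"

# Indirect test (the one we tend to skip): press red, expect NO light.
indirect_result = device("red")    # True -- the red button lights it too

hypothesis_survives_direct = direct_result is True
hypothesis_survives_indirect = indirect_result is False
print(hypothesis_survives_direct)    # True
print(hypothesis_survives_indirect)  # False
```

The direct test alone leaves the “only green” hypothesis standing; only the neglected indirect test exposes that the causal explanation was incomplete.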

An interesting cognitive bias that often works in conjunction with a person’s confirmation bias is something referred to as the frequency illusion, whereby some detail of an event or some specific object may enter a person’s thoughts or attention, and then suddenly it seems that they are experiencing the object or event at a higher than normal frequency.  For example, if a person thinks about a certain animal, say a turtle, they may start noticing lots of turtles around them that they would have normally overlooked.  This person may even go to a store and suddenly notice clothing or other gifts with turtle patterns or designs on them that they didn’t seem to notice before.  After all, “turtle” is on their mind, or at least in the back of their mind, so their brain is unconsciously “looking” for turtles, and the person isn’t aware of their own unconscious pattern recognition sensitivity.  As a result, they may think that this perceived higher frequency of “seeing turtles” is abnormal and that it must be more than simply a coincidence.  If this happens to a person that appeals to the supernatural, their confirmation bias may mistake this frequency illusion for a supernatural event, or something significant.  Since this happens unconsciously, people can’t control this illusion or prevent it from happening.  However, once a person is aware of the frequency illusion as a cognitive bias that exists, they can at least reassess their experiences with a larger toolbox of rational explanations, without having to appeal to the supernatural or other irrational belief systems.  So in this particular case, we can see how various cognitive biases can “stack up” with one another and cause serious problems in our reasoning abilities and negatively affect how accurately we perceive reality.

To read part 2 of 3, click here.