The Open Mind

Cogito Ergo Sum


Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 3 of 3)


This is part 3 of 3 of this post.  Click here to read part 2.

Reactive Devaluation

Studies have shown that if a claim is believed to have come from a friend or ally, the person receiving it will be more likely to think that it has merit and is truthful.  Likewise, if it is believed to have come from an enemy or a member of the opposition, it is significantly devalued.  This bias is known as reactive devaluation.  Psychologists have determined that the likeability of the source of the message is one of the key factors involved in this effect: people will actually value information coming from a likeable source more than from a source they don’t like, to roughly the same degree that they would value information coming from an expert over a non-expert.  This is quite troubling.  We’ve often heard that political campaigns and certain debates are seen more as popularity contests than anything else, and this bias is likely a primary reason why this is often the case.

Unfortunately, a large part of society has painted a nasty picture of atheism, skepticism, and the like.  As it turns out, people who do not believe in a god (mainly the Christian god) are the least trusted minority in America, according to a sociological study performed a few years ago (Johanna Olexy and Lee Herring, “Atheists Are Distrusted: Atheists Identified as America’s Most Distrusted Minority, According to Sociological Study,” American Sociological Association News, May 3, 2006).  In the fight against dogmatism, then, the rational thinker needs to consider carefully how to improve their likeability in the eyes of their audience, if nothing else by going above and beyond to remain kind and respectful and to maintain a positive, uplifting attitude while presenting their arguments.  In any case, the person arguing will always be up against any societal expectations and preferences that may reduce their likeability, so there’s also a need to remind others as well as ourselves about what matters most when discussing an issue — the message rather than the messenger.

The Backfire Effect & Psychological Reactance

There have already been several cognitive biases mentioned that interfere with people’s ability to accept new evidence that contradicts their beliefs.  To make matters worse, psychologists have discovered that people often react to disconfirming evidence by actually strengthening their beliefs.  This is referred to as the backfire effect.  It can render refutations futile much of the time, but there are some strategies that help to combat it.  The bias is largely exacerbated by a person’s emotional involvement and by the fact that people don’t want to appear unintelligent or incorrect in front of others.  Letting tempers cool down before resuming a debate can be quite helpful regarding the emotional element.  Additionally, if a person can show their opponent how accepting the disconfirming evidence will benefit them, the opponent will be more likely to accept it.  Sometimes the disconfirming evidence is at least partially absorbed, and the person just needs more time in a less tense environment to think things over.  If the situation gets too heated, or if a person risks losing face with peers around them, the ability to persuade that person only decreases.

Another similar cognitive bias is referred to as reactance, which is basically a motivational reaction to perceived infringements of one’s behavioral freedoms.  That is, if a person feels that someone is taking away or limiting their choices, such as when they are being heavily pressured into accepting a different point of view, they will often respond defensively.  They may feel that freedoms are being taken away from them, and it is only natural for a person to defend whatever freedoms they see themselves having or potentially losing.  Whenever one uses reverse psychology, one is in fact playing on at least an implicit knowledge of this reactance effect.  Does this mean that rational thinkers should use reverse psychology to persuade dogmatists to accept reason and evidence?  I personally don’t think this is the right approach, because it could also backfire, and I would prefer to be honest and straightforward even if this results in less efficacy.  At the very least, however, one needs to be aware of this bias and be careful with how one phrases arguments and presents evidence, and to maintain a calm and respectful attitude during these discussions, so that the other person doesn’t feel the need to defend themselves at the cost of ignoring evidence.

Pareidolia & Other Perceptual Illusions

Have you ever seen faces of animals or people while looking at clouds, shadows, or other similar stimuli?  The brain has many pattern-recognition modules and will often respond erroneously to vague or even random stimuli, such that we perceive something familiar even when it isn’t actually there.  This is called pareidolia, and it is a type of apophenia (i.e. seeing patterns in random data).  Not surprisingly, many people have reported seeing various religious imagery, most notably the faces of prominent religious figures, in ordinary phenomena.  People have also erroneously heard “hidden messages” in records played in reverse, among other similar illusory perceptions.  This happens because the brain often fills in gaps: if one expects to see a pattern (even if unconsciously driven by some emotional significance), the brain’s pattern-recognition modules can “over-detect,” producing false positives through excessive sensitivity and by conflating unconscious imagery with one’s actual sensory experiences, leading to distorted perceptions.  It goes without saying that this cognitive bias is perfect for reinforcing superstitious or supernatural beliefs, because what people tend to see or experience from this effect are the things that are most emotionally significant to them.  Afterward, people believe that what they saw or heard must have been real and therefore significant.

Most people are aware that hallucinations occur in some people from time to time, under certain neurological conditions (generally caused by an imbalance of neurotransmitters).  A bias like pareidolia can produce similar experiences, but without requiring the specific neurological conditions that a textbook hallucination requires.  That is, the brain doesn’t need to be in a drug-induced state or physically abnormal in any way for this to occur, making it much more common than hallucination.  Since pareidolia is compounded by one’s confirmation bias, and since the brain is constantly running cognitive dissonance reduction mechanisms in the background (as mentioned earlier), it can also result in groups of people having the same illusory experience, especially if the group shares the same unconscious emotional motivations for “seeing” or experiencing something in particular.  These circumstances, along with the power of suggestion from even just one member of the group, can lead to what are known as collective hallucinations.  Since collective hallucinations like these inevitably lead to a feeling of corroboration and confirmation among the various people experiencing them (say, seeing a “spirit” or “ghost” of someone significant), they can reinforce one another’s beliefs, despite the fact that the experience was based on a cognitive illusion largely induced by powerful human emotions.  It’s likely no coincidence that this type of group phenomenon seems to occur only among members of a religion or cult, specifically those in a highly emotional and synchronized psychological state.  There have been many cases of large groups of people from various religions around the world claiming to see some prominent religious figure or other supernatural being, illustrating just how universal this phenomenon is within a highly emotional group context.

Hyperactive Agency Detection

There is a cognitive bias that is somewhat related to the aforementioned perceptual illusion biases which is known as hyperactive agency detection.  This bias refers to our tendency to erroneously assign agency to events that happen without an agent causing them.  Human beings all have a theory of mind that is based in part on detecting agency, where we assume that other people are causal agents in the sense that they have intentions and goals just like we do.  From an evolutionary perspective, we can see that our ability to recognize that others are intentional agents allows us to better predict another person’s behavior which is extremely beneficial to our survival (notably in cases where we suspect others are intending to harm us).

Unfortunately, our brain’s agency detection is hyperactive: it errs on the side of caution by over-detecting possible agency rather than under-detecting it (which could negatively affect our survival prospects).  As a result, people will often ascribe agency to things and events in nature that have no underlying intentional agent.  For example, people often anthropomorphize machines (implicitly ascribing agency to them) and get angry when they don’t work properly (such as an automobile that breaks down on the way to work), often feeling that the machine is “out to get us,” is “evil,” etc.  Most of the time, such feelings are immediately checked by our rational minds, which remind us that “it’s only a car,” or “it’s not conscious,” etc.  Similarly, when natural disasters or other such events occur, our hyperactive agency detection snaps into gear, trying to find “someone to blame.”  This cognitive bias helps explain, at least as a contributing factor, why so many ancient cultures assumed that there were gods that caused earthquakes, lightning, thunder, volcanoes, hurricanes, etc.  Quite simply, they needed to ascribe these events to someone.

Likewise, if we are walking in a forest and hear strange sounds in the bushes, we will often assume that some intentional agent is present (whether another person or an animal), even though it may simply be the wind causing the noise.  Here again we can see the evolutionary benefit of this agency detection, even with the occasional “false positive”: the noise in the bushes could very well be a predator or other source of danger, so it is usually better for our brains to assume that the noise is coming from an intentional agent.  It’s also entirely possible that even in cases where the source of the sound was determined to be the wind, early cultures ascribed an intentional stance to the wind itself (e.g. a “wind” god).  In all of these cases, we can see how our hyperactive agency detection can lead to irrational, theistic, and other supernatural belief systems, and we need to be aware of this bias when evaluating our intuitions about how the world works.
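The evolutionary logic behind this over-detection is just an asymmetry in error costs, which can be sketched with a toy expected-cost calculation.  The probabilities and costs below are illustrative assumptions, not measured values:

```python
# Error-management sketch: when a false negative (ignoring a real predator)
# is far costlier than a false positive (fleeing from the wind), a detector
# biased toward over-detecting agents has a lower expected cost.
# All numbers are illustrative assumptions.

p_agent = 0.05          # assumed probability the noise really is an agent
cost_false_alarm = 1    # wasted energy fleeing from the wind
cost_miss = 1000        # possibly fatal: failing to flee a real predator

# Strategy A: always assume an agent is present (hyperactive detection).
expected_cost_assume_agent = (1 - p_agent) * cost_false_alarm

# Strategy B: always assume it's just the wind.
expected_cost_assume_wind = p_agent * cost_miss

print(expected_cost_assume_agent)  # 0.95
print(expected_cost_assume_wind)   # 50.0
```

Under these assumptions, always over-detecting is dozens of times cheaper on average, which is why selection can favor a detector that routinely produces false positives.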

Risk Compensation

It is only natural that the more a person feels protected from various harms, the less carefully they tend to behave, and vice versa.  This tendency, sometimes referred to as risk compensation, is often harmless because, generally speaking, one who is well protected from harm does have less to worry about than one who is not, and thus can take greater relative risks as a result of that larger cushion of safety.  Where this causes problems, however, is when the perceived level of protection is far higher than the actual level.  The worst-case scenario that comes to mind is a person who believes that no harm can come to them because of some supernatural source of protection, especially one they believe provides the greatest protection possible.  In this scenario, their risk compensation is biased to a negative extreme, and they will be more likely to behave irresponsibly because they lack realistic expectations of the consequences of their actions.  In a post-enlightenment society, we’ve acquired a higher responsibility to heed the knowledge gained from scientific discoveries so that we can behave more rationally as we learn more about the consequences of our actions.

It’s not at all difficult to see how the effects of various religious dogmas on one’s level of risk compensation can inhibit a society from making rational, responsible decisions.  We have several serious issues plaguing modern society, including those related to environmental sustainability, climate change, pollution, war, etc.  If believers in the supernatural have an attitude that everything is “in God’s hands,” or that “everything will be okay” because of their belief in a protective god, they are far less likely to take these kinds of issues seriously, because they fail to acknowledge the obvious harm to everyone in society.  What’s worse, dogmatists are reducing not only their own safety but also that of their children and of everyone else in society who is trying to behave rationally with realistic expectations.  If we had two planets, one for rational thinkers and one for dogmatists, this would no longer be everyone’s problem.  The reality is that we share one planet, and thus we all suffer the consequences of any one person’s irresponsible actions.  If we want to make more rational decisions, especially those related to the safety of our planet’s environment, society, and posterity, people need to adjust their risk compensation so that it is based on empirical evidence and a rational, proven scientific methodology.  This clearly illustrates the need to phase out supernatural beliefs, since false beliefs that don’t correspond with reality can be quite dangerous to everybody, regardless of who carries them.  In any case, when one is faced with the choice between knowledge and ignorance, history has demonstrated which choice is best.

Final thoughts

The fact that we are living in an information age with increasing global connectivity, and in a society that is becoming increasingly reliant on technology, will likely help to combat the ill effects of these cognitive biases.  These factors are making it increasingly difficult to hide oneself from the evidence and from the proven efficacy of using the scientific method to form accurate beliefs about the world around us.  Societal expectations are also changing as a result, and it is becoming less and less socially acceptable to be irrational and dogmatic, and to dismiss scientific methodology.  In all of these cases, having knowledge about these cognitive biases will help people to combat them.  Even if we can’t eliminate the biases, we can safeguard ourselves by preparing for any ill effects they may produce.  This is where reason and science come into play, providing us with the means to counter our cognitive biases by applying rational skepticism to all of our beliefs, and by using a logical, reliable methodology based on empirical evidence to arrive at beliefs in which we can be justifiably confident (though never certain).  As Darwin once said, “Ignorance more frequently begets confidence than does knowledge; it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.”  Science has been steadily ratcheting our way toward a better understanding of the universe we live in, despite the fact that its results are often at odds with our intuitions about how we think the world is.  Only relatively recently has science started to provide us with information about why our intuitions are so often at odds with the scientific evidence.

We’ve made huge leaps within neuroscience, cognitive science, and psychology, and these discoveries are giving us the antidote we need to reduce or eliminate irrational and dogmatic thinking.  Furthermore, those who study logic have an additional advantage: they are more likely to spot fallacious arguments used to support this or that belief, and thus they are less likely to be tricked or persuaded by arguments that appear to be sound and strong but actually aren’t.  People fall prey to fallacious arguments all the time, mostly because they haven’t learned how to spot them (something courses in logic and reasoning can help remedy).  Thus, overall, I believe it is imperative that we integrate knowledge concerning our cognitive biases into our children’s educational curriculum, starting at a young age (in a format that is understandable and engaging, of course).  My hope is that if we start to integrate cognitive education (and complementary courses in logic and reasoning) as a part of our foundational education curriculum, we will see a cascade of positive effects that will guide our society more safely into the future.


Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 2 of 3)


This is part 2 of 3 of this post.  Click here to read part 1.

The Feeling of Certainty & The Overconfidence Effect

There are a lot of factors that affect how certain we feel that one belief or another is in fact true.  One factor that affects this feeling of certainty is what I like to call the time equals truth fallacy.  With this cognitive bias, we tend to feel more certain about our beliefs as time progresses, despite not gaining any new evidence to support them.  Since time spent holding a particular belief is the only factor involved, the effect is greatest on those who are relatively isolated or sheltered from new information.  So if a person has been indoctrinated with a particular set of beliefs (let alone irrational beliefs), and they are effectively “cut off” from any sources of information that could serve to refute those beliefs, it will become increasingly difficult to persuade them later on.  This situation sounds all too familiar, as dogmatic groups often isolate themselves and shelter their members from outside influences for this very reason.  If the group can isolate its members for long enough (or the members actively isolate themselves), the dogma effectively gets burned into them, and the brainwashing is far more successful.  That this cognitive bias has been exploited by various dogmatic and religious leaders throughout history is hardly surprising, considering how effective it is.

Though I haven’t researched or come across any proposed evolutionary reasons behind the development of this cognitive bias, I do have at least one hypothesis pertaining to its origin.  This bias may have resulted from the fact that the beliefs that matter most (from an evolutionary perspective) are those related to our survival goals (e.g. finding food sources or evading a predator).  So naturally, any belief that never caused noticeable harm to an organism was treated as more likely to be correct (i.e. keep using whatever works and don’t fix what ain’t broke).  However, once we started adding many more beliefs to our repertoire, specifically those that didn’t directly affect our survival (including many beliefs in the supernatural), the same cognitive rules and heuristics were still being applied as before, even though these new types of beliefs (when false) haven’t been naturally selected against, because they haven’t been detrimental enough to our survival (at least not yet, or not enough to force a significant evolutionary change).  So once again, evolution may have produced this bias for very advantageous reasons, but it is a sub-optimal heuristic (as heuristics always are), and one that has become a significant liability now that we also evolve culturally.

Another related cognitive bias, and one that is quite well established, is the overconfidence effect, whereby a person’s subjective feeling of confidence in his or her judgements or beliefs is predictably higher than the actual objective accuracy of those judgements and beliefs.  In a nutshell, people tend to have far more confidence in their beliefs and decisions than is warranted.  A common example cited to illustrate the intensity of this bias involves people taking certain quizzes, claiming to be “99% certain” about their answers, and then finding out afterwards that they were wrong about half of the time.  In other cases, the overconfidence effect can be even worse.
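The mismatch in the quiz example can be made concrete with a toy calibration check.  The answer data below is invented purely for illustration:

```python
# Toy calibration check: compare stated confidence with actual accuracy.
# The quiz answers below are invented for illustration only.
answers = [
    # (stated_confidence, was_correct)
    (0.99, True), (0.99, False), (0.99, True), (0.99, False),
    (0.99, False), (0.99, True), (0.99, False), (0.99, True),
]

n = len(answers)
mean_confidence = sum(conf for conf, _ in answers) / n
accuracy = sum(1 for _, correct in answers if correct) / n

print(f"mean stated confidence: {mean_confidence:.0%}")  # 99%
print(f"actual accuracy:        {accuracy:.0%}")         # 50%
print(f"overconfidence gap:     {mean_confidence - accuracy:.0%}")
```

A well-calibrated reasoner would show a gap near zero: among claims held at 99% confidence, about 99% would be true.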

In a sense, the feeling of certainty is like an emotion, which, like our other emotions, occurs as a natural reflex, regardless of the cause and independently of reason.  Just like other emotions such as anger, pleasure, or fear, the feeling of certainty can be produced by a seizure, by certain drugs, and even by electrical stimulation of certain regions of the brain.  In all of these cases, even when no particular beliefs are being thought about, the brain can produce a feeling of certainty nevertheless.  Research has shown that the feeling of knowing or certainty can also be induced through various brainwashing and trance-inducing techniques such as high repetition of words, rhythmic music, sleep deprivation (or fasting), and other types of social/emotional manipulation.  It is hardly a coincidence that many religions employ a number of these techniques in their repertoire.

The most important point to take away from learning about these confidence and certainty biases is that the feeling of certainty is not a reliable way of determining the accuracy of one’s beliefs.  Furthermore, one must realize that no matter how certain we may feel about a particular belief, we could be wrong, and often are.  Dogmatic belief systems such as those found in many religions are often propagated by a misleading and mistaken feeling of certainty, even if that feeling is more potent than any experienced before.  Often, when people subscribing to these dogmatic belief systems are questioned about the lack of empirical evidence supporting their beliefs, even after all of their arguments have been refuted, they simply reply, “I just know.”  Irrational, entirely unjustified responses like these illustrate the dire need for people to become aware of just how fallible their feeling of certainty can be.

Escalation of Commitment

If a person has invested their whole life in some belief system, then even if they encounter undeniable evidence that their beliefs are wrong, they are more likely to ignore it or rationalize it away than to modify their beliefs, and thus they will likely continue investing more time and energy in those false beliefs.  This is due to an effect known as escalation of commitment.  Basically, the higher the cumulative investment in a particular course of action, the more likely someone is to feel justified in continuing to increase that investment, despite new evidence showing that they’d be better off abandoning it and cutting their losses.  When it comes to trying to “convert” a dogmatic believer into a more rational free thinker, this irrational tendency severely impedes any chance of success, all the more so when that person has been invested in the dogma for a long time, since they ultimately have much more to lose.  To put it another way, a person’s religion is often a huge part of their personal identity.  Accepting evidence that refutes their beliefs would mean abandoning a large part of themselves, which obviously makes that acceptance and intellectual honesty quite difficult.  Furthermore, if a person’s family or friends have all invested in the same false beliefs, then even if that person discovers that those beliefs are wrong, they risk being rejected by their family and friends and forever losing the relationships dearest to them.  We can also see how this escalation of commitment is further reinforced by the time equals truth fallacy mentioned earlier.

Negativity Bias

When I’ve heard various Christians proselytizing to me or others, one tactic that I’ve seen used over and over again is fear-mongering with theologically based threats of eternal punishment and torture.  If we don’t convert, we’re told, we’re doomed to burn in hell for eternity.  Their incessant use of this tactic suggests that it was likely effective on them, contributing to their own conversion (it was in fact one of the reasons for my former conversion to Christianity).  Similar tactics have been used in some political campaigns to persuade voters by deliberately scaring them into taking one position over another.  Though this strategy is more effective on some than others, there is an underlying cognitive bias in all of us that contributes to its efficacy.  This is known as the negativity bias.  With this bias, information or experiences of a more negative nature tend to have a greater effect on our psychological states and resulting behavior than positive information or experiences that are equally intense.

People will remember threats to their well-being a lot more than they remember pleasurable experiences.  This looks like another example of a simple survival strategy implemented in our brains.  Similar to my earlier hypothesis regarding the time equals truth heuristic, it is far more important to remember and avoid dangerous or life-threatening experiences than it is to remember and seek out pleasurable experiences when all else is equal.  It only takes one bad experience to end a person’s life, whereas it is less critical to experience some minimum number of pleasurable experiences.  Therefore, it makes sense as an evolutionary strategy to allocate more cognitive resources and memory for avoiding the dangerous and negative experiences, and a negativity bias helps us to accomplish that.

Unfortunately, just as with the time equals truth bias, since our cultural evolution has involved us adopting certain beliefs that no longer pertain directly to our survival, the heuristic is often being executed improperly or in the wrong context.  This increases the chances that we will fall prey to adopting irrational, dogmatic belief systems when they are presented to us in a way that utilizes fear-mongering and various forms of threats to our well-being.  When it comes to conceiving of an overtly negative threat, can anyone imagine one more significant than the threat of eternal torture?  It is, by definition, supposed to be the worst scenario imaginable, and thus it is the most effective kind of threat to play on our negativity bias, and lead to irrational beliefs.  If the dogmatic believer is also convinced that their god can hear their thoughts, they’re also far less likely to think about their dogmatic beliefs critically, for they have no mental privacy to do so.

This bias in particular reminds me of Blaise Pascal’s famous Wager, which basically asserts that it is better to believe in God than to risk the consequences of not doing so.  If God doesn’t exist, we suffer “only” a finite loss (according to Pascal).  If God does exist, then belief leads to an eternal reward, and disbelief leads to an eternal punishment.  Therefore, Pascal asserts, one should believe in God, since it is the safest position to adopt.  Unfortunately, Pascal’s premises are questionable, and therefore the argument is unsound.  For one, is belief in God sufficient to avoid the supposed eternal punishment, or does it require more, such as other religious tenets, declarations, or rituals?  Second, which god should one believe in?  Thousands of gods have been proposed by various believers over several millennia, and there were obviously many more than we currently have records of, so Pascal’s Wager merely narrows it down to a choice among several thousand known gods.  Third, even setting aside the problem of choosing the correct god, if it turned out that there was no god, would a life of belief not have carried a significant cost?  If the belief also involved a host of dogmatic moral prescriptions, rituals, and other specific ways to live one’s life, perhaps including the requirement to abstain from many pleasurable human experiences, this cost could be quite great.  Furthermore, if one applies Pascal’s Wager to any theoretical possibility that posits a possible infinite loss over a finite loss, one would be inclined to apply the same principle to all such possibilities, which is obviously irrational.
It is clear that Pascal hadn’t thought this one through very well, and I wouldn’t doubt that his judgement during this apologetic formulation was highly clouded by his own negativity bias (since the fear of punishment seems to be the primary focus of his “wager”).  As was mentioned earlier, we can see how effective this bias has been on a number of religious converts, and we need to be diligent about watching out for information that is presented to us with threatening strings attached, because it can easily cloud our judgement and lead to the adoption of irrational beliefs and dogma.
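Pascal's payoff reasoning, and the many-gods objection to it, can be made explicit in a small expected-value sketch.  The probability, the finite cost of belief, and the rival god names below are all illustrative stand-ins, not claims about any actual theology:

```python
# Expected-value sketch of Pascal's Wager, using Python's float infinity
# as a stand-in for "infinite" reward/punishment. All numbers are
# illustrative assumptions.

INF = float("inf")

def expected_value(p_god_exists, payoff_if_exists, payoff_if_not):
    return p_god_exists * payoff_if_exists + (1 - p_god_exists) * payoff_if_not

# Pascal's two-option framing: any nonzero probability of God existing
# makes belief dominate, because the promised reward is infinite.
p = 0.001                                    # assumed tiny probability
ev_believe = expected_value(p, INF, -10)     # -10: assumed finite cost of belief
ev_disbelieve = expected_value(p, -INF, 0)

# The many-gods problem: with mutually exclusive gods, each promising
# infinite reward for belief, the same reasoning endorses believing in
# every one of them at once, which is impossible.
rival_gods = ["god_A", "god_B", "god_C"]     # hypothetical placeholders
for god in rival_gods:
    print(f"wager says: believe in {god} (EV = +inf)")
```

The sketch shows why the infinite payoff does all the work: once it is admitted, the wager "proves" contradictory conclusions, which is one way of seeing that the argument fails.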

Belief Bias & Argument Proximity Effects

When we analyze arguments, we often judge their strength based on how plausible their conclusion is, rather than on how well they actually support that conclusion.  In other words, people tend to focus on the conclusion of an argument, and if they think the conclusion is likely to be true (based on their prior beliefs), this affects how strong or weak the arguments themselves appear to be.  This is known as belief bias, and it is an incorrect way to analyze arguments: in logic, the premises and structure of an argument determine whether the conclusion follows, not the other way around.  This implies that what is intuitive to us is often incorrect, and thus our cognitive biases are constantly at odds with logic and rationality.  This is why it takes a lot of practice to learn how to apply logic effectively in one’s thinking.  It just doesn’t come naturally, even though we often think it does.
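The point that validity depends only on an argument's form, never on the plausibility of its conclusion, can even be checked mechanically.  The sketch below brute-forces a truth table over an argument's propositional variables; the lambda encoding is just one illustrative choice:

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """An argument is valid iff no truth assignment makes all premises
    true while the conclusion is false -- plausibility plays no role."""
    for values in product([False, True], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample assignment
    return True

# Valid form (modus ponens), no matter how implausible the content is:
#   If P then Q; P; therefore Q.
valid = is_valid(
    premises=[lambda p, q: (not p) or q, lambda p, q: p],
    conclusion=lambda p, q: q,
    n_vars=2,
)

# Invalid form (affirming the consequent), even if the conclusion
# happens to be believable:  If P then Q; Q; therefore P.
invalid = is_valid(
    premises=[lambda p, q: (not p) or q, lambda p, q: q],
    conclusion=lambda p, q: p,
    n_vars=2,
)

print(valid, invalid)  # True False
```

Note that the checker never looks at what P and Q mean; it inspects only the argument's structure, which is exactly the discipline that belief bias undermines.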

Another cognitive deficit regarding how people analyze arguments irrationally is what I like to call the argument proximity effect.  Basically, when strong arguments are presented along with weak arguments that support a particular conclusion, the strong arguments will often appear to be weaker or less persuasive because of their proximity or association with the weaker ones.  This is partly due to the fact that if a person thinks that they can defeat the moderate or weak arguments, they will often believe that they can also defeat the stronger argument, if only they were given enough time to do so.  It is as if the strong arguments become “tainted” by the weaker ones, even though the opposite is true since the arguments are independent of one another.  That is, when a person has a number of arguments to support their position, a combination of strong and weak arguments is always better than only having the strong arguments, because there are simply more arguments that need to be addressed and rebutted in the former than in the latter.  Another reason for this effect is that the presence of weak arguments also weakens the credibility of the person presenting them, and so then the stronger arguments aren’t taken as seriously by the recipient.  Just as with our belief bias, this cognitive deficit is ultimately caused by not employing logic properly, if at all.  Making people aware of this cognitive flaw is only half the battle, as once again we also need to learn about logic and how to apply it effectively in our thinking.

To read part 3 of 3, click here.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 1 of 3)

It is often surprising to think about the large number of people that still subscribe to dogmatic beliefs, despite our living in a post-Enlightenment age of science, reason, and skeptical inquiry.  We have a plethora of scientific evidence and an enormous amount of acquired data illustrating just how fallacious various dogmatic belief systems are, as well as how dogmatism in general is utterly useless for gaining knowledge or making responsible decisions throughout one’s life.  We also have ample means of distributing this valuable information to the public through academic institutions, educational television programming, online scientific resources, books, etc.  Yet we still have an incredibly large number of people who prefer to believe various dogmatic claims (and with a feeling of “certainty”) over those supported by empirical evidence and reason.  Furthermore, when some of these people are presented with the scientific evidence that refutes their dogmatic belief, they outright deny that the evidence exists or they simply rationalize it away.  This is a very troubling problem in our society, as this kind of muddled thinking often prevents many people from making responsible, rational decisions.  Irrational thinking in general has caused many people to vote for political candidates for all the wrong reasons.  It has caused many people to raise their own children in ways that promote intolerance and prejudice, and that undervalue, if not entirely abhor, invaluable intellectual virtues such as rational skepticism and critical thought.

Some may reasonably assume that (at least) many of these irrational belief systems and behaviors are simply a result of low self-esteem, inadequate education, and/or low intelligence, and it turns out that several dozen sociological studies have found evidence supporting this line of reasoning.  That is, a person who has lower self-esteem, less education, and/or lower intelligence is more likely to be religious and dogmatic.  However, there is clearly much more to it than that, especially since many people who hold these irrational belief systems are also well educated and intelligent.  Furthermore, every human being is quite often irrational in their thinking.  So what else could be contributing to this irrationality (and dogmatism)?  Well, the answer seems to lie in the realms of cognitive science and psychology.  I’ve decided to split this post into three parts, because there is a lot of information I’d like to cover.  Here’s the link to part 2 of 3, which can also be found at the end of this post.

Cognitive Biases

Human beings are very intelligent relative to most other species on this planet; however, we are still riddled with various flaws in our brains that drastically reduce our ability to think logically or rationally.  This isn’t surprising once one recognizes that our brains are the product of evolution, and thus they weren’t “designed” in any way at all, let alone to operate logically or rationally.  Instead, what we see in human beings is a highly capable and adaptable brain, yet one with a number of cognitive biases and shortcomings.  Though a large number (if not all) of these biases developed as a sub-optimal yet fairly useful survival strategy, specifically within the context of our evolutionary past, most of them have become a liability in our modern civilization, and often impede our intellectual development as individuals and as a society.  A lot of these cognitive biases serve (or once served) as an effective way of making certain decisions rather quickly; that is, the biases have effectively served as heuristics for simplifying the decision-making process for many everyday problems.  While heuristics are valuable and often make our decision-making faculties more efficient, they are often far from optimal, because their increased efficiency comes at the cost of accuracy.  Again, this is exactly what we expect to find with products of evolution — a suboptimal strategy for solving some problem or accomplishing some goal, but one that has worked well enough to give the organism (in this case, human beings) an advantage over the competition within some environmental niche.

So what kinds of cognitive biases do we have exactly, and how many of them have cognitive scientists discovered?  A good list of them can be found here.  I’m going to mention a few of them in this post, specifically those that promote or reinforce dogmatic thinking, those that promote or reinforce an appeal to a supernatural world view, and ultimately those that seem to most hinder overall intellectual development and progress.  To begin, I’d like to briefly discuss Cognitive Dissonance Theory and how it pertains to the automated management of our beliefs.

Cognitive Dissonance Theory

The human mind doesn’t tolerate internal conflicts very well, and so when we have beliefs that contradict one another or when we are exposed to new information that conflicts with an existing belief, some degree of cognitive dissonance results and we are effectively pushed out of our comfort zone.  The mind attempts to rid itself of this cognitive dissonance through a few different methods.  A person may alter the importance of the original belief or that of the new information, they may change the original belief, or they may seek evidence that is critical of the new information.  Generally, the easiest path for the mind to take is the first and last method mentioned here, as it is far more difficult for a person to change their existing beliefs, partly due to the fact that a person’s existing set of beliefs is their only frame of reference when encountering new information, and there is an innate drive to maintain a feeling of familiarity and a feeling of certainty of our beliefs.

The primary problem with these automated cognitive dissonance reduction methods is that they tend to cause people to defend their beliefs in one way or another rather than to question them and try to analyze them objectively.  Unfortunately, this means that we often fail to consider new evidence and information using a rational approach, and this in turn can cause people to acquire quite a large number of irrational, false beliefs, despite having a feeling of certainty regarding the truth of those beliefs.  It is also important to note that many of the cognitive biases that I’m about to mention result from these cognitive dissonance reduction methods, and they can become compounded to create severe lapses in judgement.

Confirmation Bias, Semmelweis Reflex, and The Frequency Illusion

One of the most significant cognitive biases we have is what is commonly referred to as confirmation bias.  Basically, this bias refers to our tendency to seek out, interpret, or remember information in ways that confirm our existing beliefs.  To put it more simply, we often see only what we want to see.  It is a method that our brain uses to sift through large amounts of information and piece it together with what we already believe into a meaningful story line or explanation.  Just as we’d expect, it ultimately optimizes for efficiency over accuracy, and therein lies the problem.  Even though this bias applies to everyone, the dogmatist in particular has a larger cognitive barrier to overcome because they’re also using a fallacious epistemological methodology right from the start (i.e. appealing to some authority rather than to reason and evidence).  So while everyone is affected by this bias to some degree, the dogmatic believer has an epistemological flaw that compounds the issue and makes matters worse.  They will, to a much higher degree, live their life unconsciously ignoring any evidence encountered that refutes their beliefs (or fallaciously reinterpreting it in their favor), and they will rarely if ever attempt to actively seek out such evidence to try and disprove their beliefs.  A more rational thinker, on the other hand, has a belief system primarily based on a reliable epistemological methodology (which uses empirical evidence to support it), one that has been proven to provide increasingly accurate knowledge better than any other method (i.e. the scientific method).  Nevertheless, everyone is affected by this bias, and because it operates outside the conscious mind, we all fail to notice it as it actively modifies our perception of reality.

One prominent example in our modern society, illustrating how this cognitive bias can reinforce dogmatic thinking (and with large groups of people), is the ongoing debate between Young Earth Creationists and the scientific consensus regarding evolutionary theory.  Even though the theory of evolution is a scientific fact (much like many other scientific theories, such as the theory of gravity, or the Germ theory of disease), and even though there are several hundred thousand scientists that can attest to its validity, as well as a plethora of evidence within a large number of scientific fields supporting its validity, Creationists seem to be in complete and utter denial of this actuality.  Not only do they ignore the undeniable wealth of evidence that is presented to them, but they also misinterpret evidence (or cherry pick) to suit their position.  This problem is compounded by the fact that their beliefs are only supported by fallacious tautologies [e.g. “It’s true because (my interpretation of) a book called the Bible says so…”], and other irrational arguments based on nothing more than a particular religious dogma and a lot of intellectual dishonesty.

I’ve had numerous debates with these kinds of people (and I used to BE one of them many years ago, so I understand how many of them arrived at these beliefs), and I’ll often quote several scientific sources and various logical arguments to support my claims (or to refute theirs), and it is often mind-boggling to witness their response, with the new information seemingly going in one ear and coming out the other without undergoing any mental processing or critical consideration.  It is as if they didn’t hear a single argument that was pointed out to them and merely executed some kind of rehearsed reflex.  The futility of the communication is often confirmed when one finds oneself having to repeat the same arguments and refutations over and over again, seemingly falling on deaf ears, with no logical rebuttal of the points made.  On a side note, this reflex-like response in which a person quickly rejects new evidence because it contradicts their established belief system is known as the “Semmelweis reflex/effect”, but it appears to be just another form of confirmation bias.  Some of these cases of denial and irrationality are nothing short of ironic, since these people are living in a society that owes its technological advancements exclusively to rational, scientific methodologies, and they are indeed patronizing and utilizing many of these benefits of science every day of their lives (even if they don’t realize it).  Yet we still find many of them hypocritically abhorring or ignoring science and reason whenever it is convenient to do so, such as when they try to defend their dogmatic beliefs.

Another prominent example of confirmation bias reinforcing dogmatic thinking relates to various other beliefs in the supernatural.  People who believe in the efficacy of prayer, for example, will tend to remember prayers that were “answered” and forget or rationalize away any prayers that weren’t “answered”, since their beliefs reinforce such a selective memory.  Similarly, if a plane crash or other disaster occurs, some people with certain religious beliefs will often draw attention to the idea that their god or some supernatural force must have intervened or played a role in preventing the deaths of any survivors, while they downplay or ignore the facts pertaining to the many others who did not survive (if indeed anyone survived at all).  In the case of prayer, numerous studies have shown that prayer for another person (e.g. to heal them from an illness) is ineffective when that person doesn’t know that they’re being prayed for; thus any efficacy of prayer has been shown to be a result of the placebo effect and nothing more.  It may be the case that prayer is one of the most effective placebos, largely due to the incredibly high level of belief in its efficacy, and the fact that there is no expected time frame for it to “take effect”.  That is, since nobody knows when a prayer will be “answered”, then even if the prayer isn’t “answered” for several months or even several years (and time ranges like this are not unheard of among people who have claimed to have prayers answered), confirmation bias will be even more effective than a traditional medical placebo.  After all, a typical placebo pill or treatment is expected to work within a reasonable time frame comparable to other medicines or treatments taken in the past, but there’s no deadline for a prayer to be answered by.
It’s reasonable to assume that even if these people were shown the evidence that prayer is no more effective than a placebo (even if it is the most effective placebo available), they would reject it or rationalize it away, once again, because their beliefs require it to be so.  The same bias applies to numerous other purportedly paranormal phenomena, where people see significance in some event because of how their memory or perception is operating in order to promote or sustain their beliefs.  As mentioned earlier, we often see what we want to see.

There is also a closely related bias known as congruence bias, which occurs because people rely too heavily on directly testing a given hypothesis, and often neglect to indirectly test that hypothesis.  For example, suppose that you were introduced to a new lighting device with two buttons on it, a green button and a red button.  You are told that the device will only light up by pressing the green button.  A direct test of this hypothesis would be to press the green button, and see if it lights up.  An indirect test of this hypothesis would be to press the red button, and see if it doesn’t light up.  Congruence bias illustrates that we tend to avoid the indirect testing of hypotheses, and thus we can start to form irrational beliefs by mistaking correlation with causation, or by forming incomplete explanations of causation.  In the case of the aforementioned lighting device, it could be the case that both buttons cause it to light up.  Think about how this congruence bias affects our general decision making process, where when we combine it with our confirmation bias, we are inclined to not only reaffirm our beliefs, but to avoid trying to disprove them (since we tend to avoid indirect testing of those beliefs).  This attitude and predisposition reinforces dogmatism by assuming the truth of one’s beliefs and not trying to verify them in any rational, critical way.
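
The lighting-device example can be made concrete with a small sketch (hypothetical code, invented for illustration).  Here the hidden rule is that both buttons work, so the direct test alone appears to confirm the hypothesis, and only the indirect test exposes the error:

```python
def make_device(both_buttons_work=True):
    """A hypothetical two-button lighting device (names invented for
    illustration).  Hidden rule: here BOTH buttons light it up."""
    def press(button):
        if both_buttons_work:
            return True           # lights up for green AND red
        return button == "green"  # lights up only for green
    return press

press = make_device(both_buttons_work=True)

# Direct test of the hypothesis "only the green button lights it up":
direct = press("green")      # True: seems to confirm the hypothesis
# Indirect test, which congruence bias tempts us to skip:
indirect = not press("red")  # False: the red button lights it up too

print(direct, indirect)  # True False
```

Relying only on the direct test mistakes a confirming observation for proof, which is exactly how correlation gets mistaken for causation.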

An interesting cognitive bias that often works in conjunction with a person’s confirmation bias is something referred to as the frequency illusion, whereby some detail of an event or some specific object may enter a person’s thoughts or attention, and then suddenly it seems that they are experiencing the object or event at a higher than normal frequency.  For example, if a person thinks about a certain animal, say a turtle, they may start noticing lots of turtles around them that they would have normally overlooked.  This person may even go to a store and suddenly notice clothing or other gifts with turtle patterns or designs on them that they didn’t seem to notice before.  After all, “turtle” is on their mind, or at least in the back of their mind, so their brain is unconsciously “looking” for turtles, and the person isn’t aware of their own unconscious pattern recognition sensitivity.  As a result, they may think that this perceived higher frequency of “seeing turtles” is abnormal and that it must be more than simply a coincidence.  If this happens to a person that appeals to the supernatural, their confirmation bias may mistake this frequency illusion for a supernatural event, or something significant.  Since this happens unconsciously, people can’t control this illusion or prevent it from happening.  However, once a person is aware of the frequency illusion as a cognitive bias that exists, they can at least reassess their experiences with a larger toolbox of rational explanations, without having to appeal to the supernatural or other irrational belief systems.  So in this particular case, we can see how various cognitive biases can “stack up” with one another and cause serious problems in our reasoning abilities and negatively affect how accurately we perceive reality.

To read part 2 of 3, click here.

An Evolved Consciousness Creating Conscious Evolution

Two Evolutionary Leaps That Changed It All

As I’ve mentioned in a previous post, human biological evolution has led to the emergence of not only consciousness but also a co-existing yet semi-independent cultural evolution (through the unique evolution of the human brain).  This evolutionary leap has allowed us to produce increasingly powerful technologies which in turn have provided a means for circumventing many natural selection pressures that our physical bodies would otherwise be unable to handle.

One of these technologies has been the selective breeding of plants and animals, a process often referred to as “artificial” selection (as opposed to “natural” selection) since human beings, rather than the natural selection pressures of the environment in general, have served as the selection pressure.  In the case of our discovery of artificial selection, by choosing which plants and animals to cultivate and raise, we basically catalyzed the selection process by providing a selection pressure based on the plant or animal traits that we’ve desired most.  By doing so, rather than the selection process taking thousands or even millions of years to produce what we have today (in terms of domesticated plants and animals), it took only a minute fraction of that time, since it was mediated through a consciously guided or teleological process, unlike natural selection, which operates on randomly differentiating traits leading to differential reproductive success (and thus new genomes and species) over time.

This second evolutionary leap (artificial selection, that is) has ultimately paved the way for civilization: it broadened our diet and thus our available options for food, and the resultant agriculture allowed us to increase our population density such that human collaboration, complex division of labor, and ultimately the means for creating new and increasingly complex technologies were made possible.  It is largely because of this evolutionary leap that we’ve been able to reach the current pinnacle of human evolution, the newest and perhaps our last evolutionary leap, or what I’ve previously referred to as “engineered selection”.

With artificial selection, we’ve been able to create new species of plants and animals with very unique and unprecedented traits, however we’ve been limited by the rate of mutations or other genomic differentiating mechanisms that must arise in order to create any new and desirable traits. With engineered selection, we can simply select or engineer the genomic sequences required to produce the desired traits, effectively allowing us to circumvent any genomic differentiation rate limitations and also allowing us instant access to every genomic possibility.

Genetic Engineering Progress & Applications

After a few decades of genetic engineering research, we’ve gained a number of capabilities including but not limited to: producing recombinant DNA, producing transgenic organisms, utilizing in vivo trans-species protein production, and even creating the world’s first synthetic life form (by adding a completely synthetic, human-constructed bacterial genome to a cell containing no DNA).  The plethora of potential applications for genetic engineering (as well as those applications currently in use) has continued to grow as scientists and other creative thinkers further discover the power and scope of areas such as mimetics (i.e. biomimetics), micro-organism domestication, nano-biomaterials, and many other inter-related niches.

Domestication of Genetically Engineered Micro and Macro-organisms

People have been genetically modifying plants and animals for the same reasons they’ve been artificially selecting them — in order to produce species with more desirable traits. Plants and animals have been genetically engineered to withstand harsher climates, resist harmful herbicides or pesticides (or produce their own pesticides), produce more food or calories per acre (or more nutritious food when all else is equal), etc.  Plants and animals have also been genetically modified for the purposes of “pharming”, where substances that aren’t normally produced by the plant or animal (e.g. pharmacological substances, vaccines, etc.) are expressed, extracted, and then purified.

One of the most compelling applications of genetic engineering within agriculture involves solving the “omnivore’s dilemma”, that is, the prospect of growing unconscious livestock by genetically inhibiting the development of certain parts of the brain so that the animal doesn’t experience any pain or suffering.  There have also been advancements made with in vitro meat, that is, producing cultured meat cells so that no actual animal is needed at all other than some starting cells taken painlessly from live animals (which are then placed into a culture medium to grow into larger quantities of meat).  It should be noted that this latter technique doesn’t actually require any genetic modification, although genetic modification may have merit in improving it.  The most important point here is that these methods should decrease the financial and environmental costs of eating meat, and will likely help to solve the ethical issues regarding the inhumane treatment of animals within agriculture.

We’ve now entered a new niche regarding the domestication of species.  As of a few decades ago, we began domesticating micro-organisms.  Micro-organisms have been modified and utilized to produce insulin for diabetics as well as other forms of medicine such as vaccines, human growth hormone, etc.  Certain forms of bacteria have also been genetically modified to turn cellulose and other plant material directly into hydrocarbon fuels.  This year (2014), E. coli bacteria have been genetically modified to turn glucose into pinene (a high-energy hydrocarbon used as a rocket fuel).  In 2013, researchers at the University of California, Davis, genetically engineered cyanobacteria (a.k.a. blue-green algae) by adding particular DNA sequences to its genome coding for specific enzymes, such that the bacteria can use sunlight and photosynthesis to turn CO2 into 2,3-butanediol (a chemical that can be used as a fuel, or to make paint, solvents, and plastics), thus producing another means of turning our overabundant carbon emissions back into fuel.

On a related note, there are also efforts underway to improve the efficiency of certain hydrocarbon-eating bacteria such as A. borkumensis in order to clean up oil spills even more effectively.  Imagine one day having the ability to use genetically engineered bacteria to directly convert carbon emissions back into mass-produced fuel, and, if the fuel spills during transport, also having the counterpart capability of cleaning it up most efficiently with another form of genetically engineered bacteria.  These capabilities are being further developed and are only the tip of the iceberg.

In theory, we should also be able to genetically engineer bacteria to decompose many other materials or waste products that ordinarily decompose extremely slowly.  If any of these waste products are hazardous, bacteria could be genetically engineered to break down or transform the waste products into safe and stable compounds.  With these types of solutions we can make many new materials and have a method in line for their proper disposal (if needed).  Additionally, by utilizing some techniques mentioned in the next section, we can also start making more novel materials that decompose using non-genetically-engineered mechanisms.

It is likely that genetically modified bacteria will continue to provide us with many new types of mass-produced chemicals and products. For those processes that do not work effectively (if at all) in bacterial (i.e. prokaryotic) cells, then eukaryotic cells such as yeast, insect cells, and mammalian cells can often be used as a viable option. All of these genetically engineered domesticated micro-organisms will likely be an invaluable complement to the increasing number of genetically modified plants and animals that are already being produced.

Mimetics

In the case of mimetics, scientists are discovering new ways of creating novel materials using a bottom-up approach at the nano-scale, utilizing some of the self-assembly techniques that natural selection has near-perfected over millions of years.  For example, mollusks form seashells with incredibly strong structural/mechanical properties: their DNA codes for the synthesis of specific proteins, and those proteins bond the raw materials of calcium and carbonate into alternating layers until a fully formed shell is produced.  Clams produce pearls through similar techniques.  We could potentially use the same DNA sequence in combination with a scaffold of our choosing such that a similar product is formed with unique geometries, or, through genetic engineering techniques, we could modify the DNA sequence so that it performs the same self-assembly with completely different materials (e.g. silicon, platinum, titanium, polymers, etc.).

By combining the capabilities of scaffolding as well as the production of unique genomic sequences, one can further increase the number of possible nanomaterials or nanostructures, although I’m confident that most if not all scaffolding needs could eventually be accomplished by the DNA sequence alone (much like the production of bone, exoskeleton, and other types of structural tissues in animals).  The same principles can be applied by looking at how silk is produced by spiders, how the cochlear hair cells are produced in mammals, etc.  Many of these materials are stronger, lighter, and more defect-free than some of the best human products ever engineered.  By mimicking and modifying these DNA-induced self-assembly techniques, we can produce entirely new materials with unprecedented properties.

If we realize that even the largest plants and animals use these same nano-scale assembly processes to build themselves, it isn’t hard to imagine using these genetic engineering techniques to effectively grow complete macro-scale consumer products.  This may sound incredibly unrealistic with our current capabilities, but imagine one day being able to grow finished products such as clothing, hardware, tools, or even a house.  There are already people working on these capabilities to some degree (for example using 3D printed scaffolding or other scaffolding means and having plant or animal tissue grow around it to form an environmentally integrated bio-structure).  If this is indeed realizable, then perhaps we could find a genetic sequence to produce almost anything we want, even a functional computer or other device.  If nature can use DNA and natural selection to produce macro-scale organisms with brains capable of pattern recognition, consciousness, and computation (and eventually the learned capability of genetic engineering in the case of the human brain), then it seems entirely reasonable that we could eventually engineer DNA sequences to produce things with at least that much complexity, if not far higher complexity, and using a much larger selection of materials.

Other advantages of using such an approach include the enormous energy savings gained by adopting the naturally selected, economically efficient process of self-assembly (including fewer changes in the forms of energy used, and thus less loss) and a reduction in product-specific manufacturing infrastructure.  That is, whereas we’ve typically made industrial-scale machines individually tailored to produce specific components which are later assembled into a final product, by using DNA (and the proteins it codes for) to do the work for us, we will no longer require nearly as much manufacturing capital, for the genetic engineering capital needed to produce any genetic sequence is far more versatile.

Transcending the Human Species

Perhaps the most important application of genetic engineering will be the modification of our own species.  Many of the world’s problems are caused by sudden environmental changes (many of them anthropogenic), and if we can change ourselves and/or other species biologically in order to adapt to these unexpected and sudden environmental changes (or to help prevent them altogether), then the severity of those problems can be reduced or eliminated.  In a sense, we would be selecting our own as well as other species by providing the proper genes to begin with, rather than relying on extremely slow genomic differentiation mechanisms and the greater rates of suffering and loss of life that natural selection normally follows.

Genetic Enhancement of Existing Features

With power over the genome, we may one day be able to genetically increase our life expectancy, for example, by modifying the DNA polymerase-γ enzyme in our mitochondria such that it makes fewer errors (i.e. mutations) during DNA replication, by genetically altering the telomeres in our nuclear DNA such that they can maintain their length and handle more mitotic divisions, or by finding other ways to preserve nuclear DNA.  If we also determine which genes lead to certain diseases (as well as any genes that help to prevent them), genetic engineering may be the key to extending the length of our lives, perhaps indefinitely.  It may also be the key to improving the quality of that extended life by replacing the techniques we currently use for health and wellness management (including pharmaceuticals) with perhaps the most efficacious form of preventative medicine imaginable.

If we can optimize our brain’s ability to perform neuronal regeneration, reconnection, rewiring, and/or re-weighting based on the genetic instructions that at least partially mediate these processes, this optimization should drastically improve our ability to learn by improving the synaptic encoding and consolidation processes involved in memory and by improving the combinatorial operations leading to higher conceptual complexity.  Thinking along these lines, by increasing the number of pattern recognition modules that develop in the neo-cortex, or by optimizing their configuration (perhaps by increasing the number of hierarchies), our general intelligence would increase as well and would be an excellent complement to an optimized memory.  It seems reasonable to assume that these types of cognitive changes will likely have dramatic effects on how we think and thus will likely affect our philosophical beliefs as well.  Religious beliefs are also likely to change as the psychological comforts provided by certain beliefs may no longer be as effective (if those comforts continue to exist at all), especially as our species continues to phase out non-naturalistic explanations and beliefs as a result of seeing the world from a more objective perspective.

If we are able to manipulate our genetic code in order to improve the mechanisms that underlie learning, then we should also be able to alter our innate abilities through genetic engineering.  For example, what if infants could walk immediately after birth (much like a newborn calf)?  What if infants had adequate motor skills to produce (at least some) spoken language much more quickly?  Infants normally have language acquisition mechanisms which allow them to eventually learn language comprehension and production, but this typically takes a lot of practice and requires their motor skills to catch up before they can utter a single word that they do in fact understand.  Circumventing the learning requirement and the motor skill developmental lag (at least to some degree) would be a phenomenal evolutionary advancement, and this type of innate enhancement could apply to a large number of different physical skills and abilities.

Since DNA ultimately controls the types of sensory receptors we have, we should eventually be able to optimize these as well.  For example, photoreceptors could be modified such that we would be able to see new frequencies of electromagnetic radiation (perhaps a more optimized range of frequencies, if not a larger range altogether).  Mechanoreceptors of all types could be modified, for example, to hear a different if not larger range of sound frequencies or to increase tactile sensitivity (i.e. touch).  Olfactory or gustatory receptors could also be modified in order to allow us to smell and taste previously undetectable chemicals.  Basically, all of our sensations could be genetically modified and, when combined with the aforementioned genetic modifications to the brain itself, this would allow us to have greater and more optimized dimensions of perception in our subjective experiences.

Genetic Enhancement of Novel Features

So far I’ve been discussing how we may be able to use genetic engineering to enhance features we already possess, but there’s no reason we can’t consider using the same techniques to add entirely new features to the human repertoire. For example, we could combine certain genes from other animals such that we can re-grow damaged limbs or organs, have gills to breathe underwater, have wings in order to fly, etc.  For that matter, we may even be able to combine certain genes from plants such that we can produce (at least some of) our own chemical energy from the sun, that is, create at least partially photosynthetic human beings.  It is certainly science fiction at the moment, but I wouldn’t discount the possibility of accomplishing this one day after considering all of the other hybrid and transgenic species we’ve created already, and after considering the possible precedent mentioned in the endosymbiotic theory (where an ancient organism may have “absorbed” another to produce energy for it, e.g. mitochondria and chloroplasts in eukaryotic cells).

Above and beyond these possibilities, we could also potentially create advanced cybernetic organisms.  What if we were able to integrate silicon-based electronic devices (or something more biologically compatible if needed) into our bodies such that the body grows or repairs some of these technologies using biological processes?  Perhaps if the body is given the proper diet (i.e. whatever materials are needed in the new technological “organ”) and has the proper genetic code such that the body can properly assimilate those materials to create entirely new “organs” with advanced technological features (e.g. wireless communication or wireless access to an internet database activated by particular thoughts or another physiological command cue), we may eventually be able to get rid of external interface hardware and peripherals altogether.  It is likely that electronic devices will first become integrated into our bodies through surgical implantation in order to work with our body’s current hardware (including the brain), but having the body actually grow and/or repair these devices using DNA instruction would be the next logical step of innovation if it is eventually feasible.

Malleable Human Nature

When people discuss complex issues such as social engineering, sustainability, crime-reduction, etc., it is often mentioned that there is a fundamental barrier between our current societal state and where we want or need to be, and this barrier is none other than human nature itself.  Many people in power have tried to change human behavior with brute force while operating under the false assumption that human beings are analogous to some kind of blank slate that can simply learn or be conditioned to behave in any way without limits. This denial of human nature (whether implicit or explicit) has led to a lot of needless suffering and has also led to the de-synchronization of biological and cultural evolution.

Humans often think that they can adapt to any cultural change, but we often lose sight of the detrimental power that technology and other cultural inventions and changes can have over our physiological and psychological well-being. In a nutshell, the speed of cultural evolution can often make us feel like a fish out of water, perhaps better suited to live in an environment closer to our early human ancestors.  Whatever the case, we must embrace human nature and realize that our efforts to improve society (or ourselves) will only have long term efficacy if we work with human nature rather than against it.  So what can we do if our biological evolution is out-of-sync with our cultural evolution?  And what can we do if we have no choice but to accept human nature, that is, our (often selfish) biologically-driven motivations, tendencies, etc.?  Once again, genetic engineering may provide a solution to many of these previously insoluble problems.  To put it simply, if we can change our genome as desired, then we may be able to not only synchronize our biological and cultural evolution, but also change human nature itself in the process.  This change could not only make us feel better adjusted to the modern cultural environment we’re living in, but it could also incline us to instinctually behave in ways that are more beneficial to each other and to the world as a whole.

It’s often said that we have selfish genes in some sense, that is, many if not all of our selfish behaviors (as well as instinctual behaviors in general) are a reflection of the strategy that genes implement in their vehicles (i.e. our bodies) in order for the genes to maintain themselves and reproduce.  That genes possess this kind of strategy does not require us to assume that they are conscious in any way or have actual goals per se, but rather that natural selection simply selects genes that code for mechanisms which best maintain and spread those very genes.  Natural selection tends toward effective self-replicators, and that’s why “selfish” genes (in large part) cause many of our behaviors.  Improving reproductive fitness and reproductive success has been the primary result of this strategy, and many of the behaviors and motivations that were most advantageous for accomplishing this are no longer compatible with modern culture, including the long-term goals and greater good that humans often strive for.

Humans no longer exclusively live under the law of the jungle or “survival of the fittest”, because our humanistic drives and their cultural reinforcements have expanded our horizons beyond simple self-preservation or a Machiavellian mentality.  Many humans have tried to propagate principles such as honesty, democracy, egalitarianism, immaterialism, sustainability, and altruism around the world, but these efforts are often hijacked by our often short-sighted sexual and survival-based instinctual motivations to gain sexual mates, power, property, a higher social status, etc.  Changing particular genes should also allow us to change these (now) disadvantageous aspects of human nature, and as a result this would completely change how we look at every problem we face.  No longer would we have to say “that solution won’t work because it goes against human nature”, or “the unfortunate events in human history tend to recur in one way or another because humans will always…”, but rather we could ask ourselves how we want or need to be and actually make it so by changing our human nature.  Indeed, if genetic engineering is used to accomplish this, history would no longer have to repeat itself in the ways that we abhor.  It goes without saying that a lot of our behavior can be changed for the better by an appropriate form of environmental conditioning, but for those behaviors that can’t be changed through conditioning, genetic engineering may be the key to success.

To Be or Not To Be?

It seems that we have been given a unique opportunity to use our ever-increasing wealth of experiential data and knowledge, combined with genetic engineering techniques, to engineer a social organism that is by far the best adapted to its environment.  Additionally, we may one day find ourselves living in a true global utopia, if the barriers of human nature and the de-synchronization of biological and cultural evolution are overcome, and genetic engineering may be the only way of achieving such a goal.  One extremely important issue that I haven’t mentioned until now is the set of ethical concerns regarding the continued use and development of genetic engineering technology.  There are obviously concerns over whether or not we should even be experimenting with this technology.  There are many reasonable arguments both for and against using this technology, but I think that as a species, we have been driven to manipulate our environment in any way that we are capable of, and this curiosity is a part of human nature itself.  Without genetic engineering, we can’t change any of the negative aspects of human nature but can only let natural selection run its course to modify our species slowly over time (for better or for worse).

If we do accept this technology, there are other concerns, such as the fact that there are corporations and interested parties that want to use genetic engineering primarily if not exclusively for profit (often at the expense of actual universal benefits for our species), and which implement questionable practices like patenting plant and animal food sources in a potentially monopolized future agricultural market.  Perhaps an even graver concern is the potential to patent genes that become a part of the human genome, and the (at least short-term) inequality that would ensue from the wealthier members of society being the primary recipients of genetic human enhancement.  Some people may also use genetic engineering to create new bio-warfare weaponry or find other violent or malicious applications.  Some of these practices could threaten certain democratic or other moral principles, and we need to be extremely cautious with how we as a society choose to implement and regulate this technology.  There are also numerous issues regarding how these technologies will affect the environment and various ecosystems, whether caused by people with admirable intentions or not.  So it is definitely prudent that we proceed with caution and get the public heavily involved with this cultural change so that our society can move forward as responsibly as possible.

As for the feasibility of the theoretical applications mentioned earlier, it will likely be computer simulation and computing power that catalyze the knowledge base and capability needed to realize many of these goals (by decoding the incredibly complex interactions between genes and the environment), and thus these will likely be the primary limiting factor.  Genetic engineering may also involve expanding the DNA components we have to work with, for example, by expanding our base-four system (i.e. four nucleotides to choose from) to a higher-base system through the use of other naturally occurring nucleotides or even the use of UBPs (i.e. “Unnatural Base Pairs”).  If we can do this while still maintaining low rates of base-pair mismatching and adequate genetic information processing rates, we may be able to utilize previously inaccessible capabilities by increasing the genetic information density of DNA.  If we can overcome some of the chemical natural selection barriers that were present during abiogenesis and the evolution of DNA (and RNA), and/or if we can change the very structure of DNA itself (as well as the proteins and enzymes that are required for its implementation), we may be able to produce an entirely new type of genetic information storage and processing system, potentially circumventing many of the limitations of DNA in general, and thus creating a vast array of new species (genetically coded by a different nucleic acid or other substance).  This type of “nucleic acid engineering”, if viable, may complement the genetic engineering we’re currently performing on DNA and help us to further accomplish some of the aforementioned goals and applications.
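To put a rough number on the information-density idea, the capacity of one position in a genetic alphabet grows with the logarithm of the number of available bases.  The sketch below is purely illustrative arithmetic (a six-letter alphabet is just a hypothetical example, not a claim about any particular UBP system):

```python
import math

def bits_per_base(alphabet_size: int) -> float:
    """Information capacity of one position in a genetic alphabet, in bits."""
    return math.log2(alphabet_size)

# Standard DNA: four nucleotides (A, C, G, T)
standard = bits_per_base(4)        # 2.0 bits per base

# Hypothetical six-letter alphabet (four natural bases plus one unnatural base pair)
expanded = bits_per_base(6)        # ~2.585 bits per base

# Bases needed to store the same 1,000 bits of information
print(round(1000 / standard))      # 500
print(round(1000 / expanded))      # 387
```

Even a single extra base pair shortens the sequence needed for a given amount of information by roughly 23%, which is the sense in which an expanded alphabet increases genetic information density.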

Lastly, while some of the theoretical applications of genetic engineering that I’ve presented in this post may not sound plausible at all to some, I think it’s extremely important and entirely reasonable (based on historical precedent) to avoid underestimating the capabilities of our species.  We may one day be able to transform ourselves into whatever species we desire, effectively taking us from transhumanism to some perpetual form of conscious evolution and speciation.  What I find most beautiful here is that the evolution of consciousness has actually led to a form of conscious evolution.  Hopefully our species will guide this evolution in ways that are most advantageous to us, and to the entire diversity of life on this planet.

Neuroscience Arms Race & Our Changing World View

Since at least the time of Hippocrates, people have recognized that the brain is the physical correlate of consciousness and thought.  Since then, the fields of psychology and neuroscience, along with several inter-related disciplines, have emerged.  Numerous advancements have been made within the field of neuroscience during the last decade or so, and in that same time frame there has also been an increased interest in the social, religious, philosophical, and moral implications that have precipitated from such a far-reaching field.  Certainly the medical knowledge we’ve obtained from the neurosciences has been the primary benefit of such research efforts, as we’ve learned quite a bit more about how the brain works and how it is structured, and the ongoing study of neuropathology has led to improvements in diagnosing and treating various mental illnesses.  However, it is the other side of neuroscience that I’d like to focus on in this post — the paradigm shift relating to how we are starting to see the world around us (including ourselves), and how this is affecting our goals as well as how we achieve them.

Paradigm Shift of Our World View

Aside from the medical knowledge we are obtaining from the neurosciences, we are also gaining new perspectives on what exactly the “mind” is.  We’ve come a long way in demonstrating that “mental” or “mind” states are correlated with physical brain states, and there is an ever-growing body of evidence which suggests that these mind states are in fact caused by these brain states.  It should come as no surprise then that all of our thoughts and behaviors are also caused by these physical brain states.  It is because of this scientific realization that society is currently undergoing an important paradigm shift in terms of our world view.

If all of our thoughts and behaviors are mediated by our physical brain states, then many everyday concepts such as thinking, learning, personality, and decision making can take on entirely new meanings.  To illustrate this point, I’d like to briefly mention the well-known “nature vs. nurture” debate.  The current consensus among scientists is that people (i.e. their thoughts and behavior) are ultimately products of both their genes and their environment.

Genes & Environment

From a neuroscientific perspective, the genetic component is accounted for by noting that genes have been shown to play a very large role in directing the initial brain wiring schema of an individual during embryological development and throughout gestation.  During this time, the brain is developing very basic instinctual behavioral “programs” which are physically constituted by vastly complex neural networks, and the body’s developing sensory organs and systems are also connected to particular groups of these neural networks.  These complex neural networks, which have presumably been naturally selected for because they benefit the survival of the individual, continue being constructed after gestation and throughout the entire ontogenetic development of the individual (albeit to lesser degrees over time).

As for the environmental component, this can be further split into two parts: the internal and the external environment.  The internal environment within the brain itself, including various chemical concentration gradients partly mediated by random Brownian motion, provides some gene expression constraints and works alongside the genetic instructions to help direct neuronal growth, migration, and connectivity.  The external environment, consisting of various sensory stimuli, seems to modify this neural construction by providing inputs which may cause the constituent neurons within these neural networks to change their signal strength, change their action potential threshold, and/or modify their connections with particular neurons (among other possible changes).
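The input-driven re-weighting described above can be caricatured with the classic Hebbian rule, in which a connection between two neurons strengthens when they are repeatedly active together.  This is only a toy sketch (the learning rate and activity values are made up for illustration), not a model of real synaptic plasticity:

```python
# Toy Hebbian update: repeated co-activation of a pre- and post-synaptic
# neuron strengthens the connection weight between them, a crude stand-in
# for the input-driven synaptic re-weighting described above.

def hebbian_update(weight: float, pre: float, post: float, lr: float = 0.1) -> float:
    """Increase the weight in proportion to correlated pre/post activity."""
    return weight + lr * pre * post

weight = 0.2
for _ in range(10):                      # ten co-activation events
    weight = hebbian_update(weight, pre=1.0, post=1.0)

print(round(weight, 2))                  # 1.2: the connection has strengthened
```

Real synapses also weaken, saturate, and compete with one another, but the basic point stands: connection strengths are a function of the inputs the network receives.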

Causal Constraints

This combination of genetic instructions and environmental interaction and input produces a conscious, thinking, and behaving being through a large number of ongoing and highly complex hardware changes.  It isn’t difficult to imagine why these insights from neuroscience might modify our conventional views of concepts such as thinking, learning, personality, and decision making.  Prior to these developments over the last few decades, the brain was largely seen as a sort of “black box”, with its internal milieu and functional properties remaining mysterious and inaccessible.  Before that, and for millennia, many people assumed that our thoughts and behaviors were self-caused or causa sui.  That is, people believed that they themselves (i.e. some causally free “consciousness”, or “soul”, etc.) caused their own thoughts and behavior, as opposed to those thoughts and behaviors being ultimately caused by physical processes (e.g. neuronal activity, chemical reactions, etc.).

Neuroscience (as well as biochemistry and its underlying physics) has shed a lot of light on this long-held assumption and, as it stands, the evidence has shown this prior assumption to be false.  The brain is ultimately governed by the laws of physics, since no chemical reaction or neural event that physically produces our thoughts, choices, and behaviors has ever been shown to be causally free from these physical constraints.  I will mention that quantum uncertainty or quantum “randomness” (if ontologically random) does provide some possible freedom from physical determinism.  However, these findings from quantum physics do not provide any support for self-caused thoughts or behaviors.  Rather, they merely show that those physically constrained thoughts and behaviors may never be completely predictable by physical laws no matter how much data is obtained.  In other words, our thoughts and behaviors are produced by highly predictable (although not necessarily completely predictable) physical laws and constraints, as well as some possibly random causal factors.

As a result of these physical causal constraints, the conventional perspective of an individual having classical free will has been shattered.  Our traditional views of human attributes including morality, choice, ideology, and even individualism are continuing to change markedly.  Not surprisingly, there are many people uncomfortable with these scientific discoveries, including members of various religious and ideological groups that are largely based upon, and thus depend on, the very presupposition of precepts such as classical free will and moral responsibility.  The evidence accumulating from the neurosciences is in fact showing that while people are causally responsible for their thoughts, choices, and behavior (i.e. an individual’s thoughts and subsequent behavior are constituents of a causal chain of events), they are not morally responsible in the sense that they could have chosen to think or behave any differently than they did, for their thoughts and behavior are ultimately governed by physically constrained neural processes.

New World View

Now I’d like to return to what I mentioned earlier and consider how these insights from neuroscience may be drastically modifying how we look at concepts such as thinking, learning, personality, and decision making.  If our brain is operating via these neural network dynamics, then conscious thought appears to be produced by a particular subset of these neural network configurations and processes.  So as we continue to learn how to more directly control or alter these neural network arrangements and processes (above and beyond simply applying electrical potentials to certain neural regions in order to bring memories or other forms of imagery into consciousness, as we’ve done in the past), we should be able to control thought generation from a more “bottom-up” approach.  Neuroscience is definitely heading in this direction, although there is a lot of work to be done before we have any considerable knowledge of and control over such processes.

Likewise, learning seems to consist of a certain type of neural network modification (involving memory), leading to changes in causal pattern recognition (among other things) which result in our ability to more easily achieve our goals over time.  We’ve typically thought of learning as the successful input, retention, and recall of new information, and we have been achieving this “learning” process through the input of environmental stimuli via our sensory organs and systems.  In the future, as with the aforementioned bottom-up thought generation, it may be possible to physically modify our neural networks to directly implant memories and causal pattern recognition information in order to “learn” without any actual sensory input.  We may also eventually be able to “upload” information in a way that bypasses the typical sensory pathways, thus potentially allowing us to catalyze the learning process in unprecedented ways.

If we are one day able to more directly control the neural configurations and processes that lead to specific thoughts as well as learned information, then there is no reason that we won’t be able to modify our personalities, our decision-making abilities and “algorithms”, etc.  In a nutshell, we may be able to modify any aspect of “who” we are in extraordinary ways (whether this is a “good” or “bad” thing is another issue entirely).  As we come to learn more about the genetic components of these neural processes, we may also be able to use various genetic engineering techniques to assist with the neural modifications required to achieve these goals.  The bottom line here is that people are products of their genes and environment, and by manipulating both of those causal constraints in more direct ways (e.g. through the use of neuroscientific techniques), we may be able to achieve previously unattainable abilities, perhaps in a relatively minuscule amount of time.  It goes without saying that these methods will also significantly affect our evolutionary course as a species, allowing us to enter new landscapes through our substantially enhanced ability to adapt.  This may be realized by finding ways to improve our intelligence, memory, or other cognitive faculties, effectively giving us the ability to engineer or re-engineer our brains as desired.

Neuroscience Arms Race

We can see that increasing our knowledge and capabilities within the neurosciences has the potential for drastic societal changes, some of which are already starting to be realized.  The impact that these fields will have on how we approach the problem of criminal, violent, or otherwise undesirable behavior cannot be overstated.  Trying to correct these issues by focusing our efforts on the neural or cognitive substrates that underlie them, as opposed to using the less direct and more external means (e.g. social engineering methods) that we’ve been using thus far, may lead to much less expensive solutions as well as solutions that may be realized much, much more quickly.

As with any scientific discovery or subsequent technology produced from it, neuroscience has the power to bestow on us both benefits and disadvantages.  I’m reminded of the ground-breaking efforts made within nuclear physics several decades ago, whereby physicists not only gained precious information about subatomic particles (and their binding energies) but also learned how to release enormous amounts of energy from nuclear fusion and fission reactions.  It wasn’t long after these breakthrough discoveries were made before they were used by others to create the first atomic bombs.  Likewise, while our increasing knowledge within neuroscience has the power to help society improve by optimizing our brain function and behavior, it can also be used by various entities to manipulate the populace for unethical reasons.

For example, despite the large number of free market proponents who claim that the economy need not be regulated by anything other than rational consumers and their choices of goods and services, corporations have clearly increased their use of marketing strategies that take advantage of many humans’ irrational tendencies (whether it is “buy one get one free” offers, “sales” on items that have artificially raised prices, etc.).  Politicians and other leaders have been using similar tactics by taking advantage of voters’ emotional vulnerabilities on certain controversial issues that serve as nothing more than an ideological distraction in order to reduce or eliminate any awareness or rational analysis of the more pressing issues.

There are already research and development efforts being made by these various entities in order to take advantage of these findings within neuroscience such that they can have greater influence over people’s decisions (whether it relates to consumers’ purchases, votes, etc.).  To give an example of some of these R&D efforts, it is believed that MRI (Magnetic Resonance Imaging) or fMRI (functional Magnetic Resonance Imaging) brain scans may eventually be able to show useful details about a person’s personality or their innate or conditioned tendencies (including compulsive or addictive tendencies, preferences for certain foods or behaviors, etc.).  This kind of capability (if realized) would allow marketers to maximize how many dollars they can squeeze out of each consumer by optimizing their choices of goods and services and how they are advertised. We have already seen how purchases made on the internet, if tracked, begin to personalize the advertisements that we see during our online experience (e.g. if you buy fishing gear online, you may subsequently notice more advertisements and pop-ups for fishing related goods and services).  If possible, the information found using these types of “brain probing” methods could be applied to other areas, including that of political decision making.

While these methods derived from the neurosciences may be beneficial in some cases, for instance by allowing the consumer more automated access to products that they may need or want (which will likely be a selling point used by these corporations for obtaining consumer approval of such methods), they will also exacerbate unsustainable consumption and other personally or societally destructive tendencies, and they are likely to continue to reduce (or eliminate) whatever rational decision-making capabilities we still have left.

Final Thoughts

As we can see, neuroscience has the potential to (and is already starting to) completely change the way we look at the world.  Further advancements in these fields will likely redefine many of our goals as well as how to achieve them.  It may also allow us to solve many problems that we face as a species, far beyond simply curing mental illnesses or ailments.  The main question that comes to mind is:  Who will win the neuroscience arms race?  Will it be those humanitarians, scientists, and medical professionals that are striving to accumulate knowledge in order to help solve the problems of individuals and societies as well as to increase their quality of life?  Or will it be the entities that are trying to accumulate similar knowledge in order to take advantage of human weaknesses for the purposes of gaining wealth and power, thus exacerbating the problems we currently face?

The Co-Evolution of Language and Complex Thought

Language appears to be the most profound feature that has arisen during the evolution of the human mind.  It has led to incredible thought complexity, while also providing the foundation for the simplest thoughts imaginable.  Many of us may wonder how exactly language is related to thought, and also how the evolution of language has affected the evolution of thought complexity.  In this post, I plan to discuss what I believe to be some evolutionary aspects of psycholinguistics.

Mental Languages

It is clear that humans think in some form of language, whether it is accomplished as an interior monologue using our native spoken language and/or some form of what many call “mentalese” (i.e. a means of thinking about concepts and propositions without the use of words).  Our thoughts are likely accomplished by a combination of these two “types” of language.  The fact that young infants and aphasics (for example) are able to think implies that not all thoughts are accomplished through a spoken language.  It is also likely that the aforementioned “mentalese” is some innate form of mental symbolic representation that is primary in some sense, supported by the fact that it appears to be necessary in order for spoken language to develop or exist at all.  Considering that words and sentences have no intrinsic semantic content or value (at least in non-iconic forms) illustrates that this “mentalese” is in fact a prerequisite for understanding or assigning the meaning of words and sentences.  Complex words can always be defined by a number of less complex words, but at some point a limit is reached whereby the simplest units of definition are composed of seemingly irreducible concepts and propositions.  Furthermore, those irreducible concepts and propositions do not require any words to have meaning (for if they did, we would have an infinite regress of words being defined by words being defined by words, ad infinitum).  The only time we theoretically require symbolic representation of semantic content using words is if the concepts are to be easily (if at all) communicated to others.

While some form of mentalese is likely the foundation or even ultimate form of thought, it is my contention that communicable language has likely had a considerable impact on the evolution of the human mind — not only in the most trivial or obvious way whereby communicated words affect our thoughts (e.g. through inspiration, imagination, and/or reflection of new knowledge or perspectives), but also by serving as a secondary multidimensional medium for symbolic representation. That is, spoken language (as well as its subsequent allotrope, written language) has provided a form of combinatorial leverage somewhat independent of (although working in harmony with) the mental or cognitive faculties that innately exist for thought.

To be sure, spoken language has likely co-evolved with our mentalese, as they seem to affect one another in various ways.  As new types or combinations of propositions and concepts are discovered, the spoken language has to adapt in order to make those new propositions and concepts communicable to others.  What interests me more however, is how communicable language (spoken or written) has affected the evolution of thought complexity itself.

Communicable Language and Thought Complexity

Words and sentences, which primarily developed in order to communicate instances of our mental language to others, have also undoubtedly provided a secondary multidimensional medium for symbolic representation.  For example, when we use words, we are able to compress a large amount of information (i.e. many concepts and propositions) into small tokens with varying densities.  This type of compression has provided a way to maximize our use of short-term and long-term memory in order for more complex thoughts and mental capabilities to develop (whether that increase in complexity is defined as longer strings of concepts or propositions, or otherwise).

When we think of a sentence to ourselves, we end up utilizing a phonological/auditory loop, whereby we can better handle and organize information at any single moment by internally “hearing” it.  We can also visualize the words in multiple ways, including what the mouth movements of people speaking those words would look like (and we can use our tactile and/or motor memory to mentally simulate how our mouth feels when these words are spoken), and if a written form of the communicable language exists, we can actually visualize the words as they would appear in their written form (as well as use the aforementioned tactile/motor memory to mentally simulate how it feels to write those words).  On top of this, we can often visualize each glyph in multiple formats (i.e. different sizes, shapes, fonts, etc.).  This has provided a multidimensional memory tool, because it serves to represent the semantic information in a way that our brain can perceive and associate with multiple senses (in this case through our auditory, visual, and somatosensory cortices).  In some cases, when a particular written language uses iconic glyphs (as opposed to arbitrary symbols), the user can also visualize the concept represented by the symbol in an idealized fashion.  Associating information with multiple cognitive faculties or perceptual systems means that more neural network patterns of the brain will be involved with the attainment, retention, and recall of that information.  For those of us who have successfully used various mnemonic devices and other memory-enhancing “tricks”, we can clearly see the efficacy and importance of communicable language and its relationship to how we think about and combine various concepts and propositions.

By enhancing our memory, communicable language has served as an epistemic catalyst, allowing us to build upon our previous knowledge in ways that would have likely been impossible without said language.  Once written language was developed, we were no longer limited by our own short-term and long-term memory, for we had a way of recording as much information as possible, and this also allowed us to better formulate new ideas and consider thoughts that would have otherwise been too complex to mentally visualize or keep track of.  Mathematics, for example, exponentially increased in complexity once we were able to represent the relationships between variables in a written form.  While previously we would have been limited by our short-term and long-term memory, written language allowed us to eventually formulate incredibly long (sometimes painfully long) mathematical expressions.  Once written language was converted further into an electro-mechanical language (i.e. through the use of computers), our “writing” mediums, information acquisition mechanisms, and pattern recognition capabilities were further aided and enhanced exponentially, thus providing yet another platform for an increased epistemic or cognitive “breathing space”.  If our brains underwent particular mutations after communicable language evolved, those mutations may have provided a way to ratchet our way into entirely new cognitive niches or capabilities.  That is, with communicable language providing us with new strings of concepts and propositions, there may have been an unprecedented natural selection pressure or opportunity (if an advantageous brain mutation accompanied this new cognitive input) for our brain to obtain an entirely new and possibly more complex fundamental concept or way of thinking.

Summary

It seems evident to me that communicable language, once it had developed, served as an extremely important epistemic catalyst and multidimensional cognitive tool that likely had a great influence on the evolution of the human brain.  While some form of mentalese was likely a prerequisite and precursor to any subsequent forms of communicable language, the cognitive “breathing space” that communicable language provided seems to have had a marked impact on the evolution of human thought complexity, and on the amount of knowledge that we’ve been able to obtain from the world around us.  I have no doubt that the current external linguistic tools we use (i.e. written and electronic forms of handling information) will continue to significantly alter the ongoing evolution of the human mind.  Our biological forms of memory will likely adapt in order to be economically optimized and better work with those external media.  Likewise, our increasing access to new types of information may have provided (and may continue to provide) a natural selective pressure or opportunity for our brains to evolve in order to think about entirely new and potentially more complex concepts, thereby periodically increasing the lexicon or conceptual database of our “mentalese” (assuming that those new concepts provide a survival/reproductive advantage).

Religion: Psychology, Evolution, and Socio-political Aspects


Religion is such a strong driving force in most (if not all) cultures as it significantly affects how people behave and how they look at the world around them.  It’s interesting to see that so many religions share certain common elements, and it seems likely that these common elements arose from several factors including some psychological similarities between human beings.  Much like Carl Jung’s idea of a “collective unconscious”, humans likely share certain psychological tendencies and this would help to explain the religious commonalities that have precipitated over time.  It seems plausible that some evolutionary mechanisms, including natural selection and also the “evolution” of certain social/political structures, also played a role in establishing some of these religious commonalities.  I’d like to discuss some of my thoughts on certain religious beliefs including what I believe to be some important psychological, social, political, and evolutionary factors that have likely influenced the formation, acceptance, and ultimate success of religion as well as some common religious beliefs.

Fear of Death

The fear of death is probably one of the largest forces driving many religious beliefs.  This fear of death seems to exist for several reasons.  For one, the fear of death may be (at least partly) an evolutionary by-product of our biological imperative to survive.  We already perform involuntary physical actions instinctually in order to survive (e.g. fight-or-flight response, etc.).  Having an emotional element (such as fear) combined with our human intellect and self-awareness can drive us to survive in less autonomous ways, thus providing an even greater evolutionary advantage for natural selection.  For example, many people have been driven to circumvent death through scientific advancements.  Another factor to consider is that the fear of death may largely be a fear of the unknown or unfamiliar (related to the fear of change).  It shouldn’t be surprising then that a religion offering ways to appease this fear would become successful.

Could it be that our biological imperative to survive, coupled with the logical realization that we are mortal, has catalyzed a religious means for some form of cognitive dissonance reduction?  Humans that are in denial about (or are at least uncomfortable with) their inevitable death will likely be drawn towards religious beliefs that circumvent this inevitability with some form of spiritual eternal life or immortality.  Not only can this provide a means of circumventing mortality (perhaps by transcending the biological imperative with a spiritual version), but it can also reduce or eliminate the unknown aspects that contribute to the fear of death, depending on the after-death specifics outlined by the religion.

A strange irony exists regarding what I call “spiritual imperatives” and I think it is worth mentioning.  If a religion professes that one’s ultimate goal should be preparation for the after-life (or some apocalyptic scenario), then adherents to such a doctrine may end up sacrificing their biological imperative (or make it a lower priority) in favor of some spiritual imperative.  That is, they may start to care less about their physical survival or quality of life in the interest of attaining what they believe to be spiritual survival.  In doing so, they may be sacrificing or de-prioritizing the very biological imperative that likely catalyzed the formation of their spiritual imperative in the first place.  So as strange as it may be, the fear of death may lead to some religious doctrines that actually hasten one’s inevitable death.

Morality, Justice, and Manipulation

Morality seems to be deeply ingrained in our very nature, and as a result we can see a universal implementation of moral structures in human societies.  It seems likely that this deeply ingrained sense of morality, much like many other innate traits shared by the human race, is a result of natural selection in the ongoing evolution of our species.  Our sense of morality has driven many behaviors (though not always beneficial ones) that tend to increase the survival of the individual.  For example, the golden rule (a principle that may even serve as a sort of universal moral creed) serves to benefit every individual by encouraging cooperation and altruism at the expense of selfish motives.  Just as some individual cells eventually evolved to become cooperative multi-cellular organisms (in order to gain mutual benefits in a “non-zero sum” game), so have other species (including human beings) evolved to cooperate with one another to increase mutual benefits including that of survival (John Maynard Smith and other biologists have shared this view of how evolution can lead to greater degrees of cooperation).  A sense of morality helps to reinforce this cooperation.  Evolution aside, establishing some kind of moral framework will naturally help to maximize what is deemed to be desirable behavior.  Religion has been an extremely effective means of accomplishing this goal.  First of all, religions tend to define morality in very specific ways.  Religion has also utilized fairly effective incentives and motivations for the masses to behave in ways desired by the society (or by its leaders).

Many religions profess a form of moral absolutism, where moral values are seen as objective, unquestionable, and often ordained by the authority of a god.  This makes a religion very attractive and effective by simplifying the moral structure of the society and backing it up with the authority of a deity.  If the rules are believed to be given by the authority of a deity, then there will be few (if any) people willing to question them, and as a result there will be a much greater level of obedience.  The alternative, moral relativism, is more difficult to apply to a society’s dynamic, as the behavioral goals in a morally relativistic society may not be very stable or well-defined, even if moral relativism carries with it the benefits of religious or philosophical tolerance as well as open-mindedness.  Thus, moral absolutism is more likely to lead to productive societies, which may help to explain why moral absolutism has been such a successful religious meme (as well as the fact that morality in general seems to be a universal part of human nature).

Though I’m a moral realist in a strict sense, since I believe that there are objective moral facts, I’d also like to stipulate that I’m a moral relativist in the sense that any objective moral facts that exist are dependent on a person’s biology, psychology, and how those affect one’s ultimate goals for a satisfying and fulfilling life.  Since these factors may vary somewhat across a species, and certainly vary across different species, morals are ultimately relative to those variances (if any exist).

In any case, I can appreciate why most people are drawn away from relativism (in any form).  It is difficult for most people to think about reality as consisting of elements that aren’t simply black-and-white.  After all, we are used to categorizing the world around us — fracturing it into finite, manageable, and well-defined parts that we can deal with and understand.  We often forget that we are subjectively experiencing the world around us, and that our individual frames of reference and perspectives can be quite different from person to person.  Relativism just isn’t very compatible with the common human illusion of seeing the world objectively, whether it is how we look at the physical world, language, our moral values, etc.

As for moral incentives, religions often imply that there will be some type of reward for the adherent and/or some type of punishment for the deviants.  Naturally, anybody introduced to the religion (i.e. potential converts) will weigh the potential risks and benefits, with some people implementing Pascal’s wager and the like, likely leading to a larger number of followers over time.  You will also have established religious members that adhere to the specific rules within the religion based on the same moral incentives.  That is, the moral incentives put into place (i.e. a punishment-reward system) can serve both to obtain religious members in the first place and to ensure that those members maintain a high level of obedience within the religion.  Of these converts and well-established followers, there will likely be a mixture of those primarily motivated by the fear of punishment and those primarily motivated by the desire for a reward.  Psychoanalysis aside, it wouldn’t be surprising if, by briefly examining one’s behavior and personality, one could ascertain an individual’s primary religious motivations (for their conversion and/or subsequent religious obedience).  It is likely, however, that most people would fail to see these motivations at work, as they would prefer to think of their religious affiliations as the result of some revelation of truth.
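The decision logic behind Pascal’s wager is simply an expected-value comparison.  A minimal sketch of the underlying arithmetic (using stylized, hypothetical payoffs rather than anything Pascal himself wrote down) might look like:

```latex
% Let p > 0 be the probability one assigns to the religion's claims being true,
% and c > 0 the finite cost of religious practice (time, sacrificed experiences).
\begin{align*}
E[\text{believe}]     &= p \cdot (+\infty) + (1 - p)(-c) = +\infty \\
E[\text{not believe}] &= p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty
\end{align*}
% For any nonzero p, the infinite reward dominates every finite cost, which is
% why the wager can sway potential converts regardless of how improbable they
% judge the religion's claims to be.
```

The asymmetry is the whole trick: as long as the potential convert assigns the claims any nonzero probability at all, the infinite stakes swamp every finite consideration.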

The divine authorization and punishment-reward system within many religions can also provide a benefit to those that desire power and the manipulation of the populace.  If those in power desire an effective way to control the populace, they can create a religious structure with rules, morals, goals (including wars and conquests), etc., that benefit their agenda and then convince others that they are divinely authorized.  As long as the populace is convinced that the rules, morals, and goals are of a divine source, they will be more likely to comply with them.  Clearly this effect will be further amplified if a divine punishment-reward system is believed to exist.

One last point I’d like to make regarding morality involves the desire for divine justice.  People no doubt take comfort in the thought of everything being fair and orderly (from their perspective) in the long run, regardless of whether or not any unfairness presents itself during their lifetime.  It is much less comforting to accept that some people will do whatever they want and may die without ever receiving what one believes to be a just consequence, and/or that one has sacrificed many enjoyable human experiences in the interest of maintaining their religious requirements with potentially no long-term (i.e. after-death) return for their efforts.  The idea of an absolute justice being implemented after death definitely helps reinforce religious obedience in a world that has imperfect and subjective views (as well as implementations) of justice.

Desire for Free Will

Another common religious meme (related to the aforementioned moral frameworks) is the belief in classical free will.  If people practicing a particular religion are taught that they will be rewarded or punished for their actions, then it is logical for them to assume that they have free will over their actions — otherwise their moral responsibility would be non-existent and any divinely bestowed consequences incurred would be unjustified, meaningless, and futile.  So, in these types of religions, it is assumed that people should be able to make free choices that are not influenced or constrained by factors such as: genetics, any behavioral conditioning environment, any deterministic causal chain, or any random course of events for that matter.  That is, everyone’s behavior should be causa sui.  This way, it is the individual that is directly responsible for their behavior and ultimate fate rather than any factors outside of the individual’s control.

While the sciences have shown a plethora of evidence negating the existence of classical free will, many people continue to believe that free will exists.  It seems that people are naturally driven to believe that they have free will for a few reasons.  For one, the belief in free will is consistent with the illusion of free will that we consciously experience.  We do not feel that there is something or someone else in control of our fate (due to the principles of priority, consistency, and exclusivity as explained in Wegner’s theory of apparent mental causation), and so we have no immediate reason to believe that free will doesn’t exist.  It certainly feels like we have free will, even though the mechanistic physical laws of nature (whether deterministic or indeterministic) imply that we do not.  Second, from a deeper psychological perspective, if one believes in moral responsibility, has feelings of pride or shame for their actions, etc., the belief in free will is naturally reinforced.  People want to believe that they are in control because it better justifies the aforementioned punishment-reward system of both society and many religions.

Now granted, even if all people agreed that free will was non-existent (most currently assume we have it), society’s system of legislation and law enforcement wouldn’t likely change much, if at all.  Criminals would likely continue to be punished or detained for the protection of the majority of society because, pragmatically speaking, law enforcement has proven effective at providing safety, deterring crime, and so forth.  However, if people universally accepted that free will was non-existent, it would likely change how they view the punishment-reward system of society and religion.  People would probably focus more on the underlying genetic causes and conditioning environment that led to the undesirable behavior, rather than falsely viewing the individual as inherently bad or as someone who made poor choices that could have been made differently.

If someone feels that they have a lot to gain from a particular religion (or has invested so much of themselves into the religion already), and free will is a philosophical requirement for that particular religion, then they will likely find a way to rationalize the existence of free will (or rationalize any other religious assumption), despite the strong evidence against it (or lack of evidence in support of it).  There are even a few religions that simultaneously profess the existence of free will as well as the existence of an omniscient god that has complete knowledge of the future — despite the logical incompatibility of these two propositions.  Clearly, the desire for free will (whether conscious and/or unconscious) is stronger than most people realize.

Also, it seems that there is a general human desire for one’s life to have meaning and purpose, and perhaps some people feel that having free will over their actions is the only way to give their life meaning and purpose, as opposed to their life’s course being pre-determined or random.

Anthropocentrism & Purpose

Since religions are a product of human beings, it is not surprising to see that many of them have some anthropocentric purpose or element.  There seems to be a tendency for humans to assume that they are more important than anything else on this planet (or anything else in the universe for that matter).  This assumption may be fueled by the fact that human intelligence has brought us to the top of the food chain and has allowed us to manipulate our environment in ways that seem relatively extraordinary.  Humans certainly recognize this status and some may see it as necessarily divinely ordained or at least special in some way.  This helps to answer the age-old philosophical question: What is the meaning of life and/or why are we here?

By raising the value of human life over all other animals, religion can serve to separate humans from the rest of the animal kingdom, and thus separate humans from any animalistic traits that we dislike about ourselves.  Anthropocentric views have also been used to endorse otherwise questionable behavior that humans may choose to employ on the rest of nature around them.  On the flip side, anthropocentrism can in fact lead to a humanistic drive or feeling of human responsibility to make the world a better place for many different creatures.  It seems, however, that the most powerful religions have often endorsed a human domination of the world and environment around them.  This selfish drive is more in line with the rest of the animal kingdom, as every animal fights to survive, flourish, and ultimately do what it believes best serves its interests.  Either way, elevating human importance can provide many with a sense of purpose, regardless of what they think that purpose is.  This sense of purpose can be important, especially for those that recognize how short our human history has been relative to the history of all life on Earth, and also how relatively insignificant our planet is in such an unfathomably large universe.  Giving humans a special purpose can also help those that are uncomfortable with the idea of living in such a mechanistic world.

Desire for Protection

Certainly people are going to feel more secure if they believe that there is someone or something that is always protecting them.  Whether or not we have people in our lives that protect us in one way or another, nothing can compare to a divine protector.  It is certainly possible that this desire for protection is an artifact of the maternal-child dynamic from one’s earliest years of life, thus driving us to seek out similar comforts and securities.  Generally speaking however, the desire for protection is yet another facet of the biological imperative to survive.  Either way, the desire for some form of protection has likely played a role in religious constructs.

If religious members fail to receive any obvious protection or safety in specific cases (i.e. if they are harmed in some way), it is often the case that many find a way to reconcile this actuality by conveniently believing that whatever happens is ultimately governed by some god’s will or plan.  This way the comforts of believing in a protective, benevolent, or loving god are not jeopardized in any circumstance.  This is a good example of cognitive dissonance reduction being accomplished through theological rationalization.  That is, people may need a special combination of beliefs (which may evolve over time) in order to reconcile their religious and theological presuppositions with one another or with reality.

Group Dynamics

Another form of protection (and an evident form at that) offered by religious membership is that which results from group formation and dynamics.  Specifically, I am referring to the benefits of both protection and memetic reinforcement by the rest of the group.  From an evolutionary perspective, we can see that an individual will tend to have a greater survival advantage if they are a member of a cooperative group (as I mentioned previously in the section titled: “Morality, Justice, and Manipulation”).  For this reason and many others, people will often try to join or form groups.  There is always greater power in large numbers, and so even if certain religious claims or elements are difficult to accept, many people will instinctually flock toward the group and its example because it is safer than being alone and more vulnerable.  After joining a group (or perhaps in order to join the group) many may even find themselves behaving in ways that violate their own previously self-ascribed values.  Group dynamics and tendencies can be quite powerful indeed.

After a religion becomes well established and gains enough mass and momentum, people increasingly gravitate toward its power and influence even if that requires them to significantly modify their behavior.  In fact, if the religion gains enough influence and power over a culture or society, there may be little (if any) freedom to refrain from practicing the religion anyway, so even if people aren’t drawn to a popular religion, they may be forced into it.

So as we can see, group dynamics have likely influenced religion in multiple ways.  The memetic reinforcement that groups provide has promoted the success and perpetuation of particular religious memes (regardless of what those particular memes are).  There also seems to be a critical mass component, whereby after a religion gains enough mass and momentum, it is significantly more difficult for it to subside over time.  Thus, many religions that have become successful have done so by simply reaching some critical mass.

God of the gaps

Another reason that many religions or religious memes have been successful has been a lack of knowledge about nature.  That is, at some point in the past there arose a “god of the gaps” mentality whereby unsatisfactory, insufficient, or non-existent naturalistic explanations led to deistic or theistic presuppositions.  We’ve seen that for a large period in history, polytheism was quite popular, as people were ascribing multiple gods to explain a multitude of phenomena.  Eventually some monotheistic religions precipitated, but they merely replaced the multiple “gods of the gaps” with one single “God of the gaps”.  This consolidation of gods may have resulted (at least in part) from an application of Occam’s razor, as well as from a desire to differentiate new religions and their respective doctrines from their polytheistic predecessors.  As science and empiricism continued to develop further in the wake of these religious world views, the phenomena previously ascribed to a god (or to many gods) became increasingly explainable by predictable, mechanistic, natural laws.  By applying Occam’s razor one last time, science and empiricism have effectively been eliminating the final “God of the gaps”.

The psychological, social, and political benefits given by various religious constructs (including but not limited to those I’ve mentioned within this post) had likely already set a precedent and established a level of momentum that would continue to impede the acceptance of scientific explanations — even up to this day.  This may help to explain the prevalence of supernatural or miraculous religious beliefs despite their incompatibility with science and empiricism.  Once the most powerful religions gained traction, supernatural beliefs were not abandoned in the wake of scientific progress; instead, it was science that was initially censored and hindered.  Eventually, science and religion began to co-exist more easily, but in order for them to be at all reconciled with one another, many religious interpretations or explanations were modified accordingly (or the religious followers simply continued to ignore science).  Belief can be extremely powerful — so powerful in fact that even if a proposition isn’t actually true, if a person believes it to be true strongly enough, it can become a reality for that person.  In some of these cases, it doesn’t matter if there is an overwhelming amount of evidence to refute the belief, for that evidence will be ignored if it does not corroborate the believer’s artificial reality.

Another “God of the gaps” example that still perpetuates many religious beliefs is the mis-attributed power of prayer.  Prayer can indeed be effective for healing or helping to heal some ailments (for example), but science has shown (and is continuing to show) that this is nothing more than a placebo effect.  To give just one example, several studies on heart patients demonstrated that prayer only affected their recovery when the patients knew that they were being prayed for.  This further illustrates how the “God of the gaps” argument has never been very strong, and it is only shrinking with every new discovery made in science.  Nevertheless, even as evidence accumulates showing that a religious person’s notions are incorrect, there are psychological barriers in the brain that keep one from accepting that new information.  In the case of prayer just mentioned, a person who believes in prayer will exhibit a confirmation bias, remembering the prayers that are “answered” and forgetting the prayers that are not (regardless of what is being prayed for).
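The selective-memory arithmetic behind this bias can be illustrated with a toy simulation (all numbers here are hypothetical, chosen purely for illustration): suppose prayed-for outcomes occur at a fixed base rate regardless of the praying, but the believer only remembers a fraction of the misses.

```python
import random

random.seed(42)

# Hypothetical toy model: prayers "succeed" at the base rate of spontaneous
# recovery, independent of praying. A believer remembers every hit but only
# a fraction of the misses.
BASE_RATE = 0.3      # chance the prayed-for outcome happens anyway
MISS_RECALL = 0.2    # fraction of unanswered prayers actually remembered
N = 10_000           # number of prayers simulated

remembered_hits = 0
remembered_misses = 0
for _ in range(N):
    answered = random.random() < BASE_RATE
    if answered:
        remembered_hits += 1          # hits are always remembered
    elif random.random() < MISS_RECALL:
        remembered_misses += 1        # most misses are simply forgotten

# The success rate as it appears in the believer's memory
perceived = remembered_hits / (remembered_hits + remembered_misses)
print(f"true success rate:      {BASE_RATE:.0%}")
print(f"perceived success rate: {perceived:.1%}")  # well above the base rate
```

Even though nothing in the model gives prayer any causal power, the remembered sample is heavily skewed toward hits, so the believer’s subjective success rate ends up far higher than the true base rate.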

In other cases, if one chooses to actually consider any refutative evidence, it can become extremely difficult if not impossible for one to reconcile certain religious beliefs with reality.  However, if it is psychologically easier for a person to modify their religious beliefs (even in some radical way) rather than abandoning their religion altogether, they will likely do so.  It is clear how powerful these religious driving factors are when we see people either blatantly ignoring reason and the senses and/or adjusting their religion or theology in order to reconcile their beliefs with reality such that they can maintain the comfort and security of their deeply invested religious convictions.

It should be noted that the “god(s) of the gaps” mentality that many people share may result when the human mind asks certain questions for which it doesn’t have the cognitive machinery to answer, regardless of any scientific progress made.  If they are answerable questions (in theory), it may take a substantial amount of cognitive evolution in order to have the capability to answer them (or in order to see certain questions as being completely irrational and thus eliminate them from any further inquiry).  Even if this epistemologically-enhancing level of cognitive evolution did take place, we may very well be defined as a new species anyway, and thus technically speaking, Homo sapiens could forever remain unable to access this knowledge regardless.  It would then follow that the “god(s) of the gaps” mentality (and any of its byproducts) may forever be a part of “human” nature.  Time will tell.

Final Thoughts

It appears that there have been several evolutionary, psychological, social, and political factors that have likely influenced the formation, acceptance, and ultimate success of many religious constructs.  It seems that the largest factors influencing religious constructs (and thus the commonalities seen between many religions) have been the psychological comforts that religion has provided, the human cognitive limitations leading to supernatural explanations, as well as some naturally-selected survival advantages that have ensued.  The desire for these psychological comforts (likely unconscious, although not necessarily) seems to catalyze the manifestation of extremely strong beliefs, and not only has this affected the interplay between science (or empiricism) and religion, but these desires have also made it easier for religion to be used for manipulative purposes (among other reasons).  Furthermore, cognitive biases in the human brain often serve to maintain one’s beliefs despite evidence that contradicts them.  Perhaps it is not too surprising to see such a complex interplay of variables behind religion, and also so many commonalities, as religion has been an extremely significant facet of the human condition.