The Open Mind

Cogito Ergo Sum

The Brain as a Prediction Machine

Over the last year, I’ve been reading a lot about Karl Friston and Andy Clark’s work on the concept of perception and action being mediated by a neurological schema centered on “predictive coding” (what Friston calls “active inference” under the “free energy principle”), and on Bayesian inference in general as it applies to neuroscientific models of perception, attention, and action.  Here are a few links (Friston here; Clark here, here, and here) to some of their work, as it is worth reading for those interested in neural modeling, information theory, and meta-theories pertaining to how the brain integrates and processes information.

I find it fascinating how this newer research and these concepts relate to, and help to bring together, some content from several of my previous blog posts, in particular those that mention the concept of hierarchical neurological hardware and those that mention my own definition of knowledge “as recognized causal patterns that allow us to make successful predictions.”  For those who may be interested, here’s a list of posts I’ve made over the last few years that I think contain some relevant content (in chronological order).

The ideas formulated by Friston and expanded on by Clark center around the brain being (in large part) a prediction generating machine.  This fits in line with my own conclusions about what the brain seems to be doing when it’s acquiring knowledge over time (however limited my reading is on the subject).  Here’s an image of the basic predictive processing schema:

The basic Predictive Processing schema (adapted from Lupyan and Clark (2014))

One key element in Friston and Clark’s work (among that of some others) is the amalgamation of perception and action.  In this framework, perception itself is simply the result of the brain’s highest-level predictions of incoming sensory data.  But also important in this framework is that prediction-error minimization is accomplished through embodiment itself.  That is to say, their models posit that the brain not only tries to reduce prediction errors by updating its prediction models based on the actual incoming sensory information (with only the error feeding forward to update the models, similar to data compression schemes), but that active inference also involves the minimization of prediction error through the use of motor outputs.  This could be taken to mean that motor outputs themselves are, in a sense, caused by the brain trying to reduce prediction errors pertaining to predicted sensory input, specifically sensory input that we would say stems from our desires and goals (e.g. the desire to fulfill hunger, commuting to work, opening the car door, etc.).
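The error-driven updating described above can be sketched in a few lines of Python.  This is only a toy caricature with made-up numbers, not Friston’s actual free-energy formalism: a single-level model nudges its prediction toward incoming sensory data, with only the prediction error driving the change.

```python
# Toy sketch of prediction-error minimization (illustrative only):
# the model's prediction is updated in proportion to the error
# between predicted and actual sensory input, so only the error
# "feeds forward", much like a data compression scheme.

def update_prediction(prediction, sensory_input, learning_rate=0.2):
    """Move the prediction toward the observed input, scaled by the error."""
    error = sensory_input - prediction
    return prediction + learning_rate * error, error

prediction = 0.0                       # the model's initial guess
for observation in [1.0, 1.0, 1.0, 1.0, 1.0]:
    prediction, error = update_prediction(prediction, observation)

# After repeated observations the prediction approaches 1.0
# and the remaining error shrinks accordingly.
```

Real predictive-coding models stack many such levels hierarchically, with each level predicting the activity of the one below it; this single loop only shows the core idea that errors, not raw signals, drive the updates.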

To give a simple example of this model in action, let’s consider an apple resting on a table in front of me.  If I see the apple and have a desire to grab it, my brain would not only predict what that apple looks like and how it is perceived over time (and how my arm looks while reaching for it), but it would also predict what it should feel like to reach for the apple.  So if I reach for it based on the somatosensory prediction and there is some error in that prediction, corroborated by my visual cortex observing my arm moving in some wrong direction, the brain would respond by updating its models of what reaching should feel like so that my arm starts moving in the proper direction.  This prediction-error minimization is then fine-tuned as I get closer to the apple and can finally grab it.
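To make the perception/action symmetry concrete, here is a minimal Python caricature (my own illustrative numbers, not anything taken from Friston or Clark’s papers): instead of revising the prediction to match the world, the agent holds a goal-like prediction fixed and acts so that the world comes to match it.

```python
# A caricature of "active inference": prediction error can be reduced
# either by updating the model (perception) or by acting on the world
# (action). Here the prediction is held fixed, as a goal, and motor
# output closes the gap.

def act_to_reduce_error(world_state, predicted_state, gain=0.3):
    """Issue a motor command that shrinks the prediction error."""
    error = predicted_state - world_state
    return world_state + gain * error      # action changes the world

hand_position, predicted_position = 0.0, 1.0   # e.g. hand vs. apple
for _ in range(20):
    hand_position = act_to_reduce_error(hand_position, predicted_position)

# The hand converges on the predicted (desired) position, so the
# prediction ends up true because of the action, not despite it.
```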

This embodiment ingrained in the predictive processing models of Friston and Clark can also be well exemplified by the so-called “Outfielder’s Problem”.  In this problem, an outfielder is trying to catch a fly ball.  Now we know that outfielders are highly skilled at doing this rather effectively.  But if we ask the outfielder to merely stand still and watch a batted ball and predict where it will land, their accuracy is generally pretty bad.  So when we think about what strategy the brain takes to accomplish this when moving the body quickly, we begin to see the relevance of active inference and embodiment in the brain’s prediction schema.  The outfielder’s brain employs a brilliant strategy called “optical acceleration cancellation” (OAC).  Here, the well-trained outfielder sees the fly ball, and moves his or her body (while watching the ball) in order to cancel out any optical acceleration observed during the ball’s flight.  If they do this, then they will end up exactly where the ball was going to land, and then they’re able to catch it successfully.
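The outfielder strategy rests on a neat geometric fact that is easy to verify numerically: for a simple drag-free projectile, the tangent of the ball’s elevation angle rises at a constant rate only for an observer standing exactly where the ball will land.  Here is a small Python check (the projectile numbers are arbitrary); a fielder who keeps nulling out any optical acceleration is therefore driven to the landing spot.

```python
# Numerical check of the geometry behind "optical acceleration
# cancellation": zero optical acceleration (constant rate of change
# of tan(elevation)) is seen only from the ball's landing point.

G, VX, VZ = 9.8, 10.0, 20.0          # gravity, launch velocities (toy values)
T_FLIGHT = 2 * VZ / G                # time until the ball lands
X_LAND = VX * T_FLIGHT               # landing point

def max_optical_acceleration(observer_x, dt=0.02):
    """Max |second difference| of tan(elevation) seen from observer_x."""
    tans = []
    t = dt
    while t < 0.9 * T_FLIGHT:        # stop early to avoid 0/0 at landing
        z = VZ * t - 0.5 * G * t * t         # ball height
        d = observer_x - VX * t              # horizontal distance to ball
        tans.append(z / d)
        t += dt
    return max(abs(tans[i+1] - 2*tans[i] + tans[i-1])
               for i in range(1, len(tans) - 1))

at_landing = max_optical_acceleration(X_LAND)  # essentially zero
elsewhere  = max_optical_acceleration(60.0)    # clearly nonzero
```

From the landing point, tan(elevation) works out to exactly (g/2vx)·t, a straight line in time, so its second difference vanishes; from anywhere else it curves, and the fielder’s brain can exploit that curvature as an error signal.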

We can imagine fine-grained examples of this active inference during everyday tasks.  I may simply be looking at a picture on my living room wall, and as my brain predicts how it will look over the span of a few seconds, my head may slightly change its tilt or direction, or my eyes may slowly move a fraction of a degree this way or that, however imperceptibly to me.  My brain in this case is predicting what the picture on the wall will look like over time, and this prediction (according to my understanding of Clark) is identical to what we actually perceive.  One key thing to note here is that the prediction models are not simply updated based on the prediction error that is fed forward through the brain’s neurological hierarchies; they are also getting some “help” from various motor movements that correct for the errors through action, rather than the brain freezing all my muscles and updating the model alone (which may in fact be far less economical for the brain to do).

Another area of research that pertains to this framework, including ways of testing its validity, is evolutionary psychology and biology.  If these models are correct, one would surmise that evolution likely provided our brains with certain hard-wired predictive models, expressed as innate reflexes (such as infant suckling, to give a simple example), which allow us to survive long enough for our learning processes to update those models with newly acquired information.  There are many different facets to this framework and I look forward to reading more about Friston and Clark’s work over the next few years.  I have a feeling that they have hit on something big, something that will help to answer a lot of questions about embodied cognition, perception, and even consciousness itself.

I encourage you to check out the links I provided pertaining to Friston and Clark’s work, to get a taste of the brilliant ideas they’ve been working on.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 3 of 3)

This is part 3 of 3 of this post.  Click here to read part 2.

Reactive Devaluation

Studies have shown that if a claim is believed to have come from a friend or ally, the person receiving it will be more likely to think that it has merit and is truthful.  Likewise, if it is believed to have come from an enemy or some member of the opposition, it is significantly devalued.  This bias is known as reactive devaluation.  A few psychologists determined that the likeability of the source of the message is one of the key factors involved in this effect: people will value information coming from a likeable source more than from a source they don’t like, to roughly the same degree that they would value information coming from an expert versus a non-expert.  This is quite troubling.  We’ve often heard about how political campaigns and certain debates are seen more as popularity contests than anything else, and this bias is likely a primary reason why this is often the case.

Unfortunately, a large part of society has painted a nasty picture of atheism, skepticism, and the like.  As it turns out, people who do not believe in a god (mainly the Christian god) are the least trusted minority in America, according to a sociological study performed a few years ago (Johanna Olexy and Lee Herring, “Atheists Are Distrusted: Atheists Identified as America’s Most Distrusted Minority, According to Sociological Study,” American Sociological Association News, May 3, 2006).  Regarding the fight against dogmatism, the rational thinker needs to carefully consider how they can improve their likeability to the recipients of their message, if nothing else, by going above and beyond to remain kind, respectful, and maintain a positive and uplifting attitude during their argument presentation.  In any case, the person arguing will always be up against any societal expectations and preferences that may reduce that person’s likeability, so there’s also a need to remind others as well as ourselves about what is most important when discussing an issue — the message rather than the messenger.

The Backfire Effect & Psychological Reactance

There have already been several cognitive biases mentioned that interfere with people’s abilities to accept new evidence that contradicts their beliefs.  To make matters worse, psychologists have discovered that people often react to disconfirming evidence by actually strengthening their beliefs.  This is referred to as the backfire effect.  This can make refutations futile most of the time, but there are some strategies that help to combat this bias.  The bias is largely exacerbated by a person’s emotional involvement and the fact that people don’t want to appear to be unintelligent or incorrect in front of others.  If the debating parties let tempers cool down before resuming the debate, this can be quite helpful regarding the emotional element.  Additionally, if a person can show their opponent how accepting the disconfirming evidence will benefit them, they’ll be more likely to accept it.  There are times when some of the disconfirming evidence mentioned is at least partially absorbed by the person, and they just need some more time in a less tense environment to think things over a bit.  If the situation gets too heated, or if a person risks losing face with peers around them, the ability to persuade that person only decreases.

Another similar cognitive bias is referred to as reactance, which is basically a motivational reaction to instances of perceived infringement of behavioral freedoms.  That is, if a person feels that someone is taking away their choices or limiting the number of them, such as in cases where they are being heavily pressured into accepting a different point of view, they will often respond defensively.  They may feel that freedoms are being taken away from them, and it is only natural for a person to defend whatever freedoms they see themselves having or potentially losing.  Whenever one uses reverse psychology, one is in fact playing on at least an implicit knowledge of this reactance effect.  Does this mean that rational thinkers should use reverse psychology to persuade dogmatists to accept reason and evidence?  I personally don’t think that this is the right approach, because it could also backfire, and I would prefer to be honest and straightforward, even if this results in less efficacy.  At the very least, however, one needs to be aware of this bias and be careful with how one phrases arguments, how one presents the evidence, and how one maintains a calm and respectful attitude during these discussions, so that the other person doesn’t feel the need to defend themselves at the cost of ignoring evidence.

Pareidolia & Other Perceptual Illusions

Have you ever seen faces of animals or people while looking at clouds, shadows, or other similar stimuli?  The brain has many pattern-recognition modules and will often respond erroneously to vague or even random stimuli, such that we perceive something familiar even when it isn’t actually there.  This is called pareidolia, and it is a type of apophenia (i.e. seeing patterns in random data).  Not surprisingly, many people have reported seeing various religious imagery, most notably the faces of prominent religious figures, in ordinary phenomena.  People have also erroneously heard “hidden messages” in records played in reverse, along with similar illusory perceptions.  This results from the fact that the brain often fills in gaps: if one expects to see a pattern (even if the expectation is unconsciously driven by some emotional significance), the brain’s pattern-recognition modules can “over-detect”, producing false positives through excessive sensitivity and by conflating unconscious imagery with one’s actual sensory experiences, leading to a lot of distorted perceptions.  It goes without saying that this cognitive bias is perfect for reinforcing superstitious or supernatural beliefs, because what people tend to see or experience from this effect are those things that are most emotionally significant to them.  After it occurs, people believe that what they saw or heard must have been real and therefore significant.

Most people are aware that hallucinations occur in some people from time to time, under certain neurological conditions (generally caused by an imbalance of neurotransmitters).  A bias like pareidolia can effectively produce similar experiences, but without requiring the more specific neurological conditions that a textbook hallucination requires.  That is, the brain doesn’t need to be in any drug-induced state, nor does it need to be physically abnormal in any way, for this to occur, making it much more common than hallucinations.  Since pareidolia is also compounded with one’s confirmation bias, and since the brain is constantly implementing cognitive-dissonance-reduction mechanisms in the background (as mentioned earlier), this can also result in groups of people having the same illusory experience, specifically if the group shares the same unconscious emotional motivations for “seeing” or experiencing something in particular.  These circumstances, along with the power of suggestion from even just one member of the group, can lead to what are known as collective hallucinations.  Since collective hallucinations like these inevitably lead to a feeling of corroboration and confirmation between the various people experiencing them (say, seeing a “spirit” or “ghost” of someone significant), they can reinforce one another’s beliefs, despite the fact that the experience was based on a cognitive illusion largely induced by powerful human emotions.  It’s likely no coincidence that this type of group phenomenon seems to occur only with members of a religion or cult, specifically those that are in a highly emotional and synchronized psychological state.  There have been many cases of large groups of people from various religions around the world claiming to see some prominent religious figure or other supernatural being, illustrating just how universal this phenomenon is within a highly emotional group context.

Hyperactive Agency Detection

There is a cognitive bias that is somewhat related to the aforementioned perceptual illusion biases which is known as hyperactive agency detection.  This bias refers to our tendency to erroneously assign agency to events that happen without an agent causing them.  Human beings all have a theory of mind that is based in part on detecting agency, where we assume that other people are causal agents in the sense that they have intentions and goals just like we do.  From an evolutionary perspective, we can see that our ability to recognize that others are intentional agents allows us to better predict another person’s behavior which is extremely beneficial to our survival (notably in cases where we suspect others are intending to harm us).

Unfortunately, our brain’s ability to detect agency is hyperactive in that it errs on the side of caution by over-detecting possible agency, as opposed to under-detecting (which could negatively affect our survival prospects).  As a result, people will often ascribe agency to things and events in nature that have no underlying intentional agent.  For example, people often anthropomorphize (or implicitly ascribe agency to) machines and get angry when they don’t work properly (such as an automobile that breaks down while driving to work), often with the feeling that the machine is “out to get us”, is “evil”, etc.  Most of the time, if we have feelings like this, they are immediately checked and balanced by our rational minds, which remind us that “it’s only a car”, or “it’s not conscious”, etc.  Similarly, when natural disasters or other such events occur, our hyperactive agency detection snaps into gear, trying to find “someone to blame”.  This cognitive bias helps explain, at least as a contributing factor, why so many ancient cultures assumed that there were gods that caused earthquakes, lightning, thunder, volcanoes, hurricanes, etc.  Quite simply, they needed these events to have been caused by someone.

Likewise, if we are walking in a forest and hear some strange sounds in the bushes, we may often assume that some intentional agent is present (whether another person or another animal), even though it may simply be the wind causing the noise.  Once again, we can see the evolutionary benefit of this agency detection, even with the occasional “false positive”: the noise in the bushes could very well be a predator or other source of danger, so it is usually better for our brains to assume that the noise is coming from an intentional agent.  It’s also entirely possible that even in cases where the source of the sound was determined to be the wind, early cultures may have ascribed an intentional stance to the wind itself (e.g. a “wind” god, etc.).  In all of these cases, we can see how our hyperactive agency detection can lead to irrational, theistic, and other supernatural belief systems, and we need to be aware of such a bias when evaluating our intuitions of how the world works.

Risk Compensation

It is only natural that the more a person feels protected from various harms, the less carefully they tend to behave, and vice versa.  This tendency, sometimes referred to as risk compensation, is often harmless, because generally speaking, if one is well protected from harm, one does have less to worry about than one who is not, and thus can take greater relative risks as a result of that larger cushion of safety.  Where this tends to cause a problem, however, is in cases where the perceived level of protection is far higher than it actually is.  The worst-case scenario that comes to mind is a person who appeals to the supernatural and thinks that it doesn’t matter what happens to them, or that no harm can come to them, because of some belief in a supernatural source of protection, let alone one that they believe provides the most protection possible.  In this scenario, their risk compensation is fully biased to a negative extreme, and they will be more likely to behave irresponsibly because they don’t have realistic expectations of the consequences of their actions.  In a post-Enlightenment society, we’ve acquired a higher responsibility to heed the knowledge gained from scientific discoveries so that we can behave more rationally as we learn more about the consequences of our actions.

It’s not at all difficult to see how the effects of various religious dogmas on one’s level of risk compensation can inhibit a society from making rational, responsible decisions.  We have several serious issues plaguing modern society, including those related to environmental sustainability, climate change, pollution, war, etc.  If believers in the supernatural have an attitude that everything is “in God’s hands”, or that “everything will be okay” because of their belief in a protective god, they are far less likely to take these kinds of issues seriously, because they fail to acknowledge the obvious harm to everyone in society.  What’s worse, not only are dogmatists reducing their own safety, but they’re also reducing the safety of their children, and of everyone else in society who is trying to behave rationally with realistic expectations.  If we had two planets, one for rational thinkers and one for dogmatists, this would no longer be everyone’s problem.  The reality is that we share one planet, and thus we all suffer the consequences of any one person’s irresponsible actions.  If we want to make more rational decisions, especially those related to the safety of our planet’s environment, society, and posterity, people need to adjust their risk compensation so that it is based on empirical evidence and a proven scientific methodology.  This point clearly illustrates the need to phase out supernatural beliefs, since false beliefs that don’t correspond with reality can be quite dangerous to everybody, regardless of who carries them.  In any case, when one is faced with the choice between knowledge and ignorance, history has demonstrated which choice is best.

Final thoughts

The fact that we are living in an information age with increasing global connectivity, and in a society that is becoming increasingly reliant on technology, will likely help to combat the ill effects of these cognitive biases.  These factors are making it increasingly difficult to hide one’s self from the evidence and from the proven efficacy of using the scientific method to form accurate beliefs about the world around us.  Societal expectations are also changing as a result of these factors, and it is becoming less and less socially acceptable to be irrational and dogmatic, and to not take scientific methodology seriously.  In all of these cases, having knowledge about these cognitive biases will help people to combat them.  Even if we can’t eliminate the biases, we can safeguard ourselves by preparing for any ill effects they may produce.  This is where reason and science come into play, providing us with the means to counter our cognitive biases by applying a rational skepticism to all of our beliefs, and by using a logical and reliable methodology based on empirical evidence to arrive at beliefs in which we can be justifiably confident (though never certain).  As Darwin once said, “Ignorance more frequently begets confidence than does knowledge; it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.”  Science has constantly been ratcheting our way toward a better understanding of the universe we live in, despite the fact that the results we obtain from science are often at odds with our own intuitions about how we think the world is.  Only relatively recently has science started to provide us with information regarding why our intuitions are so often at odds with the scientific evidence.

We’ve made huge leaps within neuroscience, cognitive science, and psychology, and these discoveries are giving us the antidote that we need in order to reduce or eliminate irrational and dogmatic thinking.  Furthermore, those who study logic have an additional advantage in that they are more likely to spot fallacious arguments used to support this or that belief, and thus they are less likely to be tricked or persuaded by arguments that appear to be sound and strong but actually aren’t.  People fall prey to fallacious arguments all the time, mostly because they haven’t learned how to spot them (which courses in logic and other reasoning can help to mitigate).  Thus, overall, I believe it is imperative that we integrate knowledge concerning our cognitive biases into our children’s educational curriculum, starting at a young age (in a format that is understandable and engaging, of course).  My hope is that if we start to integrate cognitive education (and complementary courses in logic and reasoning) as a part of our foundational education curriculum, we will see a cascade of positive benefits that will guide our society more safely into the future.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 2 of 3)

This is part 2 of 3 of this post.  Click here to read part 1.

The Feeling of Certainty & The Overconfidence Effect

There are a lot of factors that affect how certain we feel that one belief or another is in fact true.  One factor that affects this feeling of certainty is what I like to call the time equals truth fallacy.  With this cognitive bias, we tend to feel more certain about our beliefs as time progresses, despite not gaining any new evidence to support those beliefs.  Since time spent holding a particular belief is the only factor involved here, the effect will be greatest in those who are relatively isolated or sheltered from new information.  So if a person has been indoctrinated with a particular set of beliefs (let alone irrational beliefs), and they are effectively “cut off” from any sources of information that could serve to refute those beliefs, it will become increasingly difficult to persuade them later on.  This situation sounds all too familiar, as dogmatic groups often isolate themselves and shelter their members from outside influences for this very reason.  If the group can isolate its members for long enough (or the members actively isolate themselves), the dogma effectively gets burned into them, and the brainwashing is far more successful.  That this cognitive bias has been exploited by various dogmatic and religious leaders throughout history is hardly surprising, considering how effective it is.

Though I haven’t researched or come across any proposed evolutionary reasons behind the development of this cognitive bias, I do have at least one hypothesis pertaining to its origin.  I’d say that this bias may have resulted from the fact that the beliefs that matter most (from an evolutionary perspective) are those related to our survival goals (e.g. finding food sources or evading a predator).  So naturally, any belief that persisted without causing noticeable harm to an organism was treated as more likely to be correct (i.e. keep using whatever works and don’t fix what ain’t broke).  However, once we started adding many more beliefs to our repertoire, specifically those that didn’t directly affect our survival (including many beliefs in the supernatural), the same cognitive rules and heuristics were still being applied as before, even though these new types of beliefs (when false) haven’t been naturally selected against, because they haven’t been detrimental enough to our survival (at least not yet, or not enough to force a significant evolutionary change).  So once again, evolution may have produced this bias for very advantageous reasons, but it is a sub-optimal heuristic (as always), and one that has become a significant liability after we started evolving culturally as well.

Another related cognitive bias, and one that is quite well established, is the overconfidence effect, whereby a person’s subjective confidence in his or her judgements or beliefs is predictably higher than the actual objective accuracy of those judgements and beliefs.  In a nutshell, people tend to have far more confidence in their beliefs and decisions than is warranted.  A common example cited to illustrate the intensity of this bias pertains to people taking certain quizzes, claiming to be “99% certain” about their answers to certain questions, and then finding out afterwards that they were wrong about half of the time.  In other cases, the overconfidence effect can be even worse.

In a sense, the feeling of certainty is like an emotion, which, like our other emotions, occurs as a natural reflex, regardless of the cause, and independently of reason.  Just like other emotions such as anger, pleasure, or fear, the feeling of certainty can be produced by a seizure, certain drugs, and even electrical stimulation of certain regions of the brain.  In all of these cases, even when no particular beliefs are being thought about, the brain can produce a feeling of certainty nevertheless.  Research has shown that the feeling of knowing or certainty can also be induced through various brainwashing and trance-inducing techniques such as high repetition of words, rhythmic music, sleep deprivation (or fasting), and other types of social and emotional manipulation.  It is hardly a coincidence that many religions employ a number of these techniques within their repertoire.

The most important point to take away from learning about these confidence and certainty biases is that the feeling of certainty is not a reliable way of determining the accuracy of one’s beliefs.  Furthermore, one must realize that no matter how certain we may feel about a particular belief, we could be wrong, and oftentimes are.  Dogmatic belief systems such as those found in many religions are often propagated by a misleading and mistaken feeling of certainty, even if that feeling is more potent than any experienced before.  Oftentimes, when people ascribing to these dogmatic belief systems are questioned about the lack of empirical evidence supporting their beliefs, even after all of their arguments have been refuted, they simply reply by saying “I just know.”  Irrational and entirely unjustified responses like these illustrate the dire need for people to become aware of just how fallible their feeling of certainty can be.

Escalation of Commitment

If a person has invested their whole life in some belief system, then even if they encounter undeniable evidence that their beliefs were wrong, they are more likely to ignore it or rationalize it away than to modify their beliefs, and thus they will likely continue investing more time and energy in those false beliefs.  This is due to an effect known as escalation of commitment.  Basically, the higher the cumulative investment in a particular course of action, the more likely someone will feel justified in continuing to increase that investment, despite new evidence showing them that they’d be better off abandoning it and cutting their losses.  When it comes to trying to “convert” a dogmatic believer into a more rational free thinker, this irrational tendency severely impedes any chance of success, more so when that person has been investing themselves in the dogma for a longer period of time, since they ultimately have a lot more to lose.  To put it another way, a person’s religion is often a huge part of their personal identity.  Regardless of any undeniable evidence presented that refutes their beliefs, in order to accept that evidence they would have to abandon a large part of themselves, which obviously makes that acceptance and intellectual honesty quite difficult to implement.  Furthermore, if a person’s family or friends have all invested in the same false beliefs, then even if that person discovers that those beliefs are wrong, they risk their entire family and/or friends rejecting them and forever losing those relationships that are dearest to them.  We can also see how this escalation of commitment is further reinforced by the time equals truth fallacy mentioned earlier.

Negativity Bias

When I’ve heard various Christians proselytizing to me or others, one tactic that I’ve seen used over and over again is fear-mongering with theologically based threats of eternal punishment and torture.  If we don’t convert, we’re told, we’re doomed to burn in hell for eternity.  Their incessant use of this tactic suggests that it was likely effective on them, contributing to their own conversions (it was in fact one of the reasons for my former conversion to Christianity).  Similar tactics have been used in some political campaigns in order to persuade voters by deliberately scaring them into taking one position over another.  Though this strategy is more effective on some than others, there is an underlying cognitive bias in all of us that contributes to its efficacy.  This is known as the negativity bias.  With this bias, information or experiences of a more negative nature will tend to have a greater effect on our psychological states and resulting behavior than positive information or experiences of equal intensity.

People will remember threats to their well-being far more readily than they remember pleasurable experiences.  This looks like another example of a simple survival strategy implemented in our brains.  Similar to my earlier hypothesis regarding the time equals truth heuristic, it is far more important to remember and avoid dangerous or life-threatening experiences than it is to remember and seek out pleasurable ones, all else being equal.  It only takes one bad experience to end a person's life, whereas missing out on any particular pleasurable experience is rarely fatal.  Therefore, it makes sense as an evolutionary strategy to allocate more cognitive resources and memory to avoiding dangerous and negative experiences, and a negativity bias helps us accomplish that.
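The asymmetry described above can be captured in a toy weighting function.  This is purely an illustrative sketch (the weight of 2.0 is a hypothetical parameter, not an empirically measured value): two experiences of equal intensity leave unequal psychological impacts once the negative one is weighted more heavily.

```python
def psychological_impact(valence, intensity, negativity_weight=2.0):
    """Toy model of negativity bias: negative experiences (valence < 0)
    are weighted more heavily than positive ones of equal intensity.
    The weight of 2.0 is illustrative, not an empirical value."""
    weight = negativity_weight if valence < 0 else 1.0
    return weight * intensity

# Two equally intense experiences, one positive and one negative:
pleasant = psychological_impact(valence=+1, intensity=5.0)  # 5.0
threat = psychological_impact(valence=-1, intensity=5.0)    # 10.0
```

Under this sketch, a threat of a given intensity commands twice the psychological weight of an equally intense pleasure, which is the behavioral signature of the bias.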

Unfortunately, just as with the time equals truth bias, since our cultural evolution has involved us adopting certain beliefs that no longer pertain directly to our survival, the heuristic is often being executed improperly or in the wrong context.  This increases the chances that we will fall prey to adopting irrational, dogmatic belief systems when they are presented to us in a way that utilizes fear-mongering and various forms of threats to our well-being.  When it comes to conceiving of an overtly negative threat, can anyone imagine one more significant than the threat of eternal torture?  It is, by definition, supposed to be the worst scenario imaginable, and thus it is the most effective kind of threat to play on our negativity bias, and lead to irrational beliefs.  If the dogmatic believer is also convinced that their god can hear their thoughts, they’re also far less likely to think about their dogmatic beliefs critically, for they have no mental privacy to do so.

This bias in particular reminds me of Blaise Pascal's famous Wager, which basically asserts that it is better to believe in God than to risk the consequences of not doing so.  If God doesn't exist, we suffer "only" a finite loss (according to Pascal).  If God does exist, then belief leads to an eternal reward, and disbelief leads to an eternal punishment.  Therefore, Pascal asserts, it is only logical to believe in God, since it is the safest position to adopt.  Unfortunately, Pascal's premises are not sound, and therefore his argument fails.  For one, is mere belief in God sufficient to avoid the supposed eternal punishment, or is more required, such as adherence to other religious tenets, declarations, or rituals?  Second, which god should one believe in?  Thousands of gods have been proposed by various believers over several millennia, and there were obviously many more of which no historical record survives, so Pascal's Wager merely narrows the choice down to one of several thousand known gods.  Third, even setting aside the problem of choosing the correct god, if it turned out that there was no god, would a life lived with that belief not have carried a significant cost?  If the belief also involved a host of dogmatic moral prescriptions, rituals, and other specific ways to live one's life, perhaps including the requirement to abstain from many pleasurable human experiences, this cost could be quite great.  Furthermore, Pascal's reasoning applies equally to any theoretical possibility that posits an infinite loss over a finite one, so a consistent wagerer would be compelled to hedge against all of them, which is obviously irrational.
It is clear that Pascal hadn’t thought this one through very well, and I wouldn’t doubt that his judgement during this apologetic formulation was highly clouded by his own negativity bias (since the fear of punishment seems to be the primary focus of his “wager”).  As was mentioned earlier, we can see how effective this bias has been on a number of religious converts, and we need to be diligent about watching out for information that is presented to us with threatening strings attached, because it can easily cloud our judgement and lead to the adoption of irrational beliefs and dogma.
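The expected-value reasoning behind the wager, and the many-gods objection to it, can be made explicit with a small sketch.  All probabilities and payoffs here are hypothetical placeholders; the point is only the structure of the arithmetic.

```python
import math

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Pascal's framing: any nonzero probability of an infinite reward
# swamps every finite cost, so belief "wins" regardless of p.
p = 0.001  # hypothetical probability that this particular god exists
ev_believe = expected_value([(p, math.inf), (1 - p, -10.0)])

# Many-gods objection: add a rival god that punishes that same belief
# with infinite loss, and the expected value becomes undefined (NaN).
ev_rival = expected_value([(p, math.inf), (p, -math.inf), (1 - 2 * p, -10.0)])
```

The first calculation shows why the wager feels compelling: the infinity dominates no matter how small `p` is.  The second shows why it collapses: once a second infinite-stakes hypothesis is admitted, the arithmetic yields no answer at all.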

Belief Bias & Argument Proximity Effects

When we analyze arguments, we often judge their strength based on how plausible their conclusion is, rather than how well the premises support that conclusion.  In other words, people tend to focus their attention on the conclusion of an argument, and if they think that the conclusion is likely to be true (based on their prior beliefs), this affects their perception of how strong or weak the argument itself appears to be.  This is obviously an incorrect way to analyze arguments: in logic, it is the premises and the argument's structure that determine whether the conclusion follows, not the other way around.  This implies that what is intuitive to us is often incorrect, and thus our cognitive biases are constantly at odds with logic and rationality.  This is why it takes a lot of practice to learn how to apply logic effectively in one's thinking.  It just doesn't come naturally, even though we often think it does.

Another deficit in how people analyze arguments is what I like to call the argument proximity effect.  Basically, when strong arguments are presented alongside weak arguments supporting the same conclusion, the strong arguments will often appear weaker or less persuasive because of their proximity to, or association with, the weaker ones.  This is partly because if a person thinks they can defeat the moderate or weak arguments, they will often believe they could also defeat the stronger ones, if only given enough time.  It is as if the strong arguments become "tainted" by the weaker ones, even though the arguments are logically independent of one another.  Indeed, when a person has a number of arguments supporting their position, a combination of strong and weak arguments leaves an opponent with more arguments to address and rebut than the strong arguments alone would.  Another reason for this effect is that the presence of weak arguments weakens the credibility of the person presenting them, so the stronger arguments aren't taken as seriously by the recipient.  Just as with the belief bias, this cognitive deficit is ultimately caused by not employing logic properly, if at all.  Making people aware of this flaw is only half the battle; once again, we also need to learn about logic and how to apply it effectively in our thinking.

To read part 3 of 3, click here.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 1 of 3)

leave a comment »

It is often surprising to consider the large number of people that still subscribe to dogmatic beliefs, despite our living in a post-Enlightenment age of science, reason, and skeptical inquiry.  We have a plethora of scientific evidence and an enormous amount of acquired data illustrating just how fallacious various dogmatic belief systems are, as well as how dogmatism in general is utterly useless for gaining knowledge or making responsible decisions throughout one's life.  We also have ample means of distributing this valuable information to the public through academic institutions, educational television programming, online scientific resources, books, etc.  Yet we still have an incredibly large number of people preferring to believe various dogmatic claims (and with a feeling of "certainty") over those supported by empirical evidence and reason.  Furthermore, when some of these people are presented with scientific evidence that refutes their dogmatic beliefs, they outright deny that the evidence exists or simply rationalize it away.  This is a very troubling problem in our society, as this kind of muddled thinking often prevents people from making responsible, rational decisions.  Irrational thinking in general has caused many people to vote for political candidates for all the wrong reasons.  It has caused many people to raise their own children in ways that promote intolerance and prejudice, and that undervalue, if not entirely abhor, invaluable intellectual virtues such as rational skepticism and critical thought.

Some may reasonably assume that (at least) many of these irrational belief systems and behaviors are simply a result of low self-esteem, inadequate education, and/or low intelligence, and it turns out that several dozen sociological studies have found evidence supporting this line of reasoning.  That is, a person with lower self-esteem, less education, and/or lower intelligence is more likely to be religious and dogmatic.  However, there is clearly much more to it than that, especially since plenty of people who adopt these irrational belief systems are also well educated and intelligent.  Furthermore, every human being is quite often irrational in their thinking.  So what else could be contributing to this irrationality (and dogmatism)?  The answer seems to lie in the realms of cognitive science and psychology.  I've decided to split this post into three parts, because there is a lot of information I'd like to cover.  Here's the link to part 2 of 3, which can also be found at the end of this post.

Cognitive Biases

Human beings are very intelligent relative to most other species on this planet; however, we are still riddled with cognitive flaws that drastically reduce our ability to think logically or rationally.  This isn't surprising once one recognizes that our brains are the product of evolution, and thus they weren't "designed" in any way at all, let alone to operate logically or rationally.  Instead, what we see in human beings is a highly capable and adaptable brain, yet one with a number of cognitive biases and shortcomings.  Though a large number (if not all) of these biases developed as a sub-optimal yet fairly useful survival strategy within the context of our evolutionary past, most of them have become a liability in modern civilization, and they often impede our intellectual development as individuals and as a society.  Many of these cognitive biases serve (or once served) as an effective way of making certain decisions quickly; that is, they have effectively functioned as heuristics, simplifying the decision-making process for many everyday problems.  While heuristics are valuable and often make our decision-making faculties more efficient, they are often far from optimal, because the increased efficiency typically comes at the cost of accuracy.  Again, this is exactly what we expect to find in products of evolution — a suboptimal strategy for solving some problem or accomplishing some goal, but one that has worked well enough to give the organism (in this case, human beings) an advantage over the competition within some environmental niche.

So what kinds of cognitive biases do we have exactly, and how many of them have cognitive scientists discovered?  A good list of them can be found here.  I’m going to mention a few of them in this post, specifically those that promote or reinforce dogmatic thinking, those that promote or reinforce an appeal to a supernatural world view, and ultimately those that seem to most hinder overall intellectual development and progress.  To begin, I’d like to briefly discuss Cognitive Dissonance Theory and how it pertains to the automated management of our beliefs.

Cognitive Dissonance Theory

The human mind doesn't tolerate internal conflict very well, so when we hold beliefs that contradict one another, or when we are exposed to new information that conflicts with an existing belief, some degree of cognitive dissonance results and we are effectively pushed out of our comfort zone.  The mind attempts to rid itself of this dissonance through a few different methods: a person may alter the perceived importance of the original belief or of the new information, they may change the original belief, or they may seek evidence that discredits the new information.  Generally, the easiest paths for the mind to take are the first and last of these, as it is far more difficult for a person to change their existing beliefs, partly because a person's existing set of beliefs is their only frame of reference when encountering new information, and there is an innate drive to maintain a feeling of familiarity and certainty in our beliefs.

The primary problem with these automated cognitive dissonance reduction methods is that they tend to cause people to defend their beliefs in one way or another rather than to question them and try to analyze them objectively.  Unfortunately, this means that we often fail to consider new evidence and information using a rational approach, and this in turn can cause people to acquire quite a large number of irrational, false beliefs, despite having a feeling of certainty regarding the truth of those beliefs.  It is also important to note that many of the cognitive biases that I’m about to mention result from these cognitive dissonance reduction methods, and they can become compounded to create severe lapses in judgement.

Confirmation Bias, Semmelweis Reflex, and The Frequency Illusion

One of the most significant cognitive biases we have is commonly referred to as confirmation bias.  Basically, this bias refers to our tendency to seek out, interpret, or remember information in ways that confirm our existing beliefs.  To put it more simply, we often see only what we want to see.  It is a method our brain uses to sift through large amounts of information and piece it together with what we already believe into a meaningful story line or explanation.  Just as we'd expect, it ultimately optimizes for efficiency over accuracy, and therein lies the problem.  Even though this bias applies to everyone, the dogmatist in particular has a larger cognitive barrier to overcome, because they're also using a fallacious epistemological methodology right from the start (i.e. appealing to some authority rather than to reason and evidence).  So while everyone is affected by this bias to some degree, the dogmatic believer has an epistemological flaw that compounds the issue and makes matters worse.  They will, to a much higher degree, live their life unconsciously ignoring any evidence that refutes their beliefs (or fallaciously reinterpreting it in their favor), and they will rarely if ever actively seek out such evidence to try to disprove their beliefs.  A more rational thinker, on the other hand, has a belief system primarily based on a reliable epistemological methodology supported by empirical evidence, one that has been proven to provide increasingly accurate knowledge better than any other method (i.e. the scientific method).  Nevertheless, everyone is affected by this bias, and because it operates outside the conscious mind, we all fail to notice it as it actively modifies our perception of reality.
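The difference between rational updating and confirmation-biased updating can be made concrete with a minimal Bayesian sketch.  All the numbers below are hypothetical; the point is only that a rational updater lowers their confidence when the evidence is more likely under the rival hypothesis, while a biased updater who misreads the same evidence as confirming ends up even more confident.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a belief after observing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

prior = 0.9  # hypothetical strong prior confidence in some belief

# Disconfirming evidence: twice as likely if the belief is false.
rational = bayes_update(prior, p_evidence_if_true=0.2, p_evidence_if_false=0.4)

# A biased updater misreads the same evidence as confirming
# (a hypothetical distortion that swaps the likelihoods):
biased = bayes_update(prior, p_evidence_if_true=0.4, p_evidence_if_false=0.2)
```

The rational posterior drops below the prior; the biased one rises above it.  Repeat that distortion over a lifetime of evidence and the two believers diverge arbitrarily far, even while looking at the same world.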

One prominent example in our modern society, illustrating how this cognitive bias can reinforce dogmatic thinking (even among large groups of people), is the ongoing debate between Young Earth Creationists and the scientific consensus regarding evolutionary theory.  Even though the theory of evolution is as well established as any scientific theory (much like the theory of gravity or the germ theory of disease), and even though several hundred thousand scientists can attest to its validity, along with a plethora of supporting evidence from a large number of scientific fields, Creationists seem to be in complete and utter denial of this reality.  Not only do they ignore the wealth of evidence presented to them, but they also misinterpret or cherry-pick evidence to suit their position.  This problem is compounded by the fact that their beliefs are supported only by fallacious tautologies [e.g. "It's true because (my interpretation of) a book called the Bible says so…"] and other irrational arguments based on nothing more than a particular religious dogma and a lot of intellectual dishonesty.

I've had numerous debates with these kinds of people (and I used to BE one of them many years ago, so I understand how many of them arrived at these beliefs), and I'll often quote several scientific sources and various logical arguments to support my claims (or to refute theirs).  It is often mind-boggling to witness their response, with the new information seemingly going in one ear and out the other without undergoing any mental processing or critical consideration.  It is as if they didn't hear a single argument that was pointed out to them and merely executed some kind of rehearsed reflex.  The futility of the exchange is often confirmed when one finds oneself repeating the same arguments and refutations over and over again, seemingly falling on deaf ears, with no logical rebuttal of the points made.  On a side note, this reflex-like response, where a person quickly rejects new evidence because it contradicts their established belief system, is known as the "Semmelweis reflex/effect", though it appears to be just another form of confirmation bias.  Some of these cases of denial and irrationality are nothing short of ironic, since these people live in a society that owes its technological advancements to rational, scientific methodologies, and they utilize many of the benefits of science every day of their lives (even if they don't realize it).  Yet we still find many of them hypocritically abhorring or ignoring science and reason whenever it is convenient to do so, such as when defending their dogmatic beliefs.

Another prominent example of confirmation bias reinforcing dogmatic thinking relates to various other beliefs in the supernatural.  People who believe in the efficacy of prayer, for example, will tend to remember prayers that were "answered" and forget or rationalize away prayers that weren't, since their beliefs reinforce such a selective memory.  Similarly, if a plane crash or other disaster occurs, some people with certain religious beliefs will draw attention to the idea that their god or some supernatural force must have intervened to prevent the death of any survivors, while downplaying or ignoring the many others who did not survive (if there were any survivors at all).  In the case of prayer, numerous studies have shown that prayer for another person (e.g. to heal them from an illness) is ineffective when that person doesn't know they're being prayed for; thus any efficacy of prayer has been shown to be a result of the placebo effect and nothing more.  It may be that prayer is one of the most effective placebos, largely due to the incredibly high level of belief in its efficacy and the fact that there is no expected time frame for it to "take effect".  That is, since nobody knows when a prayer will be "answered", then even if it isn't "answered" for several months or even years (time ranges that are not unheard of among people who claim to have had prayers answered), confirmation bias will be even more effective than with a traditional medical placebo.  After all, a typical placebo pill or treatment is expected to work within a time frame comparable to other medicines or treatments taken in the past, but there's no deadline for a prayer to be answered by.
It's reasonable to assume that even if these people were shown the evidence that prayer is no more effective than a placebo (even if it is the most effective placebo available), they would reject it or rationalize it away, once again, because their beliefs require it to be so.  The same bias applies to numerous other purportedly paranormal phenomena, where people see significance in some event because their memory or perception is operating in ways that promote or sustain their beliefs.  As mentioned earlier, we often see what we want to see.

There is also a closely related bias known as congruence bias, which occurs because people rely too heavily on directly testing a given hypothesis and often neglect to test it indirectly.  For example, suppose you were introduced to a new lighting device with two buttons on it: a green button and a red button.  You are told that the device will only light up by pressing the green button.  A direct test of this hypothesis is to press the green button and see if it lights up.  An indirect test is to press the red button and see if it doesn't light up.  Congruence bias describes our tendency to avoid such indirect tests, and thus we can form irrational beliefs by mistaking correlation for causation, or by forming incomplete explanations of causation.  In the case of the aforementioned lighting device, it could be that both buttons cause it to light up.  Consider how this congruence bias affects our general decision making: combined with confirmation bias, we are inclined not only to reaffirm our beliefs, but to avoid trying to disprove them (since we tend to avoid indirect tests of those beliefs).  This attitude reinforces dogmatism by assuming the truth of one's beliefs and never verifying them in any rational, critical way.
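The two-button device can be simulated directly.  In this hypothetical version of the device, both buttons actually light it up, so the direct test appears to confirm the "green only" hypothesis while the single indirect test refutes it:

```python
def device_lights_up(button):
    # Hidden ground truth for this hypothetical device: BOTH buttons work.
    return button in ("green", "red")

# Hypothesis under test: "only the green button lights the device."
direct_test_passed = device_lights_up("green")      # lights up: seems to confirm it
indirect_test_passed = not device_lights_up("red")  # also lights up: hypothesis refuted
```

Someone who only ever runs the direct test walks away with a false belief about how the device works; the one indirect test they avoided is precisely what exposes the incomplete causal explanation.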

An interesting cognitive bias that often works in conjunction with a person’s confirmation bias is something referred to as the frequency illusion, whereby some detail of an event or some specific object may enter a person’s thoughts or attention, and then suddenly it seems that they are experiencing the object or event at a higher than normal frequency.  For example, if a person thinks about a certain animal, say a turtle, they may start noticing lots of turtles around them that they would have normally overlooked.  This person may even go to a store and suddenly notice clothing or other gifts with turtle patterns or designs on them that they didn’t seem to notice before.  After all, “turtle” is on their mind, or at least in the back of their mind, so their brain is unconsciously “looking” for turtles, and the person isn’t aware of their own unconscious pattern recognition sensitivity.  As a result, they may think that this perceived higher frequency of “seeing turtles” is abnormal and that it must be more than simply a coincidence.  If this happens to a person that appeals to the supernatural, their confirmation bias may mistake this frequency illusion for a supernatural event, or something significant.  Since this happens unconsciously, people can’t control this illusion or prevent it from happening.  However, once a person is aware of the frequency illusion as a cognitive bias that exists, they can at least reassess their experiences with a larger toolbox of rational explanations, without having to appeal to the supernatural or other irrational belief systems.  So in this particular case, we can see how various cognitive biases can “stack up” with one another and cause serious problems in our reasoning abilities and negatively affect how accurately we perceive reality.
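A quick simulation illustrates the illusion: the rate of turtles in the "world" never changes; the only thing that changes is the fraction of them that attention flags for conscious notice.  Both attention rates below are hypothetical numbers chosen purely for illustration.

```python
import random

random.seed(42)
BASE_RATE = 0.05  # turtles really make up 5% of encounters throughout

encounters = ["turtle" if random.random() < BASE_RATE else "other"
              for _ in range(10_000)]

def noticed(encounters, attention_rate):
    """Count turtle encounters that cross the threshold of conscious notice."""
    return sum(1 for e in encounters
               if e == "turtle" and random.random() < attention_rate)

before_priming = noticed(encounters, attention_rate=0.1)  # hypothetical rates
after_priming = noticed(encounters, attention_rate=0.9)
```

The encounter stream is identical for both counts, yet the primed observer "sees" several times more turtles, which is exactly the subjective experience the frequency illusion produces.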

To read part 2 of 3, click here.

Neuroscience Arms Race & Our Changing World View

leave a comment »

At least since the time of Hippocrates, people have realized that the brain is the physical correlate of consciousness and thought.  Since then, the fields of psychology, neuroscience, and several related fields have emerged.  There have been numerous advancements within neuroscience during the last decade or so, and in that same time frame there has also been increased interest in the social, religious, philosophical, and moral implications precipitating from such a far-reaching field.  Certainly the medical knowledge we've obtained from the neurosciences has been the primary benefit of these research efforts, as we've learned quite a bit more about how the brain works and how it is structured, and ongoing research into neuropathology has led to improvements in diagnosing and treating various mental illnesses.  However, it is the other side of neuroscience that I'd like to focus on in this post — the paradigm shift in how we are starting to see the world around us (including ourselves), and how this is affecting our goals as well as how we achieve them.

Paradigm Shift of Our World View

Aside from the medical knowledge we are obtaining from the neurosciences, we are also gaining new perspectives on what exactly the “mind” is.  We’ve come a long way in demonstrating that “mental” or “mind” states are correlated with physical brain states, and there is an ever growing plethora of evidence which suggests that these mind states are in fact caused by these brain states.  It should come as no surprise then that all of our thoughts and behaviors are also caused by these physical brain states.  It is because of this scientific realization that society is currently undergoing an important paradigm shift in terms of our world view.

If all of our thoughts and behaviors are mediated by our physical brain states, then many everyday concepts such as thinking, learning, personality, and decision making can take on entirely new meanings.  To illustrate this point, I’d like to briefly mention the well known “nature vs. nurture” debate.  The current consensus among scientists is that people (i.e. their thoughts and behavior) are ultimately products of both their genes and their environment.

Genes & Environment

From a neuroscientific perspective, the genetic component is accounted for by noting that genes play a very large role in directing the initial brain wiring schema of an individual during embryological development and gestation.  During this time, the brain is developing very basic instinctual behavioral "programs" which are physically constituted by vastly complex neural networks, and the body's developing sensory organs and systems are connected to particular groups of these networks.  These complex neural networks, which have presumably been naturally selected for because they benefit the survival of the individual, continue being constructed after gestation and throughout the entire ontogenetic development of the individual (albeit to lesser degrees over time).

As for the environmental component, this can be further split into two parts: the internal and the external environment.  The internal environment of the brain itself, including various chemical concentration gradients partly mediated by random Brownian motion, constrains gene expression and works alongside the genetic instructions to guide neuronal growth, migration, and connectivity.  The external environment, consisting of various sensory stimuli, modifies this neural construction by providing inputs that may cause the constituent neurons within these networks to change their signal strength, change their action potential threshold, and/or modify their connections with particular neurons (among other possible changes).
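As a rough illustration of the kind of input-driven change in connection strength described above, here is a minimal Hebbian-style weight update.  This is a textbook toy rule, not a claim about any particular neuroscientific model, and the learning rate and activity values are arbitrary:

```python
def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.01):
    """'Neurons that fire together, wire together': strengthen a connection
    in proportion to correlated pre- and post-synaptic activity."""
    return weight + learning_rate * pre_activity * post_activity

w = 0.5
# Repeated correlated firing gradually strengthens the synapse:
for _ in range(100):
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
```

After repeated correlated input the connection weight has grown substantially, which is the simplest sense in which "sensory stimuli modify the network": the stimulus pattern, not just the genome, determines the final wiring.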

Causal Constraints

This combination of genetic instructions and environmental interaction and input produces a conscious, thinking, and behaving being through a large number of ongoing and highly complex hardware changes.  It isn’t difficult to imagine why these insights from neuroscience might modify our conventional views of concepts such as thinking, learning, personality, and decision making.  Prior to these developments over the last few decades, the brain was largely seen as a sort of “black box”, with its internal milieu and functional properties remaining mysterious and inaccessible.  From that time and prior to it, for millennia, many people have assumed that our thoughts and behaviors were self-caused or causa sui.  That is, people believed that they themselves (i.e. some causally free “consciousness”, or “soul”, etc.) caused their own thoughts and behavior as opposed to those thoughts and behaviors being ultimately caused by physical processes (e.g. neuronal activity, chemical reactions, etc.).

Neuroscience (along with biochemistry and its underlying physics) has shed a lot of light on this long-held assumption and, as it stands, the evidence has shown the assumption to be false.  The brain is ultimately governed by the laws of physics, since no chemical reaction or neural event that physically produces our thoughts, choices, and behaviors has ever been shown to be causally free from these physical constraints.  I will mention that quantum uncertainty or quantum "randomness" (if ontologically random) does provide some possible freedom from strict physical determinism.  However, these findings from quantum physics do not provide any support for self-caused thoughts or behaviors.  Rather, they merely show that those physically constrained thoughts and behaviors may never be completely predictable by physical laws, no matter how much data is obtained.  In other words, our thoughts and behaviors are produced by highly predictable (though not necessarily completely predictable) physical laws and constraints, along with some possibly random causal factors.

As a result of these physical causal constraints, the conventional notion of classical free will has been shattered.  Our traditional views of human attributes including morality, choice, ideology, and even individualism continue to change markedly.  Not surprisingly, many people are uncomfortable with these scientific discoveries, including members of various religious and ideological groups whose precepts largely depend on the very presupposition of classical free will and moral responsibility.  The evidence compiling from the neurosciences shows that while people are causally responsible for their thoughts, choices, and behavior (i.e. an individual's thoughts and subsequent behavior are constituents of a causal chain of events), they are not morally responsible in the sense that they could have chosen to think or behave any differently than they did, for their thoughts and behavior are ultimately governed by physically constrained neural processes.

New World View

Now I’d like to return to what I mentioned earlier and consider how these insights from neuroscience may be drastically modifying how we look at concepts such as thinking, learning, personality, and decision making.  If our brain is operating via these neural network dynamics, then conscious thought appears to be produced by a particular subset of these neural network configurations and processes.  So as we continue to learn how to more directly control or alter these neural network arrangements and processes (above and beyond simply applying electrical potentials to certain neural regions in order to bring memories or other forms of imagery into consciousness, as we’ve done in the past), we should be able to control thought generation from a more “bottom-up” approach.  Neuroscience is definitely heading in this direction, although there is a lot of work to be done before we have any considerable knowledge of and control over such processes.

Likewise, learning seems to consist of a certain type of neural network modification (involving memory), leading to changes in causal pattern recognition (among other things) which improve our ability to achieve our goals over time.  We've typically thought of learning as the successful input, retention, and recall of new information, achieved through environmental stimuli entering via our sensory organs and systems.  In the future, as with the aforementioned bottom-up thought generation, it may be possible to physically modify our neural networks to directly implant memories and causal pattern recognition information in order to "learn" without any actual sensory input, and/or we may eventually be able to "upload" information in a way that bypasses the typical sensory pathways, potentially allowing us to catalyze the learning process in unprecedented ways.

If we are one day able to more directly control the neural configurations and processes that lead to specific thoughts and learned information, then there is no reason we won’t be able to modify our personalities, our decision-making abilities and “algorithms”, and so on.  In a nutshell, we may be able to modify any aspect of “who” we are in extraordinary ways (whether this is a “good” or “bad” thing is another issue entirely).  As we learn more about the genetic components of these neural processes, we may also be able to use various genetic engineering techniques to assist with the neural modifications required to achieve these goals.  The bottom line is that people are products of their genes and environment, and by manipulating both of those causal constraints in more direct ways (e.g. through the use of neuroscientific techniques), we may achieve previously unattainable abilities, perhaps in a relatively minuscule amount of time.  It goes without saying that these methods will also significantly affect our evolutionary course as a species, allowing us to enter new landscapes through a substantially enhanced ability to adapt.  This may be realized by finding ways to improve our intelligence, memory, or other cognitive faculties, effectively giving us the ability to engineer or re-engineer our brains as desired.

Neuroscience Arms Race

We can see that increasing our knowledge and capabilities within the neurosciences has the potential to drive drastic societal changes, some of which are already starting to be realized.  The impact that these fields will have on how we approach the problem of criminal, violent, or otherwise undesirable behavior cannot be overstated.  Trying to correct these issues by focusing our efforts on the neural or cognitive substrates that underlie them, as opposed to the less direct, more external means (e.g. social engineering methods) we’ve used thus far, may lead to solutions that are much less expensive and that can be realized far more quickly.

As with any scientific discovery and the technologies produced from it, neuroscience has the power to bestow on us both benefits and disadvantages.  I’m reminded of the ground-breaking efforts made within nuclear physics several decades ago, whereby physicists not only gained precious information about subatomic particles (and their binding energies) but also learned how to release enormous amounts of energy from nuclear fusion and fission reactions.  It wasn’t long before these breakthrough discoveries were used by others to create the first atomic bombs.  Likewise, while our increasing knowledge within neuroscience has the power to help society improve by optimizing our brain function and behavior, it can also be used by various entities to manipulate the populace for unethical reasons.

For example, despite the large number of free-market proponents who claim that the economy need not be regulated by anything other than rational consumers and their choices of goods and services, corporations have clearly increased their use of marketing strategies that take advantage of common human irrational tendencies (whether “buy one get one free” offers, “sales” on items with artificially raised prices, etc.).  Politicians and other leaders have used similar tactics, taking advantage of voters’ emotional vulnerabilities on controversial issues that serve as little more than an ideological distraction, reducing or eliminating any awareness or rational analysis of the more pressing issues.

There are already research and development efforts underway by these various entities to take advantage of findings within neuroscience so that they can exert greater influence over people’s decisions (whether those relate to consumers’ purchases, votes, etc.).  To give an example of these R&D efforts, it is believed that MRI (Magnetic Resonance Imaging) or fMRI (functional Magnetic Resonance Imaging) brain scans may eventually reveal useful details about a person’s personality or their innate or conditioned tendencies (including compulsive or addictive tendencies, preferences for certain foods or behaviors, etc.).  This kind of capability (if realized) would allow marketers to maximize how many dollars they can squeeze out of each consumer by optimizing their choices of goods and services and how they are advertised.  We have already seen how purchases made on the internet, if tracked, begin to personalize the advertisements we see online (e.g. if you buy fishing gear online, you may subsequently notice more advertisements and pop-ups for fishing-related goods and services).  If possible, the information found using these types of “brain probing” methods could be applied to other areas, including political decision making.

While these methods derived from the neurosciences may be beneficial in some cases, for instance by allowing consumers more automated access to products they may need or want (likely a selling point corporations will use to obtain consumer approval of such methods), they will also exacerbate unsustainable consumption and other personally or societally destructive tendencies, and they are likely to further reduce (or eliminate) whatever rational decision-making capabilities we still have left.

Final Thoughts

As we can see, neuroscience has the potential to (and is already starting to) completely change the way we look at the world.  Further advancements in these fields will likely redefine many of our goals as well as how to achieve them.  They may also allow us to solve many problems we face as a species, far beyond simply curing mental illnesses or ailments.  The main question that comes to mind is: who will win the neuroscience arms race?  Will it be the humanitarians, scientists, and medical professionals who are striving to accumulate knowledge in order to help solve the problems of individuals and societies and to increase their quality of life?  Or will it be the entities trying to accumulate similar knowledge in order to take advantage of human weaknesses for the purposes of gaining wealth and power, thus exacerbating the problems we currently face?

Religion: Psychology, Evolution, and Socio-political Aspects

with one comment

Religion is a strong driving force in most (if not all) cultures, as it significantly affects how people behave and how they look at the world around them.  It’s interesting that so many religions share certain common elements, and it seems likely that these common elements arose from several factors, including psychological similarities between human beings.  Much like Carl Jung’s idea of a “collective unconscious”, humans likely share certain psychological tendencies, and this would help to explain the religious commonalities that have precipitated over time.  It seems plausible that some evolutionary mechanisms, including natural selection and the “evolution” of certain social and political structures, also played a role in establishing some of these religious commonalities.  I’d like to discuss some of my thoughts on certain religious beliefs, including what I believe to be some important psychological, social, political, and evolutionary factors that have likely influenced the formation, acceptance, and ultimate success of religion as well as some common religious beliefs.

Fear of Death

The fear of death is probably one of the largest forces driving many religious beliefs.  This fear seems to exist for several reasons.  For one, the fear of death may be (at least partly) an evolutionary by-product of our biological imperative to survive.  We already perform involuntary physical actions instinctually in order to survive (e.g. the fight-or-flight response).  An emotional element such as fear, combined with our human intellect and self-awareness, can drive us to survive in less automatic ways, providing an even greater advantage for natural selection.  For example, many people have been driven to circumvent death through scientific advancements.  Another factor to consider is that the fear of death may largely be a fear of the unknown or unfamiliar (related to the fear of change).  It shouldn’t be surprising, then, that a religion offering ways to appease this fear would become successful.

Could it be that our biological imperative to survive, coupled with the logical realization that we are mortal, has catalyzed a religious means for some form of cognitive dissonance reduction?  Humans who are in denial about (or at least uncomfortable with) their inevitable death will likely be drawn toward religious beliefs that circumvent this inevitability with some form of spiritual eternal life or immortality.  Not only can this provide a means of circumventing mortality (perhaps by transcending the biological imperative with a spiritual version), but it can also reduce or eliminate the unknown aspects that contribute to the fear of death, depending on the after-death specifics outlined by the religion.

A strange irony exists regarding what I call “spiritual imperatives” and I think it is worth mentioning.  If a religion professes that one’s ultimate goal should be preparation for the after-life (or some apocalyptic scenario), then adherents to such a doctrine may end up sacrificing their biological imperative (or make it a lower priority) in favor of some spiritual imperative.  That is, they may start to care less about their physical survival or quality of life in the interest of attaining what they believe to be spiritual survival.  In doing so, they may be sacrificing or de-prioritizing the very biological imperative that likely catalyzed the formation of their spiritual imperative in the first place.  So as strange as it may be, the fear of death may lead to some religious doctrines that actually hasten one’s inevitable death.

Morality, Justice, and Manipulation

Morality seems to be deeply ingrained in our very nature, and as a result we can see a universal implementation of moral structures in human societies.  It seems likely that this deeply ingrained sense of morality, much like many other innate traits shared by the human race, is a result of natural selection in the ongoing evolution of our species.  Our sense of morality has driven many beneficial behaviors (though not always) that tend to increase the survival of the individual.  For example, the golden rule (a principle that may even serve as a sort of universal moral creed) serves to benefit every individual by encouraging cooperation and altruism at the expense of selfish motives.  Just as some individual cells eventually evolved to become cooperative multi-cellular organisms (in order to gain mutual benefits in a “non-zero sum” game), so have other species (including human beings) evolved to cooperate with one another to increase mutual benefits including that of survival (John Maynard Smith and other biologists have shared this view of how evolution can lead to greater degrees of cooperation).  A sense of morality helps to reinforce this cooperation.  Evolution aside, establishing some kind of moral framework will naturally help to maximize what is deemed to be desirable behavior.  Religion has been an extremely effective means of accomplishing this goal.  First of all, religions tend to define morality in very specific ways.  Religion has also utilized fairly effective incentives and motivations for the masses to behave in ways desired by the society (or by its leaders).

Many religions profess a form of moral absolutism, where moral values are seen as objective, unquestionable, and often ordained by the authority of a god.  This makes a religion very attractive and effective by simplifying the moral structure of the society and backing it up with the authority of a deity.  If the rules are believed to be given by the authority of a deity, then there will be few (if any) people willing to question them and as a result there will be a much greater level of obedience.  The alternative, moral relativism, is more difficult to apply to a society’s dynamic as the behavioral goals in a morally relativistic society may not be very stable nor well-defined, even if moral relativism carries with it the benefits of religious or philosophical tolerance as well as open-mindedness.  Thus, moral absolutism is more likely to lead to productive societies, which may help to explain why moral absolutism has been such a successful religious meme (as well as the fact that morality in general seems to be a universal part of human nature).

Though I’m a moral realist in a strict sense, since I believe that there are objective moral facts, I’d also like to stipulate that I’m a moral relativist in the sense that I believe any objective moral facts that exist depend on a person’s biology, psychology, and how those affect one’s ultimate goals for a satisfying and fulfilling life.  Since these factors may vary somewhat within a species, and certainly vary across different species, morals are ultimately relative to those variances (if any exist).

In any case, I can appreciate why most people are drawn away from relativism (in any form).  It is difficult for most people to think about reality as consisting of elements that aren’t simply black-and-white.  After all, we are used to categorizing the world around us — fracturing it into finite, manageable, and well-defined parts that we can deal with and understand.  We often forget that we are subjectively experiencing the world around us, and that our individual frames of reference and perspectives can be quite different from person to person.  Relativism just isn’t very compatible with the common human illusion of seeing the world objectively, whether it is how we look at the physical world, language, our moral values, etc.

As for moral incentives, religions often imply that there will be some type of reward for the adherent and/or some type of punishment for deviants.  Naturally, anybody introduced to the religion (i.e. potential converts) will weigh the potential risks and benefits, with some people implementing Pascal’s wager and the like, likely leading to a larger number of followers over time.  Established religious members also adhere to the specific rules within the religion based on the same moral incentives.  That is, the moral incentives put into place (i.e. the punishment-reward system) can serve both to obtain religious members in the first place and to ensure that those members maintain a high level of obedience within the religion.  Among these converts and well-established followers, there will likely be a mixture of those primarily motivated by the fear of punishment and those primarily motivated by the desire for a reward.  Psychoanalysis aside, it wouldn’t be surprising if, by briefly examining one’s behavior and personality, one could ascertain an individual’s primary religious motivations (for their conversion and/or subsequent religious obedience).  It is likely, however, that most people would fail to see these motivations at work, as they would prefer to think of their religious affiliations as the result of some revelation of truth.
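The risk-benefit weighing formalized in Pascal’s wager is essentially an expected-utility calculation.  Here is a minimal sketch in Python (the probability and payoff values are entirely hypothetical, chosen only to illustrate the structure of the argument):

```python
# Expected utility of a choice under uncertainty about whether a god exists.
def expected_utility(p_god, payoff_if_god, payoff_if_not):
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

# Hypothetical payoffs: infinite reward or punishment after death,
# small finite costs or pleasures during life.
p = 0.001  # even an arbitrarily small nonzero probability
believe = expected_utility(p, float("inf"), -10)
disbelieve = expected_utility(p, float("-inf"), 10)

print(believe > disbelieve)  # True: the infinite payoff swamps any finite cost
```

The force (and the oft-criticized weakness) of the wager lies in that final comparison: once an infinite payoff is admitted, the finite terms become irrelevant no matter how small the assumed probability.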

The divine authorization and punishment-reward system within many religions can also provide a benefit to those that desire power and the manipulation of the populace.  If those in power desire an effective way to control the populace, they can create a religious structure with rules, morals, goals (including wars and conquests), etc., that benefit their agenda and then convince others that they are divinely authorized.  As long as the populace is convinced that the rules, morals, and goals are of a divine source, they will be more likely to comply with them.  Clearly this effect will be further amplified if a divine punishment-reward system is believed to exist.

One last point I’d like to make regarding morality involves the desire for divine justice.  People no doubt take comfort in the thought of everything being fair and orderly (from their perspective) in the long run, regardless of whether or not any unfairness presents itself during their lifetime.  It is much less comforting to accept that some people will do whatever they want and may die without ever receiving what one believes to be a just consequence, and/or that one has sacrificed many enjoyable human experiences in the interest of maintaining their religious requirements with potentially no long-term (i.e. after-death) return for their efforts.  The idea of an absolute justice being implemented after death definitely helps reinforce religious obedience in a world that has imperfect and subjective views (as well as implementations) of justice.

Desire for Free Will

Another common religious meme (related to the aforementioned moral frameworks) is the belief in classical free will.  If people practicing a particular religion are taught that they will be rewarded or punished for their actions, then it is logical for them to assume that they have free will over their actions — otherwise their moral responsibility would be non-existent and any divinely bestowed consequences incurred would be unjustified, meaningless, and futile.  So, in these types of religions, it is assumed that people should be able to make free choices that are not influenced or constrained by factors such as: genetics, any behavioral conditioning environment, any deterministic causal chain, or any random course of events for that matter.  That is, everyone’s behavior should be causa sui.  This way, it is the individual that is directly responsible for their behavior and ultimate fate rather than any factors outside of the individual’s control.

While the sciences have shown a plethora of evidence negating the existence of classical free will, many people continue to believe that free will exists.  It seems that people are naturally driven to believe they have free will for a few reasons.  For one, the belief in free will is consistent with the illusion of free will that we consciously experience.  We do not feel that there is something or someone else in control of our fate (due to the principles of priority, consistency, and exclusivity, as explained in Wegner’s theory of apparent mental causation), and so we have no immediate reason to believe that free will doesn’t exist.  It certainly feels like we have free will, even though the mechanistic physical laws of nature (whether deterministic or indeterministic) imply that we do not.  Second, from a deeper psychological perspective, if one believes in moral responsibility, or has feelings of pride or shame for one’s actions, the belief in free will is naturally reinforced.  People want to believe that they are in control because it better justifies the aforementioned punishment-reward system of both society and many religions.

Now granted, even if all people agreed that free will was non-existent (most people assume we have it), society’s system of legislation and law enforcement likely wouldn’t change much, if at all.  Criminals would likely still be punished or detained for the protection of the majority of society because, pragmatically speaking, law enforcement has proven effective at providing safety, deterring crime, and so forth.  However, if people universally accepted that free will was non-existent, it would likely change how they view the punishment-reward system of society and religion.  People would probably focus more on the underlying genetic causes and conditioning environment that led to the undesirable behavior, rather than falsely looking at the individual as inherently bad or as someone who made poor choices that could have been made differently.

If someone feels that they have a lot to gain from a particular religion (or has invested so much of themselves into the religion already), and free will is a philosophical requirement for that particular religion, then they will likely find a way to rationalize the existence of free will (or rationalize any other religious assumption), despite the strong evidence against it (or lack of evidence in support of it).  There are even a few religions that simultaneously profess the existence of free will as well as the existence of an omniscient god that has complete knowledge of the future — despite the logical incompatibility of these two propositions.  Clearly, the desire for free will (whether conscious and/or unconscious) is stronger than most people realize.

Also, it seems that there is a general human desire for one’s life to have meaning and purpose, and perhaps some people feel that having free will over their actions is the only way to give their life meaning and purpose, as opposed to their life’s course being pre-determined or random.

Anthropocentrism & Purpose

Since religions are a product of human beings, it is not surprising to see that many of them have some anthropocentric purpose or element.  There seems to be a tendency for humans to assume that they are more important than anything else on this planet (or anything else in the universe for that matter).  This assumption may be fueled by the fact that human intelligence has brought us to the top of the food chain and has allowed us to manipulate our environment in ways that seem relatively extraordinary.  Humans certainly recognize this status and some may see it as necessarily divinely ordained or at least special in some way.  This helps to answer the age-old philosophical question: What is the meaning of life and/or why are we here?

By raising the value of human life over that of all other animals, religion can serve to separate humans from the rest of the animal kingdom, and thus from any animalistic traits we dislike about ourselves.  Anthropocentric views have also been used to endorse otherwise questionable behavior that humans may choose to employ on the rest of nature.  On the flip side, anthropocentrism can in fact lead to a humanistic drive or feeling of human responsibility to make the world a better place for many different creatures.  It seems, however, that the most powerful religions have often endorsed human domination of the world and environment.  This selfish drive is more in line with the rest of the animal kingdom, as every animal fights to survive, flourish, and ultimately do what best serves its interests.  Either way, elevating human importance can provide many with a sense of purpose, regardless of what they think that purpose is.  This sense of purpose can be important, especially for those who recognize how short our human history has been relative to the history of all life on Earth, and how relatively insignificant our planet is in such an unfathomably large universe.  Giving humans a special purpose can also help those who are uncomfortable with the idea of living in such a mechanistic world.

Desire for Protection

Certainly people are going to feel more secure if they believe that there is someone or something that is always protecting them.  Whether or not we have people in our lives that protect us in one way or another, nothing can compare to a divine protector.  It is certainly possible that this desire for protection is an artifact of the maternal-child dynamic from one’s earliest years of life, thus driving us to seek out similar comforts and securities.  Generally speaking however, the desire for protection is yet another facet of the biological imperative to survive.  Either way, the desire for some form of protection has likely played a role in religious constructs.

If religious members fail to receive any obvious protection or safety in specific cases (i.e. if they are harmed in some way), many find a way to reconcile this actuality by conveniently believing that whatever happens is ultimately governed by some god’s will or plan.  This way, the comforts of believing in a protective, benevolent, or loving god are not jeopardized in any circumstance.  This is a good example of cognitive dissonance reduction being accomplished through theological rationalization.  That is, people may need a special combination of beliefs (which may evolve over time) in order to reconcile their religious and theological presuppositions with one another or with reality.

Group Dynamics

Another form of protection (and an evident form at that) offered by religious membership is that which results from group formation and dynamics.  Specifically, I am referring to the benefits of both protection and memetic reinforcement by the rest of the group.  From an evolutionary perspective, we can see that an individual will tend to have a greater survival advantage if they are a member of a cooperative group (as I mentioned previously in the section titled: “Morality, Justice, and Manipulation”).  For this reason and many others, people will often try to join or form groups.  There is always greater power in large numbers, and so even if certain religious claims or elements are difficult to accept, many people will instinctually flock toward the group and its example because it is safer than being alone and more vulnerable.  After joining a group (or perhaps in order to join the group) many may even find themselves behaving in ways that violate their own previously self-ascribed values.  Group dynamics and tendencies can be quite powerful indeed.

After a religion becomes well established and gains enough mass and momentum, people increasingly gravitate toward its power and influence even if that requires them to significantly modify their behavior.  In fact, if the religion gains enough influence and power over a culture or society, there may be little (if any) freedom to refrain from practicing the religion anyway, so even if people aren’t drawn to a popular religion, they may be forced into it.

So as we can see, group dynamics have likely influenced religion in multiple ways.  The memetic reinforcement that groups provide has promoted the success and perpetuation of particular religious memes (regardless of what those particular memes are).  There also seems to be a critical mass component, whereby after a religion gains enough mass and momentum, it is significantly more difficult for it to subside over time.  Thus, many religions that have become successful have done so by simply reaching some critical mass.

God of the gaps

Another reason that many religions or religious memes have been successful is a lack of knowledge about nature.  That is, at some point in the past there arose a “god of the gaps” mentality, whereby unsatisfactory, insufficient, or non-existent naturalistic explanations led to deistic or theistic presuppositions.  For a large period in history, polytheism was quite popular, as people ascribed multiple gods to explain a multitude of phenomena.  Eventually some monotheistic religions precipitated, but they merely replaced the multiple “gods of the gaps” with a single “God of the gaps”.  This consolidation of gods may have resulted (at least in part) from an application of Occam’s razor, as well as from a desire to differentiate new religions and their respective doctrines from their polytheistic predecessors.  As science and empiricism continued to develop in the wake of these religious world views, the phenomena previously ascribed to a god (or to many gods) became increasingly explainable by predictable, mechanistic, natural laws.  By applying Occam’s razor one last time, science and empiricism have effectively been eliminating the final “God of the gaps”.

The psychological, social, and political benefits given by various religious constructs (including but not limited to those I’ve mentioned within this post) had likely already set a precedent and established a level of momentum that would continue to impede the acceptance of scientific explanations — even up to this day.  This may help to explain the prevalence of supernatural or miraculous religious beliefs despite their incompatibility with science and empiricism.  Once the most powerful religions gained traction, rather than abandoning beliefs of the supernatural in the wake of scientific progress, it was science that was initially censored and hindered.  Eventually, science and religion began to co-exist more easily, but in order for them to be at all reconciled with one another, many religious interpretations or explanations were modified accordingly (or the religious followers continued to ignore science).  Belief can be extremely powerful — so powerful in fact that even if a proposition isn’t actually true, if a person believes it to be true strongly enough, it can become a reality for that person.  In some of these cases, it doesn’t matter if there is an overwhelming amount of evidence to refute the belief, for that evidence will be ignored if it does not corroborate the believer’s artificial reality.

Another “God of the gaps” example that still perpetuates many religious beliefs is the misattributed power of prayer.  Prayer can be effective for healing or helping to heal some ailments, but science has shown (and is continuing to show) that this is nothing more than a placebo effect.  To give just one example, several studies on heart patients found that prayer only affected recovery when the patients knew they were being prayed for.  This further illustrates how the “God of the gaps” argument has never been very strong and is only shrinking with every new discovery made in science.  Nevertheless, even as evidence accumulates showing that a religious person’s notions are incorrect, there are psychological barriers in the brain that keep one from accepting that new information.  In the case of prayer just mentioned, a person who believes in prayer will have a confirmation bias that serves to remember when prayers are “answered” and to forget prayers that are not (regardless of what is being prayed for).
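The selective-memory effect just described can be illustrated with a toy simulation (all rates here are hypothetical assumptions, chosen only to show the mechanism): even when outcomes are pure chance, remembering “answered” prayers more readily than “unanswered” ones inflates the perceived success rate.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

answered_remembered = 0
unanswered_remembered = 0
for _ in range(10_000):
    answered = random.random() < 0.3         # 30% of prayers "come true" by chance
    p_remember = 0.9 if answered else 0.2    # hits are remembered far more often
    if random.random() < p_remember:
        if answered:
            answered_remembered += 1
        else:
            unanswered_remembered += 1

# The true hit rate is 30%, but the remembered sample is heavily skewed:
perceived = answered_remembered / (answered_remembered + unanswered_remembered)
print(round(perceived, 2))  # well above the true 0.30
```

With these assumed memory rates, the believer’s recalled history suggests that roughly two out of three prayers were answered, even though nothing but chance was at work.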

In other cases, if one chooses to actually consider any refutative evidence, it can become extremely difficult if not impossible for one to reconcile certain religious beliefs with reality.  However, if it is psychologically easier for a person to modify their religious beliefs (even in some radical way) rather than abandoning their religion altogether, they will likely do so.  It is clear how powerful these religious driving factors are when we see people either blatantly ignoring reason and the senses and/or adjusting their religion or theology in order to reconcile their beliefs with reality such that they can maintain the comfort and security of their deeply invested religious convictions.

It should be noted that the “god(s) of the gaps” mentality that many people share may result when the human mind asks certain questions it doesn’t have the cognitive machinery to answer, regardless of any scientific progress made.  If these are answerable questions (in theory), it may take a substantial amount of cognitive evolution to gain the capability to answer them (or to see certain questions as completely irrational and thus eliminate them from any further inquiry).  Even if this epistemologically-enhancing level of cognitive evolution did take place, we may very well be defined as a new species anyway, and thus, technically speaking, Homo sapiens could forever remain unable to access this knowledge.  It would then follow that the “god(s) of the gaps” mentality (and any of its byproducts) may forever be a part of “human” nature.  Time will tell.

Final Thoughts

It appears that several evolutionary, psychological, social, and political factors have likely influenced the formation, acceptance, and ultimate success of many religious constructs.  The largest factors influencing religious constructs (and thus the commonalities seen between many religions) seem to have been the psychological comforts that religion provides, the human cognitive limitations that lead to supernatural explanations, and some naturally selected survival advantages that have ensued.  The desire for these psychological comforts (likely, though not necessarily, unconscious) seems to catalyze the manifestation of extremely strong beliefs; this has not only affected the interplay between science (or empiricism) and religion, but has also made it easier for religion to be used for manipulative purposes (among other things).  Furthermore, cognitive biases in the human brain often serve to maintain one’s beliefs despite contradictory evidence.  Perhaps it is not too surprising to see such a complex interplay of variables behind religion, and so many commonalities, given that religion has been such a significant facet of the human condition.

Dreams, Dialogue, and the Unconscious

leave a comment »

It has long been believed that our mental structure consists of both a conscious and an unconscious element.  While the conscious element has been studied exhaustively, relatively little is known about the unconscious.  We can certainly infer that it exists, as every part of the self that we can’t control or are not aware of must necessarily be mediated by the unconscious.  To be sure, the fields of neuroscience and psychology (among others) have provided a plethora of evidence related to the unconscious: neuronal structures and activity on the one hand, and the influence it has on our behavior on the other.  However, actually accessing the unconscious mind has proven to be quite difficult.  How can one hope to access this hidden yet incredibly powerful portion of themselves?  In this post, I plan to discuss what I believe to be two effective ways in which we can learn more about ourselves and access that which seems to elude us day in and day out.

Concept of Self

It is clear that we have an idea of who we are as individuals.  We consciously know what many of our interests are, what our philosophical and/or religious beliefs are, and we also have a subjective view of what we believe to be our personality traits.  I prefer to define this aspect of the self as the “Me”.  In short, the “Me” is the conscious subjective view one holds about themselves.

Another aspect of the self is the “You”, or the way others see you from their own subjective perspective.  It goes without saying that others view us very differently than we view ourselves.  People see things about us that we just don’t notice or that we deny to be true, whether they are particular personality traits or various behavioral tendencies.  Because most people put on a social mask when they interact with others, the “You” ends up including not only some real albeit unknown aspects of the self, but also how you want to be seen by others and how they want to see you.  So I believe that the “You” is the social self: that which is implied by the individual and that which is inferred by another person.  I believe that the implied self and the inferred self involve both a conscious and an unconscious element from each party, and thus the implication and inference will generally be quite different, even apart from the limitations of language.

Finally, we have the aspect of the self which is typically unreachable and seems to be operating in the background.   I believe that this portion of the self ultimately drives us to think and behave the way we do, and accounts for what we may describe to be a form of “auto-pilot”.  This of course is the unconscious portion of the self.  I would call this aspect of the self the “I”.  In my opinion, it is the “I” that represents who we really are as a person (independent of subjective perspectives), as I believe everything conscious about the self is ultimately derived from this “I”.  The “I” includes the beliefs, interests, disinterests, etc., that we are not aware of yet are likely to exist based on some of our behaviors that conflict with our conscious intentions.  This aspect in particular is what I would describe as the objective self, and consequently it is that which we can never fully access or know about with any certainty.

Using the “You” to Access the “I”

I believe that the “You” is in fact a portal to access the “I”, for the portion of this “You” that is not derived from one’s artificial social mask will certainly contain at least some truths about one’s self that are either not consciously evident or are not believed by the “Me” to be true, even if they are in fact true.  Thus, in my opinion it is the inter-subjective communication with others that allows us to learn more about our unconscious self than any other method or action.  I also believe that this in fact accounts for most of the efficacy provided by mental health counseling.  That is, by having a discourse with someone else, we are getting another subjective perspective of the self that is not tainted with our own predispositions.  Even if the conversation isn’t specifically about you, by another person simply sharing their subjective perspective about anything at all, they are providing you with novel ways of looking at things, and if these perspectives weren’t evident in your conscious repertoire, they may in fact probe the unconscious (by providing recognition cues for unconscious concepts or beliefs).

The key lies in analyzing those external perspectives with an open mind, so that denial and the fear of knowing ourselves do not dominate and hinder this access.  Let’s face it, people often hear what they want to hear (whether about themselves or anything else for that matter), and we often unknowingly ignore the rest in order to feel comfortable and secure.  This sought-out comfort severely inhibits one’s personal growth and thus, at least periodically, we need to be able to depart from our comfort zone so that we can be true to others and be true to ourselves.

It is also important for us to strive to really listen to what others have to say rather than just waiting for our turn to speak.  In doing so, we will gain the most knowledge and get the most out of the human experience.  In particular, by critically listening to others we will learn the most about our “self” including the unconscious aspect.  While I certainly believe that inter-subjective communication is an effective way for us to access the “I”, it is generally only effective if those whom we’re speaking with are open and honest as well.  If they are only attempting to tell you what you want to hear, then even if you embrace their perspective with an open mind, it will not have much of any substance nor be nearly as useful.  There needs to be a mutual understanding that being open and honest is absolutely crucial for a productive discourse to transpire.  All parties involved will benefit from this mutual effort, as everyone will have a chance to gain access to their unconscious.

Another way that inter-subjective communication can help in accessing the unconscious is through mutual projection.  As I mentioned earlier, the “You” is often distorted by others hearing what they want to hear and by your social mask giving others a false impression of who you are.  However, they also tend to project their own insecurities into the “You”.  That is, if a person talking with you says specific things about you, they may in fact be a result of that person unknowingly projecting their own attributes onto you.  If they are uncomfortable with some aspect of themselves, they may accuse you of possessing the aspect, thus using projection as a defense mechanism.  Thus, if we pay attention to ourselves in terms of how we talk about others, we may learn more about our own unconscious projections.  Fortunately, if the person you’re speaking with knows you quite well and senses that you are projecting, they may point it out to you and vice versa.

Dream Analysis

Another potentially useful method for accessing the unconscious is an analysis of one’s dreams.  Freud, Jung, and other well-known psychologists have endorsed this method as an effective psychoanalytic tool.  When we are dreaming, our brain is in a reduced-conscious if not unconscious state (although the brain is highly active during the dream-associated REM phase).  I believe that due to the decreased sensory input and stimulation during sleep, the brain has more opportunities to “fill in the blanks” and construct an alternate conceptualization of reality.  This may provide a platform for unconscious expression.  When our brain constructs the dream content, it seems to utilize a mixture of memories, current sensory stimuli from the sleeper’s environment (albeit a minimal amount, and perhaps necessarily so), and elements from the unconscious.  By analyzing our dreams, we have a chance to interpret symbolic representations likely stemming from the unconscious.  While I don’t believe we can ever know for sure which elements came from the unconscious, by asking ourselves questions about the dream content and making a concerted effort to analyze it, we will likely discover at least some elements of our unconscious, even if we have no way of confirming the origin or significance of each dream component.

Again, just as we must be open-minded and willing to face previously unknown aspects of ourselves during the aforementioned inter-subjective exchange, we must be willing to do the same during any dream analysis.  You must be willing to identify personal weaknesses, insecurities, and potentially repressed emotions.  Surely there can be aspects of our unconscious that we’d appreciate discovering, but there will likely be a tendency to repress that which we find repulsive about ourselves.  Thus, I believe that the unconscious contains more negative things about the self than positive things (as implied by Jung’s “Shadow” archetype).

How might one begin such an analysis?  Obviously we must first obtain some data by recording the details of our dreams.  As soon as you wake up after a dream, take advantage of the opportunity to record as many details as you can in order to be more confident in the analysis.  The longer you wait, the more likely the information will become distorted or lost altogether (as we’ve all experienced at one time or another).  As you record these details, try to include different elements of the dream so that you aren’t only recording your perceptions, but also how the setting or events made you feel emotionally.  Note any ambiguities no matter how trivial, mundane, or irrelevant they may seem.  For example, if you happen to notice groups of people or objects in your dreams, try to note how many there are, as that number may be significant.  If it seems that the dream is set in the past, try to infer the approximate date.  Various details may be subtle indicators of unconscious material.

Oftentimes, dreams are not very easy to describe because they tend to deviate from reality and have a largely irrational and/or emotional structure.  All we can do is try our best to describe what we can remember, even if it seems nonsensical or is difficult to articulate.

As for the analysis of the dream content, I try to ask myself specific questions within the context of the dream.  The primary questions include:

  • What might this person, place, or thing symbolize, if they aren’t taken at face value?  That is, what kinds of emotions, qualities, or properties do I associate with these dream contents?
  • If I think my associations for the dream contents are atypical, then what associations might be more common?  In other words, what would I expect the average person to associate the dream content with?  (Collective or personal opinions may present themselves in dreams)

Once these primary questions are addressed, I ask myself questions that may or may not seem to relate to my dream, in order to probe the psyche.  For example:

  • Are there currently any conflicts in my life? (whether involving others or not)
  • If there are conflicts with others, do I desire some form of reconciliation or closure?
  • Have I been feeling guilty about anything lately?
  • Do I have any long term goals set for myself, and if so, are they being realized?
  • What do I like about myself, and why?
  • What do I dislike about myself, and why?  Or perhaps, what would I like to change about myself?
  • Do certain personality traits I feel I possess remind me of anyone else I know?  If so, what is my overall view of that person?
  • Am I envious of anyone else’s life, and if so, what aspects of their life are envied?
  • Are there any childhood experiences I repeatedly think about (good or bad)?
  • Are there any recurring dreams or recurring elements within different dreams?  If so, why might they be significant?
  • Are there any accomplishments that I’m especially proud of?
  • What elements of my past do I regret?
  • How would I describe the relationships with my family and friends?
  • Do I have anyone in my life that I would consider an enemy?  If so, why do I consider them an enemy?
  • How would I describe my sexuality, and my sex life?
  • Am I happy with my current job or career?
  • Do I feel that my life has purpose or that I am well fulfilled?
  • What types of things about myself would I be least comfortable sharing with others?
  • Do I have undesired behaviors that I feel are out of my control?
  • Do I feel the need to escape myself or the world around me?  If so, what might I be doing in order to escape? (e.g. abusing drugs, abusing television or other virtual-reality media, anti-social seclusion, etc.)
  • Might I be suffering from some form of cognitive dissonance as a result of me having conflicting values or beliefs?  Are there any beliefs which I’ve become deeply invested in that I may now doubt to be true, or that may be incompatible with my other beliefs?  If the answer is “no”, then I would ask:  Are there any beliefs that I’ve become deeply invested in, and if so, in what ways could they be threatened?

These questions are intended to probe one’s self beneath the surface.  By asking ourselves specific questions like this, particularly in relation to our dream contents, I believe that we can gain access to the unconscious simply by addressing concepts and potential issues that are often left out-of-sight and out-of-mind.  How we answer these questions isn’t as important as asking them in the first place.  We may deny that we have problems or personal weaknesses as we answer these questions, but asking them will continue to bring our attention to these subjects and elements of ourselves that we often take for granted or prefer not to think about.  In doing so, I believe one will at least have a better chance at accessing the unconscious than if they hadn’t made an attempt at all.

In terms of answering the various questions listed above, the analysis will likely be more useful if you go over the questions a second time, and reverse or change your previous instinctual answer while trying to justify the reversal or change.  This exercise will force you to think about yourself in new ways that might improve access to the unconscious, since you are effectively minimizing the barriers brought on through rationalization and denial.

Final Thoughts

So as we can see, while the unconscious mind may seem inaccessible, there appear to be at least two ways in which we can gain some access.  Inter-subjective communication allows us to access the “I” via the “You”, and mutual projection gives both the speaker and the listener access to their unconscious.  Dream analysis appears to be yet another method for accessing the unconscious.  Since our brains appear to be in a semi-conscious state during sleep, the brain may be capable of cognitive processes that aren’t saturated by sensory input from the outside world.  This reduction in sensory input may in fact give the brain more opportunities to “fill in the blanks” (so to speak), and this may provide a platform for unconscious expression.  In short, there appear to be at least a few effective methods for accessing the unconscious self.  The bigger question is:  Are we willing to face this hidden side of ourselves?