The Open Mind

Cogito Ergo Sum


Virtual Reality & Its Moral Implications


There’s a lot to be said about virtual reality (VR) in terms of our current technological capabilities, our likely prospects for future advancements, and the vast amount of utility that we gain from it.  But, as it is with all other innovations, with great power comes great responsibility.

While there are several types of VR interfaces on the market, used for video gaming or various forms of life simulation, they do have at least one commonality, namely the explicit goal of attempting to convince the user that what they are experiencing is in fact real in at least some sense.  This raises a number of ethical concerns.  While we can’t deny the fact that even reading books and watching movies influences our behavior to some degree, VR is bound to influence our behavior much more readily because of the sheer richness of the qualia and the brain’s inability to distinguish significant differences between a virtual reality and our natural one.  Since the behavioral conditioning schema that our brain employs has evolved to be well adapted to our natural reality, any virtual variety that increasingly approximates it is bound to increasingly affect our behavior.  So we need to be concerned with VR in terms of how it can affect our beliefs, our biases, and our moral inclinations and other behaviors.

One concern with VR is the desensitization to, or normalization of, violence and other undesirable or immoral behaviors.  Many video games have been criticized over the years for this very reason, with the claim that they promote similar behaviors in their users (most especially younger users with more impressionable minds).  These claims have been lent significant support by the American Psychological Association and the American Academy of Pediatrics, both of which have taken firm stances against children and teens playing violent video games, as a result of accumulated research and meta-analyses showing a strong link between violent video gaming and increased aggression, antisocial behavior, and decreased empathy and moral engagement.

Thus, the increasingly realistic nature of VR, along with the steadily expanding capacities one has at their disposal within such a virtual space, is bound to exacerbate these types of problems.  If people are able to simulate rape or pedophilia, among other morally reprehensible actions and social taboos, will they become more susceptible to actually engaging in these behaviors once they leave the virtual space and re-enter the real world?  Even if their susceptibility to perform those behaviors doesn’t increase, what does such a virtual escapade do to that person’s moral character?  Are they more likely to condone those behaviors (even if they don’t participate in them directly), or to condone other behaviors that have some kind of moral relevance or cognitive overlap with them?

On the flip side, what if VR could be used as a therapeutic tool to help treat pedophilia or other behavioral problems?  What if one were able to simulate rape, pedophilia, or other such acts in order to reduce their chances of performing them in the real world?  Hardly anyone would argue that a virtual rape or molestation is anywhere near as abhorrent or consequential as a real instance of such a crime, a horrific act committed against a real human being.  While this may only apply to a small number of people, it is at least plausible that such a therapeutic utility would make the world a better place if it prevented an actual rape or other crime from taking place.  If certain people have hard-wired impulses that would normally ruin their lives or the lives of others if left unchecked, then it would be prudent, if not morally obligatory, to do what we can to prevent such harms from taking place.  So even though this technology could lead the otherwise healthy user to begin engaging in bad behaviors, it could also serve as an outlet of expression for those already afflicted with such impulses.  Just as VR has been used to help treat anxiety disorders, phobias, PTSD, and other pathologies, by exposing people to various stimuli that help them overcome their ills, so too may it provide treatment for other types of mental illness and aggressive predispositions, such as those related to murder, sexual assault, and so on.

Whether VR is used as an outlet for certain behaviors to prevent them from actually happening in the real world, or as a means of curing a person of those immoral inclinations (where the long-term goal is to eventually no longer need any VR treatment at all), there are several paths that could show promising results in decreasing crime.  But, beyond therapeutic uses, we need to be careful about how these technologies are used generally and how that usage will increasingly affect our moral inclinations.

If society chose to implement some kind of prohibition to limit the types of things people could do in these virtual spaces, that might be of some use; but beyond the fact that such a prohibition would likely be difficult to enforce, it would also be a form of selective prohibition that may not be justified.  If one chose to prohibit simulated rape and pedophilia (for example), but not murder or other forms of assault and violence, then what would justify such a selective prohibition?  We can’t simply rely on an intuition that the former simulated behaviors are somehow more repugnant than the latter (and besides, many would say that murder is just as bad, if not worse).  It seems instead that we need to better assess the consequences of each type of simulated behavior on our behavioral conditioning, to see if at least some simulated activities should be prohibited while others are allowed to persist unregulated.  On the other hand, if prohibiting this kind of activity is not practical, or if it can only be implemented by infringing on certain liberties that we have good reasons to protect, then we need to think about counter-strategies, either to better inform people about these kinds of dangers and/or to make other VR products that help encourage the right kinds of behaviors.

I can’t think of a more valuable use for VR than helping us cultivate moral virtues and other behaviors that are conducive to our well-being and to living a more fulfilled life.  Possibilities range from reducing our prejudices and biases through exposure to various simulated “out-groups” (for example), to modifying our moral character in more profound ways through artificial realities that encourage the user to help others in need and to develop habits and inclinations that are morally praiseworthy.  We can even use this technology (and have already to some degree) to work out various moral dilemmas and our psychological responses to them without anybody actually dying or getting physically hurt.  Overall, VR certainly holds a lot of promise, but it also poses a lot of psychological danger, thus making it incumbent upon us to talk more about these technologies as they continue to develop.

Karma & Socio-Psychological Feedback Loops


Over the weekend, I had the pleasure of seeing an old friend.  At one point during our conversation, he basically asked me if I believed in Karma (that good things happen to good people, etc.).  My answer to that question was yes and no.  No, in the sense that I don’t believe in any kind of supernatural moral causation, which is what Karma technically is.  But on the other hand, I do believe that there are other naturalistic mechanisms that produce similar effects which would be indistinguishable from what people would call Karma.  Specifically, I believe that there are various psychological, cognitive and social factors that ultimately produce these kinds of effects on one’s life.  So I decided to expand on this topic a little bit in this post.

First of all, the level of optimism or pessimism that a person has will undoubtedly affect not only their overall worldview but also the course that their life takes over time, in profound ways that are often taken for granted or overlooked.  If a person has a positive attitude, they will tend to invite others (whether explicitly or not) into their social circle, not only because their personality is inviting and comforting, but also because they are more likely to be productive in helping others.  Furthermore, if they are also altruistic, other people will often take notice of this and are more likely to solidify a relationship with them.  Likewise, since altruism is often reciprocated, a person that is altruistic is more likely to have help returned to them when they need it most.  In short, a person that is positive and altruistic is more likely to continue along a positive path in their life’s course, simply because their attitude and less selfish behavior serve as catalysts for solidifying more meaningful relationships with others, thus allowing them to gain ever more safety nets and mutual socioeconomic benefits as time progresses.

One can see how this principle would operate in the converse scenario, that is, with a person that is generally pessimistic and selfish.  This person is clearly more likely to deter new meaningful relationships due to their uninviting personality (especially if they are anti-social), due to how they make others feel generally, and due to their focusing only on their own best interests.  Others are likely to notice this behavior, and if that pessimistic and selfish person needs help at some point in time, they aren’t nearly as likely to receive any.  This in turn makes it more likely for that person to fall into a downward spiral, where their lack of help from others breeds increasing resentment, bitterness, and negativity, leaving them even less likely to help others around them than they were before.  So we can see how a person’s attitude and behavioral trends often have a catalyzing effect on their life’s course, effectively amplifying their behavior as it is reciprocated by those around them.  That is, whatever socio-psychological environment is being nurtured by that person (whether good or bad) will most likely be reciprocated, thus creating a feedback loop that can become amplified over time.

There appears to be a sort of avalanche effect that can occur, where even a tiny chaotic deviation from the present state can lead to very large differences later on.  Most of us have heard of the so-called “Butterfly Effect” where tiny perturbations in a system can lead to huge changes that are increasingly amplified over time, and this socio-psychological feedback loop is perhaps one of the most important illustrations of such an effect.  Even tiny actions or events that influence our perspective (or the perspective of those around us) can often lead to dramatic changes later on in our lives.
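This amplification dynamic can be sketched with a toy model (purely illustrative; the function name, the reciprocation factor, and the multiplicative growth rule are all my own assumptions, not empirical claims).  Each step, a person’s “attitude” is partially reciprocated by their social environment, so two nearly identical starting points drift steadily apart over time:

```python
# Toy model of a socio-psychological feedback loop (illustrative only).
# Each step, the environment reciprocates a person's "attitude" with a
# small multiplicative gain, compounding the initial value over time.

def simulate(initial_attitude, reciprocation=1.05, steps=50):
    """Return the attitude trajectory under multiplicative reciprocation."""
    attitude = initial_attitude
    trajectory = [attitude]
    for _ in range(steps):
        # The social environment feeds back slightly more of the same.
        attitude *= reciprocation
        trajectory.append(attitude)
    return trajectory

# Two nearly identical starting points: a "tiny perturbation" of +0.001.
a = simulate(0.010)
b = simulate(0.011)
# After 50 steps the initial 0.001 gap has grown roughly elevenfold,
# illustrating the butterfly-effect-style divergence described above.
print(a[-1], b[-1], b[-1] - a[-1])
```

The model is crude on purpose: real social dynamics are not a clean exponential, but the qualitative point survives, namely that a compounding reciprocation loop turns tiny initial differences into large long-run divergence.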

Another important point regarding the effect of optimism and pessimism within this socio-psychological feedback loop is the placebo/nocebo effect, whereby if one believes that either positive or negative outcomes are more likely, their physiology and cognitive states can change in accordance with those expectations.  People that strongly believe they will fail to reach a goal, or that hold some other negative expectation (such as getting sick), are more likely to self-manifest that expectation (i.e. the “nocebo” effect).  Their expectations not only influence their perception for the worse, but they also tend to channel their focus and attention onto that negative belief, which can increase stress levels and thus impair cognitive faculties and overall health.  The belief can become reinforced in other ways as well, since the brain’s cognitive biases often function to reinforce whatever beliefs we already hold, even if they are unjustified, incorrect, or ultimately bad for our well-being.  Following this line of reasoning, we can see how a person that strongly believes they will in fact achieve a goal or some other positive state is more likely to do so.  Thus, the placebo or nocebo effect can result directly from optimistic or pessimistic perspectives, and is often reinforced by our own cognitive biases and cognitive-dissonance-reduction mechanisms.

It seems that even a small boost in encouragement, optimism, or altruism can lead to a cascade of improved social relationships (thus providing more socioeconomic stability) and an improvement in overall well-being through various socio-psychological feedback loops.  Furthermore, our attitude or perspective can also lead to various placebo effects that further reinforce these feedback loops.  In any case, we should all recognize and appreciate how even small perturbations in our attitude, as well as in our behavior toward others, can produce profound changes in our lives.  Even small acts of kindness or morale boosts can go a long way toward changing the lives of others, as well as our own.  So to conclude, I would argue that if any kind of Karma seems to exist in this world, those Karmic effects are naturally brought about by the kinds of mechanisms I’ve described here.

The Placebo Effect & The Future of Medicine


The placebo effect has been known for a long time, and doctors and medical practitioners have long exploited its efficacy for a number of ailments.  Whether it involves sugar pills, a sham surgery, or even prayer, the power of belief and its psychological effect on our physiology is real and undeniable.  While the placebo effect may have its roots in (or been most thoroughly exploited by) various religions, through the power of belief, it has been used by non-religious medical practitioners for quite some time now, and it is starting to be investigated much more thoroughly in cognitive neuroscience and psychology.  Scientists are beginning to design and implement new techniques that take advantage of this effect of the mind helping to heal the body.  It has become a fascinating area of research with some potentially huge benefits that may prompt a significant paradigm shift in the future of medicine.

A major advantage of placebos (at least those in the form of a pill or injection) is that they don’t require the expensive R&D and drug-synthesis manufacturing processes that traditional treatments do, which means that billions of dollars can likely be saved in the future, along with the very important benefit of greater environmental sustainability, by consuming less energy and creating less hazardous waste in the pharmaceutical manufacturing process.  In the case of placebo surgeries (sham surgeries, which have been successfully performed), if surgeons simply have to make a far less complicated incision or two, along with meeting some other protocol and ambiance requirements to produce the desired effect, then the costs of the far simpler operation are drastically reduced, as are the chances of malpractice or other long-term complications resulting from the surgery.

One thing that will have to be considered as we start making greater use of these placebo treatments (specifically “drugs” and surgeries) is how the pricing and costs associated with placebo options will be decided for a patient.  Will placebos be as expensive as normal drugs (if they are comparably effective), or will the decreased cost of placebo manufacturing offset or reduce the cost of other drugs so that the average trip to the doctor becomes cheaper for everyone?  As always, we will likely have to continue battling with pharmaceutical companies and/or medical practitioners that take advantage of the cost savings just to increase their own profit margins while passing no savings on to the patient.  In the grander scheme of things, having healthier people at the same cost we currently pay is still better, but nevertheless, we can only hope that these discoveries will continue to reduce costs for the patient as well.

As for other types of placebos, such as meditation, prayer, various rituals, etc., since these do not require any physical mediums or materials per se (or at least not in many cases), they are relatively inexpensive, if not entirely free in some cases.  Modern medicine, however, is also making more use of similar placebo methods, that is, placebos that require neither the intake of a chemical nor any surgery.  In fact, one can certainly argue that modern medicine has been utilizing these “non-material” placebos for quite some time already.  For example, various psychological treatments have been used to help heal people with all kinds of ailments, many of which are psychosomatically induced, and all of which can be exacerbated by hypochondria, pessimism, and other causes of stress, with the placebo effect being a likely contributing factor in many of these successful treatments.  These kinds of treatments could very well be applied in many (if not all) other cases that aren’t currently considered “psychological” ailments.  Putting this all together, I think we are going to see a shift in medicine where psychology, cognitive science, and neuroscience combine with modern “material” medicine to form a more obvious hybrid.  This integration will be significant, as many medical practitioners and schools of thought within medicine currently maintain a large divide between what are believed to be either psychological or physical ailments.  In reality, every ailment appears to be a combination of the two, and can be treated more effectively when both aspects are addressed rather than merely one or the other.  After all, the brain and the rest of the body operate as a single unit, and so they should be treated as such.

Another positive discovery relating to the placebo effect is that even when people know they are being given a placebo, it remains effective, as long as they believe that the placebo is effective.  A common concern is that the placebo effect will lose all efficacy if patients are informed that they are receiving a placebo; it turns out that this isn’t the case at all.  To give an example, in a recent study at Harvard Medical School, people with irritable bowel syndrome were given a placebo and were informed that the pills were “made of an inert substance, like sugar pills, that have been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes.”  The researchers found that despite being aware that they were taking placebos, the participants rated their symptoms as “moderately improved” on average.  This seems to imply that it is the belief in the efficacy of a particular treatment that houses the placebo’s potential for healing, regardless of the substrate used to implement it, and it resolves many of the ethical complications regarding the disclosure to a patient that they are receiving a placebo.

If the belief in the efficacy of a placebo is the primary factor in the placebo’s effectiveness, this would imply that a stronger belief will produce a stronger placebo effect.  So it appears that one of the major challenges in producing ever more effective treatments within this domain is going to be finding ways to increase the belief in a treatment’s effectiveness (along with other psychologically beneficial factors, including being generally optimistic, reducing stress, and other factors that haven’t yet been discovered).  It may even be possible one day for this “belief” maximization (or the neurological effect that it causes) to be accomplished by physically altering the brain through various types of electrical stimulation or other neurologically based treatments, so that a person wouldn’t need to be convinced of anything at all.  On a related note, we also need to consider that the opposite effect, the “nocebo” effect, exists as well, presumably for the same psychological reasons.  That is, if a person believes that something will harm them or that they are getting sick, even if there is no actual pathogen or physical medium to produce the illness, they can actually become sick and make things much worse.  For a powerful example, in one chemotherapy study, some participants received only a placebo, yet they still developed nausea and had their hair fall out (alopecia) because of their expectations of the treatment.  So the placebo effect works both ways, and this means that medical treatment will also likely change with regard to finding new ways of countering the “bad news” of a diagnosis, possibly through the same kinds of psychological and neurological techniques.

Nobody is sure exactly when the placebo effect was first discovered, but it was likely unknowingly discovered many thousands of years ago in various cultures with particular religious beliefs, including those that involved prayer and faith healing, shamans and other medicine men, etc.  Without seeing any material cause or knowing what was producing its efficacy (i.e. dynamics in the brain), people no doubt chalked up many positive effects to the supernatural, whether by the interventions of some god or a number of gods, magic, etc.  So this appears to be one of many examples of how natural selection favors not only certain genes, but also certain memes.  If people began to spread certain religious memes (ideas) that promoted self-healing by utilizing the power of belief, they would be more likely to survive, and this would be yet another factor in explaining why religions formed, why they’ve been as ubiquitous as they have, and why they’ve propagated for so long throughout human history.  To be sure, it could have been the case that humans long ago experienced the placebo effect (without knowing it) and this led to the development of certain religious rituals or beliefs (because the cause was misattributed or unknown), or it could be that certain religious beliefs formed for other reasons happened to produce a placebo effect (and/or to strengthen it via other psychological factors).  Either way, it was a very valuable discovery indeed.  Now that we are starting to better understand the real physiological, neurological, and psychological factors that produce the placebo (and nocebo) effect, we can continue advancing a scientific worldview and continue to increase our well-being in the process.

Christianity: A Theological & Moral Critique


Previously I’ve written several posts concerning religion, including many of the contributing factors that led to the development and perpetuation of religion (among them, our cognitive biases and other psychological driving forces) and the various religious ideas contained within.  I’ve also written a little about some of the most common theological arguments for God, as well as the origin and apparent evolution of human morality.  As a former Christian, I’ve been particularly interested in the religion of Christianity (or, to be more accurate, the various Christianities that have existed throughout the last two millennia), and as a result I’ve previously written a few posts relevant to that topic, including some pertaining to historical-critical analyses.  In this post, I’d like to elaborate on some of the underlying principles and characteristics of Christianity, although some of these characteristics will also apply to the other Abrahamic religions, and to non-Abrahamic religions as well.  Specifically, I’d like to examine some of the most central religious tenets and elements of Christian theology, with regard to their resulting philosophical (including logical and ethical) implications and problems.

What Do Christians Believe?

There are a number of basic beliefs that all Christians seem to have in common.  They believe in a God that is all-loving, omnibenevolent, omniscient, omnipotent, and omnipresent (yet somehow transcendent of an actual physical/materialistic omnipresence).  This is a God that they must love, a God that they must worship, and a God that they must also fear.  They believe that their God has bestowed upon them a set of morals and rules that they must follow, although many of the rules originating from their religious predecessor, Judaism (as found in the Old Testament of the Bible), are often ignored and/or seen as superseded by a “New Covenant” created through their messiah, believed to be the son of God, namely Jesus Christ.  Furthermore, the sacrifice of their purported messiah is believed to have saved mankind from original sin (discussed further later in this post), and this belief in Jesus Christ as the spiritual savior of mankind is believed by Christians to confer upon them an eternal life in heaven after they die.  Whichever list of rules is accepted by any particular sect or denomination of Christianity, along with its own unique interpretation of those rules (and the rest of its scripture, for that matter), those rules and scriptural interpretations are to be followed without question as a total solution for how to conduct oneself and live one’s life.  If they do so, they will be granted a reward of eternal paradise in heaven.  If they do not, they will suffer the wrath of God and be punished severely by an eternity of torture in hell.  Let’s examine some of these specific attributes of the Christian God.

This God is supposedly all-loving and omnibenevolent.  If we simply look at the Old Testament of the Bible, we can see numerous instances of this God implementing, ordaining, or condoning: theft, rape, slavery (and the beating of slaves), sexism, sexual-orientation discrimination (including the murder of homosexuals), child abuse, filicide, murder, genocide, cannibalism, and, one of the most noteworthy, vicarious redemption, though this last example may not be as obviously immoral as the rest, which is why I mention it in more detail later in this post.  Granted, some of these acts were punishments for disobedience, but this is hardly an excuse worth defending at all, let alone on any moral grounds.  Furthermore, many of the people harmed in this way were innocent, some of them children, who had no responsibility for what their parents did, nor for the society they were brought up in and the values bestowed upon them therein.

Most Christians that are aware of these morally reprehensible actions make excuses for them, including: the need to examine those actions within the cultural or historical “context” in which they occurred (thus implying that their God’s morals aren’t objective or culturally independent); the claim that whatever their God does is considered moral and good by definition (which either fails to address the Euthyphro Dilemma or fails to meet a non-arbitrary or rational standard of goodness); and/or the claim that some or all of the violent and horrible things commanded in the Old Testament were eventually superseded or nullified by a “New Covenant” with a new set of morals.  In any case, we mustn’t forget about the eternal punishment for those that do not follow God’s wishes.  Does a God that threatens his most prized creation with eternal torture (the worst fate imaginable), with no chance of defense or forgiveness after death, really possess omnibenevolence and an all-loving nature?  Some people may have a relatively easy life where circumstances have readily encouraged living in line with Christianity, but many are not afforded those circumstances, and thus there is no absolute fairness or equality for all humans in “making this choice”.

Another point worth considering is why the Christian God didn’t simply skip creating the physical world altogether in the first place.  Didn’t God have the power to have all humans (or all conscious creatures, for that matter) exist in heaven without first having to live through any possible suffering on Earth?  Though life is worth living for many, there has been a great deal of suffering for many conscious creatures, and regardless of how good one’s life is on Earth, it could never compare to existence in heaven (according to Christians).  There is no feasible or coherent reason to explain why God didn’t do this if he is truly omnipotent and omnibenevolent.  It appears that this is either an example of something God didn’t have the power to do, which is an argument against his omnipotence (see the next section for more on this attribute), or something God was able to do but didn’t, which is an argument against his omnibenevolence.  Christians can’t have it both ways.

Christians must employ a number of mental-gymnastic tricks in order to reconcile all of these circumstances with the cherished idea that their God is nevertheless all-loving and omnibenevolent.  I used to share this view as well, though now I see things quite differently.  What I see in these texts is exactly what I would expect to find in a religion invented by human beings (a “higher” evolved primate) living in a primitive, patriarchal, and relatively uneducated culture in the Middle East.  Most noteworthy, however, is the fact that we see their morals evolve along with other aspects of their culture over time, just as we see all cultures and their morals evolve and change over time in response to various cultural driving forces, whether those are the interests of those in power or those seeking power, and/or the ratcheting effect of accumulating knowledge of the consequences of our actions, accompanied by centuries of ethical and other philosophical discourse and deep contemplation.  Man was not made in God’s image; clearly, God was made in man’s image.  This easily explains why this God often acts sexist, petty, violent, callous, narcissistic, selfish, jealous, and overwhelmingly egotistical, and it also explains why this God changes over time into one that begins to promote at least some of the fruits gained in philosophy, as well as some level of altruism and love.  After all, these are all characteristics of human beings, not only as we venture from a morally immature childhood to a more morally mature adulthood, but also as we’ve morally evolved over time, both biologically and culturally.
Omniscience and Omnipotence
Next, I mentioned that Christians believe their God to be omniscient and omnipotent.  First of all, these are mutually exclusive properties.  If their God is omniscient, this generally is taken to mean that he knows everything there is to know, including the future.  Not only does he know the future of all time within our universe, this god likely knows the future of all it’s actions and intentions.  If this is the case, then we have several problems posed for Christian theology.  If God knows the future of his own actions, then he is unable to change them, and therefore fails to be omnipotent.  Christians may argue that God only desires to do what he does, and therefore has no need to change his mind.  Nevertheless, he is still entirely unable to do so, even if he never desires to do so.  I’ll also point out that in order for God to know what it feels like to sin, including thinking blasphemous thoughts about himself, he ceases to remain morally pure.  Obviously a Christian can use the same types of arguments that are used to condone God’s heinous actions in the Bible (specifically the Old Testament), namely that whatever God does is good and morally perfect, although we can see the double-standard here quite clearly.
The second and perhaps more major problem for Christian theology regarding the attribute of omniscience is the problem of free will and the entire biblical narrative.  If God knows exactly what is going to happen before it does, then the biblical narrative is basically just a story made up by God, and all of history has been like a sort of cosmic or divinely created “movie” that is merely being played out and couldn’t have happened any other way.  If God knows everything that is going to happen, then he alone is the only one that could change such a fate.  However, once again, he is unable to do so if he knows what he is going to do before he does it.  God, in this case, must have knowingly created the Devil and all of the evil in the world.  God knows who on Earth is going to heaven and who is going to hell, and thus our belief or disbelief in God or our level of obedience to God is pre-determined before we are even born.  Overall, we can have no free will if God is omniscient and knows what we will do before we do it.
My worldview, which is consistent with science, already negates free will, as free will can’t exist with the laws of physics governing everything as they do (regardless of any quantum randomness).  So my view regarding free will is actually coherent with the idea of a God that knows the future, and this isn’t a problem for me.  It is, however, a problem for Christians, because they want to have their proverbial cake and eat it too.  To add to this dilemma, even if God were not omniscient, that still wouldn’t negate the fact that the only two logical possibilities regarding the future are that it is either predetermined or random (even if God doesn’t know that future).  In either logical possibility, humans still couldn’t have free will, and thus the entire biblical narrative, and the entire religion for that matter, are invalid regardless of the problem of omniscience.  The only way for humans to have free will is if two requirements are met.  First, God couldn’t have omniscience, for the logically necessary reasons already mentioned, and second, humans would have to possess the physically and logically impossible property of self-caused actions and behaviors — where our intentional actions and behaviors would have to be free of any prior causes contributing to said intentions (i.e. our intentions couldn’t be caused by our brain chemistry, our genes, our upbringing and environment, the laws of physics which govern all of these processes, etc.).  Thus, unless we concede that God isn’t omniscient, and that humans possess the impossible ability of causa sui intentions, then all of history, beginning with the supposed “Fall of Man” in the Garden of Eden, would have either been predetermined or would have resulted from random causes.
This entails that we would all be receiving a punishment due to an original sin that either God himself instantiated with his own deterministic physical laws, or that was instantiated by random physical laws that God instantiated (even if they appear to be random to God as well) which would have likewise been out of our control.
The Christian God is also described as being omnipresent.  What exactly could this mean?  It certainly doesn’t mean that God is physically omnipresent in any natural way that we can detect.  Rather it seems to mean that God’s omnipresence is almost always invisible to us (though not always, e.g., the burning bush), thus transcending the physical realm with an assumption of a supernatural or metaphysical realm outside of the physical universe, yet somehow able to intervene or act within it.  This to me seems like a contradiction in terms as well, since the attribute of omnipresence implied by Christians doesn’t seem to include the natural realm (at least not all the time), but only a transcendent type (all the time).  Christians may argue that God is omnipresent in the natural world; however, this defense could only work by changing the definition of “natural” to mean something other than the universe that we can universally and unmistakably detect, and therefore the argument falls short.  However, I only see this as a minor problem for Christian theology, and since it isn’t as central or as important a precept as the others, I won’t discuss it further.
Love, Worship, and Fear
Though Christians may say that they don’t need to be forced to love or worship God because they do so willingly, let’s examine the situation here.  The Bible instructs Christians to fear God in dozens of verses throughout.  In fact, Deuteronomy 10:12 reads: “And now, Israel, what does the Lord your God require of you, but to fear the Lord your God, to walk in all his ways, to love him, to serve the Lord your God with all your heart and with all your soul.”
It’s difficult for me to avoid noticing how this required love, servitude, worship, and fear of God resembles the effective requirements of some kind of celestial dictator, one that implements a totalitarian ideology with explicit moral codes and a total solution for how one is to conduct oneself throughout one’s entire life.  This dictator is similar to the one found in Orwell’s Nineteen Eighty-Four, even sharing common principles such as “thought crimes,” which a person can never hide given an omniscient God (in the Christian view) that gives them no such privacy.  As with the Orwellian dystopia, this totalitarian attempt to mold us largely works against our human nature (in many ways at least), and we can see the consequences of this clash throughout history, whether relating to our innate predisposition for maintaining certain human rights and freedoms (including various forms of individuality and free expression), to our human sexuality, or to other powerful aspects of who we are as a species given our evolutionary history.
If you were to put many of these attributes of God into a person on Earth, we would no doubt see that person as a dictator, and we would no doubt see the total solution implemented as nothing short of totalitarian.  Is there any escape from this totalitarian implementation?  In the Christian view, you can’t ever escape (though they’d never use the word “escape”) from the authority of this God, even after you die.  In fact, it is after a person dies that the fun really begins, with a fate of either eternal torture or eternal paradise (with the latter only attainable if you’ve met the arbitrary obligations of this God).  While Christians’ views of God may differ markedly from the perspective I’ve described here, so would the perspective of a slave that has been brainwashed by their master through the use of fear among other psychological motivations (whether or not the master is conscious of their efficacy).  They would likely not see themselves as a slave at all, even though an outsider looking at them would make no mistake in making such an assertion.
Vicarious Redemption
Another controversial concept within Christianity is that of vicarious redemption or “substitutionary atonement”.  There are a number of Christian models that have been formulated over the years to interpret the ultimate meaning of this doctrine, but they all involve some aspect of Jesus Christ undergoing a passion, crucifixion, and ultimately death in order to save mankind from their sins.  This was necessary in the Christian view because after the “Fall of Man” beginning with Adam and Eve in the Garden of Eden (after they were deceived and tempted to disobey God by a talking snake), all of humanity was supposedly doomed to both physical and spiritual death.  Thankfully, say the Christians, in his grace and mercy, God provided a way out of this dilemma, specifically, the shedding of blood from his perfect son.  So through a human sacrifice, mankind is saved.  This concept seems to have started with ancient Judaic law, specifically within the Law of Moses, where God’s chosen people (the Jews) could pay for their sins or become “right in God’s eyes” through an atonement accomplished through animal sacrifice.
Looking in the Old Testament of the Bible at the book of Leviticus (16:1-34), we see where this vicarious redemption or substitutionary atonement began, which I will now paraphrase.  Starting with Moses (another likely mythological being according to many scholars), we read that God told him that his brother Aaron (who recently had two of his own sons die when they “drew too close to the presence of the Lord”) could only enter the shrine if he bathed, wore certain garments, and then brought with him a bull for a sin offering and a ram for a burnt offering.  He was also to take two goats from the Israelite community to make expiation for himself and for his household.  Then Aaron was to take the two goats and let them stand before the Lord at the entrance of the “Tent of Meeting” and place lots upon the goats (i.e. to randomly determine each goat’s fate in the ritual), one marked for the Lord and the other marked for “Azazel” (i.e. marked as a “scapegoat”).  He was to bring forward the goat designated by lot for the Lord, which he was to offer as a sin offering, while the goat designated by lot for “Azazel” was to be left standing alive before the Lord, to make expiation with it and to send it off to the wilderness to die.  Then he was to offer his “bull of sin offering” by slaughtering it, followed by taking some of the blood of the bull and sprinkling it with his finger several times.  Then he was to slaughter the “people’s goat of sin offering”, and do the same thing with the goat’s blood as was done with the bull’s.  Then a bit later he was to take some more blood of the bull and of the goat and apply it to each of the horns of the altar and then sprinkle the rest of the blood with his finger seven times (this was meant to “purge the shrine of uncleanness”).
Afterward, the live goat was to be brought forward and Aaron was to lay his hands upon the head of the goat and confess over it all the iniquities and transgressions of the Israelites, whatever their sins, which was meant to put those sins on the head of the goat.  Then the goat was to be sent off into the desert to die.  Then Aaron was to offer his burnt offering and the burnt offering of the people, making expiation for himself as well as for the people.  The fat of the sin offering was to be “turned into smoke” on the altar.  Then the “bull of sin offering” and “goat of sin offering” (whose blood was brought in to purge the shrine) were to be removed from their camp, and their hides, flesh, and dung were to be consumed in a fire.  And this became a law for the Israelites to atone for their sins, by performing this ritual once a year.
Animal sacrifice has been a long-practiced ritual in most religions (at one time or another), where it has primarily been used as a form of blood magic to appease the gods of that particular culture and religion, and it has in fact been found in the history of almost all cultures throughout the world.  So it’s not surprising in the sense that this barbaric practice had a precedent and was nearly ubiquitous across cultures; however, it was most prevalent in the primitive cultures of the past.  Not surprisingly, these primitive cultures had far less knowledge about the world available to comprise their worldview.  As a result, they invoked the supernatural and a number of incredibly odd rituals and behaviors.  We can see some obviously questionable ethics involved in this type of practice: an innocent animal suffers and/or dies in order to compensate for the guilty animal’s transgressions.  Does this sound like the actions of a morally “good” God?  Does this sound like a moral philosophy that involves personal responsibility, altruism, and love?  I’ll leave that for the reader to decide, but I think the answer is quite clear.
It didn’t stop there though, unfortunately.  This requirement of God was apparently only temporary, and eventually a human sacrifice was needed; this came to fruition in the New Testament of the Christian Bible with the stories and myths of a Jewish man (a mix between an apocalyptic itinerant rabbi and a demigod) named Jesus Christ (Joshua/Yeshua) — a man whom Christians claim was their messiah and the son of God (and “born of a virgin”, as most mythic heroes were), and whom most Christians claim was also God (despite the obvious logical contradiction of this all-human/all-divine duality, which is amplified further in the Trinitarian doctrine).  Along with this new vicarious redemption sacrifice came the creation of a “New Covenant” — a new relationship between humans and God mediated by Jesus Christ.  It should be noted that the earliest manuscripts in the New Testament actually suggest that Jesus was likely originally believed to be a sort of cosmic archangel solely communicating to apostles through divine revelation, dreams, visions, and through hidden messages in the scripture.  Then it appears that Jesus was euhemerized (i.e. placed into stories set in history) later on, most notably in the allegorical and other likely fictions found within the Gospels — although most Christians are unaware of this, as the specific brands of Christianity that have survived to this day mutually assume a historical Jesus.  For more information regarding recent scholarship pertaining to this, please read this previous post.
Generally, Christians claim that this “New Covenant” was instituted at the “Last Supper” as part of the “Eucharist”.  To digress briefly, the Eucharist is a ritual considered by most Christians to be a sacrament.  During this ritual, Jesus is claimed to have given his disciples bread and wine, asking them to “do this in memory of me,” while referring to the bread as his “body” and the wine as his “blood” (many Christians think that this was exclusively symbolic).  Some Christians (Catholics, Orthodox, and members of The Church of the East) however believe in transubstantiation, where the bread and wine that they are about to eat literally become the body and blood of Jesus Christ.  So to add to the aforementioned controversial religious precepts, we have some form of pseudo or quasi-cannibalism of the purported savior of mankind, the Christian god (along with the allegorical content or intentions, e.g., eating Jesus on the 15th day of the month as Jews would normally have done with the Passover lamb).  The practice of the Eucharist had a precedent in other Hellenistic mystery religions, in which members would hold feasts/ceremonies where they would symbolically eat the flesh and drink the blood of their god(s) as well, to confer eternal life from their (often) “dying-and-rising” savior gods.  So just like the animal sacrifice mentioned earlier, practices that were the same as or incredibly similar to the Eucharist (including the reward of eternal life) arose prior to Christianity and were likely influential during Christianity’s origin and development.  Overall, Christianity is basically just a syncretism between Judaism and Hellenism anyway, which explains the large number of similarities and overlap between the two belief systems, as these cultures were able to share, mix, and modify these religious ideas over time.
Returning to the vicarious redemption: this passion — this incredible agony, suffering, and eventual death of Jesus Christ, along with the blood he spilled — was, we hear, to reverse humanity’s spiritual death (and forgive us for our sins) once and for all, with no more yearly atonement needed.  This was, after all, the “perfect” sacrifice, and according to Christians, this fate of their supposed messiah was prophesied in their scriptural texts, and thus was inevitable.  So what exactly are we to make of this vicarious redemption through human sacrifice that God required to happen?  Or the pseudo-cannibalism in the Eucharist for that matter?  There are certainly some good symbolic intentions and implications regarding the idea of a person selflessly performing an altruistic act to save others, and I completely recognize that fact as well.  However, it is something else entirely to assume that one’s transgressions can be borne by another person so that the transgressions are “made clean” or nullified or “canceled out” in some way, let alone through the act of torturing and executing a human being.  In the modern world, if we saw such a thing, we would be morally obligated to stop it, not only to prevent that human being from suffering needlessly, but also to give that person (if they are willingly submitting themselves to the torture and execution) the proper mental health resources to protect themselves and hopefully to repair whatever psychological ailment caused such a lapse in sanity in the first place.
As for the Eucharist, there’s definitely something to be said about the idea of literally or symbolically eating another human being.  While a symbolic version is, generally speaking, significantly less controversial, the idea altogether seems to be yet another form of barbaric blood magic (just like the crucifixion, and the non-human animal sacrifice that preceded it).  However, the act of remembrance of Jesus via the Eucharist is an admirable intention and allegory for becoming one with Jesus, and if the food and wine were seen to represent his teachings and message only (and explicitly so), then it wouldn’t be nearly as controversial.  However, given that there is a very obvious intention (in the Gospel according to Mark, for example, 14:16-24) to eat Jesus in place of the Passover lamb, the Eucharist is at the very least a controversial allegory, and if it isn’t supposed to be allegorical (or entirely allegorical), this would explain why the belief in transubstantiation (held by many Christians) is as important as it is.
There are two final things I’d like to mention regarding Jesus Christ’s role in this vicarious redemption.  Let’s assume for the moment that this act isn’t immoral, and view it from a Christian perspective (however irrational that may be).  For one, Jesus knew perfectly well that he was going to be resurrected afterward, and so ultimately he didn’t die the same way you or I would die if we had made the same sacrifice, for we would die forever.  He knew that his death would only be temporary and physical (though many Christians think he was physically as opposed to spiritually resurrected).  If I knew that I would not be resurrected, that would be a much more noble sacrifice, for I would be giving up my life permanently.  Furthermore, with knowledge of the afterlife, and if I accept that heaven and hell exist, then an even greater sacrifice would be for Jesus to die and go to hell for all eternity, as that would be the greatest possible self-sacrifice imaginable.  However, this wasn’t what happened.  Instead, Jesus suffered (horribly no less), and then died, but only for three days, and then he basically became invincible and impervious to any further pain or suffering.  Regardless of how Christians would respond to this point, the fact remains that a more noble sacrifice was possible, but didn’t occur.  The second and last point I’ll mention is the promise offered from the action — that true believers are now granted an eternity in heaven.  One little issue here for Christians is that they don’t know for certain whether or not God would keep his end of the bargain.  God can do whatever he wants, and whatever he does is morally good in the Christian view — even if that means that he changes his mind and reneges on his promise.
If Christians argue that God would never do this, they are making the assumption that they know with 100% certainty what God will (or will not) do, and this is outside of their available knowledge (again according to the Christian view of not knowing what God is thinking or what he will do).  Just as the rest of the religion goes, the truth of the promise of eternal life itself is based on faith, and believers may end up in hell anyway (it would be up to God either way).  Furthermore, even if all believers did go to heaven, could they not rebel just as Lucifer did when he was an angel in heaven?  Once again, Christians may deny this, but there’s nothing in their scriptures to suggest that this is impossible, especially given the precedent of God’s highest angel doing so in the past.
Final Thoughts
I will say that there are a lot of good principles found within the Christian religion, including that of forgiveness and altruism, but there are morally reprehensible things as well, and we should expect that a religion designed by men living long ago carries both barbaric and enlightened human ideas, with more enlightened ideas coming later as the religion co-developed with the culture around it.  Many of the elements we admire (such as altruism and forgiveness for example) exist in large part because they are simply some of the aspects of human nature that evolution favored (since cooperation and social relationships help us to survive many environmental pressures), and this fact also explains why they are seen cross-culturally and do not depend on any particular religion.  Having said that, I will add that on top of our human nature, we also learn new and advantageous ideas over time including those pertaining to morals and ethics, and it is philosophical contemplation and discourse that we owe our thanks to, not any particular religion, even if they endorse those independent ideas.  One of the main problems with religion, especially one such as Christianity, is that it carries with it so many absurd assumptions and beliefs about reality and our existence, that the good philosophical fruits that accompany it are often tainted with dangerous dogma and irrational faith in the unfalsifiable, and this is a serious problem that humanity has been battling for millennia.
I found a quote describing Christianity from a particular perspective that, while offensive to Christians, does shed some light on the overall ludicrous nature of their (and my former) belief system.
Christianity was described as:
“The belief that a cosmic Jewish zombie who was his own father can make you live forever if you symbolically eat his flesh and drink his blood and telepathically tell him you accept him as your master, so he can remove an evil force from your soul that is present in humanity because a rib-woman was convinced by a talking snake to eat from a magical tree”
Honestly, that quote says it all.  Regarding the Adam and Eve myth, most Christians (let alone most people in general) don’t realize that Eve received more of the blame than Adam did (and women supposedly received the punishment of painful childbirth as a result), based on fallacious reasoning from God.  She received more blame than Adam did in that she received a punishment that Adam did not have to endure, and not vice versa (since both men and women would have to endure the punishment meant “for Adam”, that is, difficulty farming and producing food).  It seems that Eve was punished more, presumably because she ate the fruit of the tree of knowledge first (or because she was a woman rather than a man).  However, what is quite clear to me from reading this story is that Eve was deceived by a highly skilled deceiver (the serpent) that was hell-bent on getting her to eat from the tree.  Adam, however, was duped by his wife, a woman made from an insignificant part of his body (his rib), and a woman that was not a highly skilled deceiver as the serpent was.  It seems to me that this gives the expression “Fall of Man” a whole new meaning, as in this case, it seems that women would have become the “head of the household” instead.  Yet what we see is a double standard here, and it appears that the sexist, patriarchal authors illustrated their true colors quite well when they were devising this myth; their motivations for doing so were obvious considering the way they had God create Eve (from an insignificant part of Adam, and for the purpose of keeping him company as a subservient wife), and the way they portray her is clearly meant to promote patriarchal dominance.  This is even further illustrated by the implication that God is a male, referred to as “He”, etc., despite the fact that all living animals on Earth are born from a “life producing” or “life creating” female.  It’s nothing but icing on the cake, I guess.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 3 of 3)


This is part 3 of 3 of this post.  Click here to read part 2.

Reactive Devaluation

Studies have shown that if a claim is believed to have come from a friend or ally, the person receiving it will be more likely to think that it has merit and is truthful.  Likewise, if it is believed to have come from an enemy or some member of the opposition, it is significantly devalued.  This bias is known as reactive devaluation.  A few psychologists actually determined that the likeability of the source of a message is one of the key factors involved in this effect, and people will actually value information coming from a likeable source more than from a source they don’t like, to roughly the same degree that they would value information coming from an expert versus a non-expert.  This is quite troubling.  We’ve often heard about how political campaigns and certain debates are seen more as popularity contests than anything else, and this bias is likely a primary reason why this is often the case.

Unfortunately, a large part of society has painted a nasty picture of atheism, skepticism, and the like.  As it turns out, people who do not believe in a god (mainly the Christian god) are the least trusted minority in America, according to a sociological study performed a few years ago (Johanna Olexy and Lee Herring, “Atheists Are Distrusted: Atheists Identified as America’s Most Distrusted Minority, According to Sociological Study,” American Sociological Association News, May 3, 2006).  Regarding the fight against dogmatism, the rational thinker needs to carefully consider how they can improve their likeability to the recipients of their message, if nothing else by going above and beyond to remain kind and respectful and to maintain a positive and uplifting attitude while presenting their arguments.  In any case, the person arguing will always be up against any societal expectations and preferences that may reduce their likeability, so there’s also a need to remind others as well as ourselves about what is most important when discussing an issue — the message rather than the messenger.

The Backfire Effect & Psychological Reactance

There have already been several cognitive biases mentioned that interfere with people’s abilities to accept new evidence that contradicts their beliefs.  To make matters worse, psychologists have discovered that people often react to disconfirming evidence by actually strengthening their beliefs.  This is referred to as the backfire effect.  This can make refutations futile most of the time, but there are some strategies that help to combat this bias.  The bias is largely exacerbated by a person’s emotional involvement and the fact that people don’t want to appear to be unintelligent or incorrect in front of others.  If the debating parties let tempers cool down before resuming the debate, this can be quite helpful regarding the emotional element.  Additionally, if a person can show their opponent how accepting the disconfirming evidence will benefit them, they’ll be more likely to accept it.  There are times when some of the disconfirming evidence mentioned is at least partially absorbed by the person, and they just need some more time in a less tense environment to think things over a bit.  If the situation gets too heated, or if a person risks losing face with peers around them, the ability to persuade that person only decreases.

Another similar cognitive bias is referred to as reactance, which is basically a motivational reaction to instances of perceived infringement of behavioral freedoms.  That is, if a person feels that someone is taking away their choices or limiting the number of them, such as in cases where they are being heavily pressured into accepting a different point of view, they will often respond defensively.  They may feel that freedoms are being taken away from them, and it is only natural for a person to defend whatever freedoms they see themselves having or potentially losing.  Whenever one uses reverse psychology, they are in fact playing on at least an implicit knowledge of this reactance effect.  Does this mean that rational thinkers should use reverse psychology to persuade dogmatists to accept reason and evidence?  I personally don’t think that this is the right approach, because it could also backfire, and I would prefer to be honest and straightforward, even if this results in less efficacy.  At the very least, however, one needs to be aware of this bias and be careful with how they phrase their arguments and present the evidence, maintaining a calm and respectful attitude during these discussions so that the other person doesn’t feel the need to defend themselves at the cost of ignoring evidence.

Pareidolia & Other Perceptual Illusions

Have you ever seen faces of animals or people while looking at clouds, shadows, or other similar phenomena?  The brain has many pattern-recognition modules and will often respond erroneously to vague or even random stimuli, such that we perceive something familiar even when it isn’t actually there.  This is called pareidolia, and it is a type of apophenia (i.e. seeing patterns in random data).  Not surprisingly, many people have reported instances of seeing various religious imagery, most notably the faces of prominent religious figures, in ordinary phenomena.  People have also erroneously heard “hidden messages” in records played in reverse, and other similar illusory perceptions.  This results from the fact that the brain often fills in gaps: if one expects to see a pattern (even if unconsciously, driven by some emotional significance), the brain’s pattern-recognition modules can “over-detect” and produce false positives through excessive sensitivity, conflating unconscious imagery with one’s actual sensory experiences and leading to distorted perceptions.  It goes without saying that this cognitive bias is perfect for reinforcing superstitious or supernatural beliefs, because what people tend to see or experience from this effect are those things that are most emotionally significant to them.  After it occurs, people believe that what they saw or heard must have been real and therefore significant.

Most people are aware that hallucinations occur in some people from time to time, under certain neurological conditions (generally caused by an imbalance of neurotransmitters).  A bias like pareidolia can effectively produce similar experiences, but without requiring the more specific neurological conditions that a textbook hallucination requires.  That is, the brain doesn’t need to be in any drug-induced state nor does it need to be physically abnormal in any way for this to occur, making it much more common than hallucinations.  Since pareidolia is also compounded by one’s confirmation bias, and since the brain is constantly implementing cognitive dissonance reduction mechanisms in the background (as mentioned earlier), this can also result in groups of people having the same illusory experience, specifically if the group shares the same unconscious emotional motivations for “seeing” or experiencing something in particular.  These circumstances, along with the power of suggestion from even just one member of the group, can lead to what are known as collective hallucinations.  Since collective hallucinations like these inevitably lead to a feeling of corroboration and confirmation between the various people experiencing them (say, seeing a “spirit” or “ghost” of someone significant), they can reinforce one another’s beliefs, despite the fact that the experience was based on a cognitive illusion largely induced by powerful human emotions.  It’s likely no coincidence that this type of group phenomenon seems to only occur with members of a religion or cult, specifically those that are in a highly emotional and synchronized psychological state.  There have been many cases of large groups of people from various religions around the world claiming to see some prominent religious figure or other supernatural being, illustrating just how universal this phenomenon is within a highly emotional group context.

Hyperactive Agency Detection

There is a cognitive bias that is somewhat related to the aforementioned perceptual illusion biases which is known as hyperactive agency detection.  This bias refers to our tendency to erroneously assign agency to events that happen without an agent causing them.  Human beings all have a theory of mind that is based in part on detecting agency, where we assume that other people are causal agents in the sense that they have intentions and goals just like we do.  From an evolutionary perspective, we can see that our ability to recognize that others are intentional agents allows us to better predict another person’s behavior which is extremely beneficial to our survival (notably in cases where we suspect others are intending to harm us).

Unfortunately our brain's agency detection is hyperactive in that it errs on the side of caution by over-detecting possible agency rather than under-detecting (which could negatively affect our survival prospects).  As a result, people will often ascribe agency to things and events in nature that have no underlying intentional agent.  For example, people often anthropomorphize machines (implicitly assigning agency to them) and get angry when they don't work properly (such as an automobile that breaks down on the way to work), often with the feeling that the machine is "out to get us" or is "evil".  Most of the time, feelings like this are immediately checked by our rational minds, which remind us that "it's only a car" or "it's not conscious".  Similarly, when natural disasters and other such events occur, our hyperactive agency detection snaps into gear, looking for "someone to blame".  This cognitive bias explains quite well, at least as a contributing factor, why so many ancient cultures assumed there were gods causing earthquakes, lightning, thunder, volcanoes, hurricanes, etc.  Quite simply, they needed to ascribe the events to someone.

Likewise, if we are walking in a forest and hear strange sounds in the bushes, we may assume that some intentional agent is present (whether another person or another animal), even though it may simply be the wind causing the noise.  Once again we can see the evolutionary benefit of this agency detection, even with the occasional "false positive": the noise in the bushes could very well be a predator or other source of danger, so it is usually better for our brains to assume that the noise comes from an intentional agent.  It's also entirely possible that even in cases where the source of a sound was determined to be the wind, early cultures ascribed an intentional stance to the wind itself (e.g. a "wind god").  In all of these cases, we can see how our hyperactive agency detection can lead to irrational, theistic, and other supernatural belief systems, and we need to be aware of this bias when evaluating our intuitions about how the world works.
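The evolutionary logic of over-detection can be made concrete with a toy expected-cost comparison.  All of the numbers below are hypothetical, chosen only to illustrate the asymmetry between a false alarm and a missed threat:

```python
# Hypothetical signal-detection sketch: when a missed predator costs far more
# than a false alarm, the policy "assume there's an agent" has the lower
# expected cost even though it is usually wrong.

p_agent = 0.05            # prior probability the noise is a real agent
cost_false_alarm = 1.0    # fleeing from wind: a little wasted effort
cost_miss = 100.0         # ignoring a real predator: potentially fatal

# Expected cost of each blanket policy
cost_assume_agent = (1 - p_agent) * cost_false_alarm   # pay for false alarms
cost_assume_nothing = p_agent * cost_miss              # pay for missed agents

print(cost_assume_agent, cost_assume_nothing)
```

Even though an agent is present only 5% of the time in this toy setup, the cautious "assume agent" policy has the lower expected cost; a brain tuned this way will systematically "see" agents where there are none.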

Risk Compensation

It is only natural that the more a person feels protected from various harms, the less carefully they tend to behave, and vice versa.  This tendency, sometimes referred to as risk compensation, is usually harmless because, generally speaking, someone who is well protected from harm really does have less to worry about than someone who is not, and can take greater relative risks as a result of that larger cushion of safety.  Where this tends to cause problems, however, is in cases where the perceived level of protection is far higher than the actual level.  The worst-case scenario that comes to mind is a person who believes that nothing bad can happen to them because of some supernatural source of protection, perhaps even one they believe provides the greatest protection possible.  In this scenario, their risk compensation is biased to a negative extreme, and they will be more likely to behave irresponsibly because they don't have realistic expectations of the consequences of their actions.  In a post-enlightenment society, we've acquired a higher responsibility to heed the knowledge gained from scientific discoveries so that we can behave more rationally as we learn more about the consequences of our actions.

It's not at all difficult to see how the effects of various religious dogmas on one's level of risk compensation can inhibit a society from making rational, responsible decisions.  We have several serious issues plaguing modern society, including those related to environmental sustainability, climate change, pollution, and war.  If believers in the supernatural have an attitude that everything is "in God's hands", or that "everything will be okay" because of their belief in a protective god, they are far less likely to take these kinds of issues seriously, because they fail to acknowledge the obvious harm to everyone in society.  What's worse, dogmatists are not only reducing their own safety, but also the safety of their children and of everyone else in society who is trying to behave rationally with realistic expectations.  If we had two planets, one for rational thinkers and one for dogmatists, this would no longer be everyone's problem.  The reality is that we share one planet, and thus we all suffer the consequences of any one person's irresponsible actions.  If we want to make more rational decisions, especially those related to the safety of our planet's environment, society, and posterity, people need to adjust their risk compensation so that it is based on empirical evidence and a rational, proven scientific methodology.  This clearly illustrates the need to phase out supernatural beliefs, since false beliefs that don't correspond with reality can be dangerous to everybody, regardless of who carries them.  In any case, when one is faced with the choice between knowledge and ignorance, history has demonstrated which choice is best.

Final thoughts

The fact that we are living in an information age with increasing global connectivity, and in a society that is becoming increasingly reliant on technology, will likely help to combat the ill effects of these cognitive biases.  These factors are making it increasingly difficult to hide oneself from the evidence and from the proven efficacy of using the scientific method to form accurate beliefs about the world around us.  Societal expectations are also changing as a result, and it is becoming less and less socially acceptable to be irrational and dogmatic, or to not take scientific methodology seriously.  In all of these cases, knowing about these cognitive biases will help people to combat them.  Even if we can't eliminate the biases, we can safeguard ourselves by preparing for the ill effects they may produce.  This is where reason and science come into play, providing us with the means to counter our cognitive biases by applying rational skepticism to all of our beliefs, and by using a logical and reliable methodology based on empirical evidence to arrive at beliefs we can be justifiably confident in (though never certain of).  As Darwin once said, "Ignorance more frequently begets confidence than does knowledge; it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science."  Science has been steadily ratcheting our way toward a better understanding of the universe we live in, even though the results we obtain are often at odds with our intuitions about how the world is.  Only relatively recently has science started to tell us why our intuitions are so often at odds with the scientific evidence.

We've made huge leaps within neuroscience, cognitive science, and psychology, and these discoveries are giving us the antidote we need to reduce or eliminate irrational and dogmatic thinking.  Furthermore, those who study logic have an additional advantage: they are more likely to spot fallacious arguments used to support this or that belief, and thus less likely to be tricked or persuaded by arguments that appear sound and strong but actually aren't.  People fall prey to fallacious arguments all the time, mostly because they haven't learned how to spot them (which courses in logic and reasoning can help to mitigate).  Thus, overall I believe it is imperative that we integrate knowledge of our cognitive biases into our children's educational curriculum, starting at a young age (in a format that is understandable and engaging, of course).  My hope is that if we integrate cognitive education (and complementary courses in logic and reasoning) into our foundational curriculum, we will see a cascade of positive benefits that will guide our society more safely into the future.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 2 of 3)


This is part 2 of 3 of this post.  Click here to read part 1.

The Feeling of Certainty & The Overconfidence Effect

There are a lot of factors that affect how certain we feel that one belief or another is in fact true.  One such factor is what I like to call the time equals truth fallacy: we tend to feel more certain about our beliefs as time progresses, despite not gaining any new evidence to support them.  Since time spent holding a belief is the only factor involved, the bias has greater effects on those who are relatively isolated or sheltered from new information.  So if a person has been indoctrinated with a particular set of beliefs (let alone irrational ones), and they are effectively cut off from any sources of information that could refute those beliefs, it will become increasingly difficult to persuade them later on.  This situation sounds all too familiar, as dogmatic groups often isolate themselves and shelter their members from outside influences for this very reason.  If the group can isolate its members for long enough (or the members actively isolate themselves), the dogma effectively gets burned into them, and the brainwashing is far more successful.  That this cognitive bias has been exploited by various dogmatic and religious leaders throughout history is hardly surprising, considering how effective it is.

Though I haven't researched or come across any proposed evolutionary explanations for this cognitive bias, I do have at least one hypothesis about its origin.  The beliefs that matter most from an evolutionary perspective are those related to our survival goals (e.g. finding food sources or evading a predator).  So any belief that didn't cause noticeable harm to an organism over time wasn't correlated with harm, and was thus more likely to be treated as correct (i.e. keep using whatever works, and don't fix what ain't broke).  However, once we started adding many more beliefs to our repertoire, specifically those that don't directly affect our survival (including many beliefs in the supernatural), the same cognitive rules and heuristics kept being applied as before, even though these new types of beliefs (when false) haven't been naturally selected against, because they haven't been detrimental enough to our survival (at least not yet, or not enough to force a significant evolutionary change).  So once again, evolution may have produced this bias for very advantageous reasons, but it is a sub-optimal heuristic (as always), and one that has become a significant liability now that we evolve culturally as well.

Another related cognitive bias, and one that is quite well established, is the overconfidence effect, whereby a person's subjective feeling of confidence in their judgements or beliefs is predictably higher than the actual objective accuracy of those judgements and beliefs.  In a nutshell, people tend to have far more confidence in their beliefs and decisions than is warranted.  A common example cited to illustrate the intensity of this bias involves people taking certain quizzes: they claim to be "99% certain" about their answers, and then turn out to be wrong about half of the time.  In other cases, the overconfidence effect can be even worse.
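The quiz example amounts to a simple calibration check.  The data below are made up purely to mirror the "99% certain, right about half the time" pattern:

```python
# Toy calibration check for the overconfidence effect, with hypothetical data:
# compare stated confidence with the fraction of answers actually correct.

stated_confidence = 0.99  # the quiz-taker claims "99% certain" on each answer
answers_correct = [True, False, True, False, False, True, False, True]

actual_accuracy = sum(answers_correct) / len(answers_correct)
overconfidence_gap = stated_confidence - actual_accuracy

print(f"stated {stated_confidence:.0%}, actual {actual_accuracy:.0%}, "
      f"gap {overconfidence_gap:.0%}")
```

A well-calibrated person's gap would hover near zero; here it is 49 percentage points.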

In a sense, the feeling of certainty is like an emotion: like our other emotions, it occurs as a natural reflex, regardless of its cause and independently of reason.  Just like anger, pleasure, or fear, the feeling of certainty can be produced by a seizure, by certain drugs, and even by electrical stimulation of certain regions of the brain.  In all of these cases, even when no particular belief is being thought about, the brain can produce a feeling of certainty nevertheless.  Research has shown that the feeling of knowing or certainty can also be induced through various brainwashing and trance-inducing techniques such as high repetition of words, rhythmic music, sleep deprivation or fasting, and other forms of social and emotional manipulation.  It is hardly a coincidence that many religions employ a number of these techniques in their repertoire.

The most important point to take away from these confidence and certainty biases is that the feeling of certainty is not a reliable way of determining the accuracy of one's beliefs.  Furthermore, one must realize that no matter how certain we feel about a particular belief, we could be wrong, and often are.  Dogmatic belief systems such as those found in many religions are often propagated by a misleading and mistaken feeling of certainty, even if that feeling is more potent than any experienced before.  Often, when people ascribing to these dogmatic belief systems are questioned about the lack of empirical evidence supporting their beliefs, even after all of their arguments have been refuted, they simply reply, "I just know."  Irrational and entirely unjustified responses like these illustrate the dire need for people to become aware of just how fallible their feeling of certainty can be.

Escalation of Commitment

If a person has invested their whole life in some belief system, then even when they encounter undeniable evidence that their beliefs are wrong, they are more likely to ignore it or rationalize it away than to modify their beliefs, and thus they will likely continue investing more time and energy in those false beliefs.  This is due to an effect known as escalation of commitment.  Basically, the higher the cumulative investment in a particular course of action, the more likely someone will feel justified in continuing to increase that investment, despite new evidence showing them that they'd be better off abandoning it and cutting their losses.  When it comes to trying to "convert" a dogmatic believer into a more rational free thinker, this irrational tendency severely impedes any chance of success, all the more so when that person has been investing in the dogma for a long time, since they ultimately have much more to lose.  To put it another way, a person's religion is often a huge part of their personal identity, so no matter what undeniable evidence is presented to refute their beliefs, accepting that evidence means abandoning a large part of themselves, which obviously makes that acceptance and intellectual honesty quite difficult to implement.  Furthermore, if a person's family or friends have all invested in the same false beliefs, then even if that person discovers that those beliefs are wrong, they risk being rejected by their family and friends and forever losing the relationships dearest to them.  We can also see how this escalation of commitment is further reinforced by the time equals truth fallacy mentioned earlier.

Negativity Bias

When I've heard various Christians proselytizing to me or to others, one tactic I've seen used over and over again is fear-mongering with theologically based threats of eternal punishment and torture.  If we don't convert, we're told, we're doomed to burn in hell for eternity.  Their incessant use of this tactic suggests that it was likely effective on them as well, contributing to their own conversions (it was in fact one of the reasons for my own former conversion to Christianity).  Similar tactics have been used in some political campaigns to persuade voters by deliberately scaring them into taking one position over another.  Though this strategy is more effective on some than on others, there is an underlying cognitive bias in all of us that contributes to its efficacy: the negativity bias.  With this bias, information or experiences of a more negative nature have a greater effect on our psychological states and resulting behavior than positive information or experiences of equal intensity.

People will remember threats to their well-being a lot more than they remember pleasurable experiences.  This looks like another example of a simple survival strategy implemented in our brains.  Similar to my earlier hypothesis regarding the time equals truth heuristic, it is far more important to remember and avoid dangerous or life-threatening experiences than it is to remember and seek out pleasurable experiences when all else is equal.  It only takes one bad experience to end a person’s life, whereas it is less critical to experience some minimum number of pleasurable experiences.  Therefore, it makes sense as an evolutionary strategy to allocate more cognitive resources and memory for avoiding the dangerous and negative experiences, and a negativity bias helps us to accomplish that.

Unfortunately, just as with the time equals truth bias, since our cultural evolution has involved us adopting certain beliefs that no longer pertain directly to our survival, the heuristic is often being executed improperly or in the wrong context.  This increases the chances that we will fall prey to adopting irrational, dogmatic belief systems when they are presented to us in a way that utilizes fear-mongering and various forms of threats to our well-being.  When it comes to conceiving of an overtly negative threat, can anyone imagine one more significant than the threat of eternal torture?  It is, by definition, supposed to be the worst scenario imaginable, and thus it is the most effective kind of threat to play on our negativity bias, and lead to irrational beliefs.  If the dogmatic believer is also convinced that their god can hear their thoughts, they’re also far less likely to think about their dogmatic beliefs critically, for they have no mental privacy to do so.

This bias in particular reminds me of Blaise Pascal's famous Wager, which asserts that it is better to believe in God than to risk the consequences of not doing so.  If God doesn't exist, we suffer "only" a finite loss (according to Pascal).  If God does exist, then belief leads to an eternal reward, and disbelief leads to an eternal punishment.  Therefore, Pascal asserts, it is only logical to believe in God, since it is the safest position to adopt.  Unfortunately, Pascal's premises are not sound, and so the argument fails.  For one, is belief in God sufficient to avoid the supposed eternal punishment, or does it require more than that, such as other religious tenets, declarations, or rituals?  Second, which god should one believe in?  Thousands of gods have been proposed by various believers over several millennia, and there were obviously many more that history has no record of, so Pascal's Wager merely narrows the choice down to several thousand known gods.  Third, even setting aside the problem of choosing the correct god, if it turned out that there was no god, would a life lived with that belief not have carried a significant cost?  If the belief also involved a host of dogmatic moral prescriptions, rituals, and other specific ways to live one's life, perhaps including the requirement to abstain from many pleasurable human experiences, this cost could be quite great.  Furthermore, if one applied Pascal's Wager to every theoretical possibility that posits an infinite loss over a finite one, one would be compelled to apply the same principle to all of those possibilities, which is obviously irrational.
It is clear that Pascal hadn’t thought this one through very well, and I wouldn’t doubt that his judgement during this apologetic formulation was highly clouded by his own negativity bias (since the fear of punishment seems to be the primary focus of his “wager”).  As was mentioned earlier, we can see how effective this bias has been on a number of religious converts, and we need to be diligent about watching out for information that is presented to us with threatening strings attached, because it can easily cloud our judgement and lead to the adoption of irrational beliefs and dogma.
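For clarity, Pascal's reasoning can be written as an expected-value comparison (my own formalization, not Pascal's notation; let p > 0 be the probability that God exists and c the finite cost of belief):

```latex
\begin{align*}
E[\text{believe}]    &= p\cdot(+\infty) + (1-p)\cdot(-c) = +\infty,\\
E[\text{disbelieve}] &= p\cdot(-\infty) + (1-p)\cdot 0   = -\infty.
\end{align*}
```

The objections above attack exactly these terms: the many-gods problem means several mutually exclusive hypotheses each claim the infinite payoff (making the comparison undefined), and a non-trivial c means the "finite loss" of belief is not negligible.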

Belief Bias & Argument Proximity Effects

When we analyze arguments, we often judge their strength based on how plausible their conclusion is, rather than on how well the premises actually support that conclusion.  In other words, people tend to focus their attention on the conclusion of an argument, and if they think the conclusion is likely to be true (based on their prior beliefs), this colors their perception of how strong or weak the argument itself appears to be.  This is known as the belief bias, and it is obviously an incorrect way to analyze arguments: in logic, the strength of an argument must be assessed from its premises and structure, not from the prior plausibility of its conclusion.  This implies that what is intuitive to us is often incorrect, and thus our cognitive biases are constantly at odds with logic and rationality.  This is why it takes a lot of practice to learn to apply logic effectively in one's thinking.  It just doesn't come naturally, even though we often think it does.

Another cognitive deficit in how people analyze arguments is what I like to call the argument proximity effect.  Basically, when strong arguments are presented alongside weak arguments supporting the same conclusion, the strong arguments will often appear weaker or less persuasive because of their proximity to, or association with, the weaker ones.  This is partly because if a person thinks they can defeat the moderate or weak arguments, they will often believe they could also defeat the stronger argument, if only they were given enough time.  It is as if the strong arguments become "tainted" by the weaker ones, even though the opposite should be true, since the arguments are independent of one another.  That is, when a person has a number of arguments to support their position, a combination of strong and weak arguments is always better than the strong arguments alone, because there are simply more arguments that need to be addressed and rebutted.  Another reason for this effect is that the presence of weak arguments weakens the credibility of the person presenting them, so the stronger arguments aren't taken as seriously by the recipient.  Just as with the belief bias, this cognitive deficit is ultimately caused by not employing logic properly, if at all.  Making people aware of this cognitive flaw is only half the battle; once again, we also need to learn about logic and how to apply it effectively in our thinking.

To read part 3 of 3, click here.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 1 of 3)


It is often surprising to think about the large number of people that still ascribe to dogmatic beliefs, despite our living in a post-enlightenment age of science, reason, and skeptical inquiry.  We have a plethora of scientific evidence and an enormous amount of acquired data illustrating just how fallacious various dogmatic belief systems are, as well as how dogmatism in general is utterly useless for gaining knowledge or making responsible decisions throughout one's life.  We also have ample means of distributing this valuable information to the public through academic institutions, educational television programming, online scientific resources, books, etc.  Yet we still have an incredibly large number of people preferring various dogmatic claims (held with a feeling of "certainty") over those supported by empirical evidence and reason.  Furthermore, when some of these people are presented with scientific evidence that refutes their dogmatic beliefs, they outright deny that the evidence exists, or they simply rationalize it away.  This is a very troubling problem in our society, as this kind of muddled thinking often prevents people from making responsible, rational decisions.  Irrational thinking in general has caused many people to vote for political candidates for all the wrong reasons.  It has caused many people to raise their children in ways that promote intolerance and prejudice, and that undervalue, if not entirely abhor, invaluable intellectual virtues such as rational skepticism and critical thought.
Some may reasonably assume that many of these irrational belief systems and behaviors are simply a result of low self-esteem, inadequate education, and/or low intelligence, and it turns out that several dozen sociological studies have found evidence supporting this line of reasoning.  That is, a person with lower self-esteem, less education, and/or lower intelligence is more likely to be religious and dogmatic.  However, there is clearly much more to it than that, especially since many people who hold these irrational belief systems are also well educated and intelligent.  Furthermore, every human being is quite often irrational in their thinking.  So what else could be contributing to this irrationality (and dogmatism)?  The answer seems to lie in the realms of cognitive science and psychology.  I've decided to split this post into three parts, because there is a lot of information I'd like to cover.  Here's the link to part 2 of 3, which can also be found at the end of this post.

Cognitive Biases

Human beings are very intelligent relative to most other species on this planet; however, we are still riddled with various cognitive flaws that drastically reduce our ability to think logically or rationally.  This isn't surprising once one recognizes that our brains are the product of evolution, and thus weren't "designed" in any way at all, let alone to operate logically or rationally.  Instead, what we see in human beings is a highly capable and adaptable brain, yet one with a number of cognitive biases and shortcomings.  Though a large number (if not all) of these biases developed as sub-optimal yet fairly useful survival strategies within the context of our evolutionary past, most of them have become a liability in our modern civilization, and often impede our intellectual development as individuals and as a society.  Many of these cognitive biases serve (or once served) as an effective way of making certain decisions quickly; that is, they have effectively served as heuristics for simplifying the decision-making process for many everyday problems.  While heuristics are valuable and often make our decision-making faculties more efficient, they are often far from optimal, because the increased efficiency is typically bought at the cost of accuracy.  Again, this is exactly what we expect to find in products of evolution: a sub-optimal strategy for solving some problem or accomplishing some goal, but one that has worked well enough to give the organism (in this case, human beings) an advantage over the competition within some environmental niche.

So what kinds of cognitive biases do we have exactly, and how many of them have cognitive scientists discovered?  A good list of them can be found here.  I’m going to mention a few of them in this post, specifically those that promote or reinforce dogmatic thinking, those that promote or reinforce an appeal to a supernatural world view, and ultimately those that seem to most hinder overall intellectual development and progress.  To begin, I’d like to briefly discuss Cognitive Dissonance Theory and how it pertains to the automated management of our beliefs.

Cognitive Dissonance Theory

The human mind doesn't tolerate internal conflicts very well, so when we hold beliefs that contradict one another, or when we are exposed to new information that conflicts with an existing belief, some degree of cognitive dissonance results and we are effectively pushed out of our comfort zone.  The mind attempts to rid itself of this dissonance through a few different methods: a person may alter the perceived importance of the original belief or of the new information, they may change the original belief, or they may seek out evidence that is critical of the new information.  Generally, the easiest paths for the mind to take are the first and last methods, as it is far more difficult for a person to change their existing beliefs, partly because a person's existing set of beliefs is their only frame of reference when encountering new information, and partly because there is an innate drive to maintain a feeling of familiarity and certainty about our beliefs.

The primary problem with these automated cognitive dissonance reduction methods is that they tend to cause people to defend their beliefs in one way or another rather than to question them and try to analyze them objectively.  Unfortunately, this means that we often fail to consider new evidence and information using a rational approach, and this in turn can cause people to acquire quite a large number of irrational, false beliefs, despite having a feeling of certainty regarding the truth of those beliefs.  It is also important to note that many of the cognitive biases that I’m about to mention result from these cognitive dissonance reduction methods, and they can become compounded to create severe lapses in judgement.

Confirmation Bias, Semmelweis Reflex, and The Frequency Illusion

One of the most significant cognitive biases we have is what is commonly referred to as confirmation bias.  Basically, this bias refers to our tendency to seek out, interpret, or remember information in ways that confirm our existing beliefs.  To put it more simply, we often only see what we want to see.  It is a method that our brain uses to sift through large amounts of information and piece it together with what we already believe into a meaningful story line or explanation.  Just as we'd expect, it ultimately optimizes for efficiency over accuracy, and therein lies the problem.  Even though this bias applies to everyone, the dogmatist has a larger cognitive barrier to overcome, because they're also using a fallacious epistemological methodology right from the start (i.e. appealing to some authority rather than to reason and evidence).  So while everyone is affected by this bias to some degree, the dogmatic believer has an epistemological flaw that compounds the issue and makes matters worse.  They will, to a much higher degree, live their life unconsciously ignoring any evidence that refutes their beliefs (or fallaciously reinterpreting it in their favor), and they will rarely if ever actively seek out such evidence to try to disprove their beliefs.  A more rational thinker, on the other hand, has a belief system based primarily on a reliable epistemological methodology supported by empirical evidence, one that has been proven to provide increasingly accurate knowledge better than any other method (i.e. the scientific method).  Nevertheless, everyone is affected by this bias, and because it operates outside the conscious mind, we all fail to notice it as it actively modifies our perception of reality.
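One way to see the damage this does is with a toy Bayesian sketch (entirely my own illustration, with made-up likelihoods): an observer who weighs all the evidence ends up near the truth, while one who only registers confirming evidence grows ever more certain of a claim the evidence actually counts against.

```python
# Toy model of confirmation bias as one-sided Bayesian updating.
# H is some belief; each piece of evidence is either "for" or "against" it.

def update(prior, p_e_if_true, p_e_if_false):
    """One step of Bayes' rule for a binary hypothesis."""
    num = prior * p_e_if_true
    return num / (num + (1 - prior) * p_e_if_false)

evidence = ["for", "against", "against", "for", "against", "against"]

fair = biased = 0.5  # both observers start undecided
for e in evidence:
    if e == "for":
        fair = update(fair, 0.8, 0.2)
        biased = update(biased, 0.8, 0.2)
    else:
        fair = update(fair, 0.2, 0.8)
        # the biased observer ignores (or explains away) disconfirming evidence

print(f"fair: {fair:.2f}  biased: {biased:.2f}")
```

With four pieces of evidence against the belief and only two for it, the fair observer ends up below 0.1 while the biased one sits above 0.9, feeling ever more "certain" as the discarded evidence piles up.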

One prominent example in our modern society, illustrating how this cognitive bias can reinforce dogmatic thinking (even among large groups of people), is the ongoing debate between Young Earth Creationists and the scientific consensus regarding evolutionary theory.  Even though the theory of evolution is a scientific fact (much like many other scientific theories, such as the theory of gravity or the germ theory of disease), and even though there are several hundred thousand scientists who can attest to its validity, as well as a plethora of evidence across a large number of scientific fields supporting it, Creationists seem to be in complete and utter denial of this actuality.  Not only do they ignore the undeniable wealth of evidence that is presented to them, but they also misinterpret evidence (or cherry-pick it) to suit their position.  This problem is compounded by the fact that their beliefs are only supported by fallacious tautologies [e.g. "It's true because (my interpretation of) a book called the Bible says so…"], and other irrational arguments based on nothing more than a particular religious dogma and a lot of intellectual dishonesty.

I’ve had numerous debates with these kinds of people (and I used to BE one of them many years ago, so I understand how many of them arrived at these beliefs), and I’ll often quote several scientific sources and various logical arguments to support my claims (or to refute theirs).  It is often mind-boggling to witness their response, with the new information seemingly going in one ear and out the other without undergoing any mental processing or critical consideration.  It is as if they didn’t hear a single argument that was pointed out to them and merely executed some kind of rehearsed reflex.  The futility of the communication is often confirmed when one finds oneself having to repeat the same arguments and refutations over and over again, seemingly falling on deaf ears, with no logical rebuttal of the points made.  On a side note, this reflex-like response, where a person quickly rejects new evidence because it contradicts their established belief system, is known as the “Semmelweis reflex/effect”, but it appears to be just another form of confirmation bias.  Some of these cases of denial and irrationality are nothing short of ironic, since these people are living in a society that owes its technological advancements exclusively to rational, scientific methodologies, and they are relying on many of the benefits of science every day of their lives (even if they don’t realize it).  Yet we still find many of them hypocritically abhorring or ignoring science and reason whenever it is convenient to do so, such as when they try to defend their dogmatic beliefs.

Another prominent example of confirmation bias reinforcing dogmatic thinking relates to various other beliefs in the supernatural.  People who believe in the efficacy of prayer, for example, will tend to remember prayers that were “answered” and forget or rationalize away any prayers that weren’t, since their beliefs reinforce such a selective memory.  Similarly, if a plane crash or other disaster occurs, some people with certain religious beliefs will draw attention to the idea that their god or some supernatural force must have intervened to prevent the death of any survivors, while downplaying or ignoring the many others who did not survive (if there were any survivors at all).  In the case of prayer, numerous studies have shown that prayer for another person (e.g. to heal them from an illness) is ineffective when that person doesn’t know that they’re being prayed for, thus any efficacy of prayer has been shown to be a result of the placebo effect and nothing more.  It may be the case that prayer is one of the most effective placebos, largely due to the incredibly high level of belief in its efficacy, and the fact that there is no expected time frame for it to “take effect”.  That is, since nobody knows when a prayer will be “answered”, then even if the prayer isn’t “answered” for several months or even several years (and time ranges like this are not unheard of among people who claim to have had prayers answered), confirmation bias will be even more effective than with a traditional medical placebo.  After all, a typical placebo pill or treatment is expected to work within a reasonable time frame comparable to other medicines or treatments taken in the past, but there’s no deadline for a prayer to be answered by.
It’s reasonable to assume that even if these people were shown the evidence that prayer is no more effective than a placebo (even if it is the most effective placebo available), they would reject it or rationalize it away, once again, because their beliefs require it to be so.  The same bias applies to numerous other purportedly paranormal phenomena, where people see significance in some event because of how their memory or perception is operating in order to promote or sustain their beliefs.  As mentioned earlier, we often see what we want to see.

There is also a closely related bias known as congruence bias, which occurs because people rely too heavily on directly testing a given hypothesis and often neglect to test it indirectly.  For example, suppose that you were introduced to a new lighting device with two buttons on it, a green button and a red button.  You are told that the device will only light up by pressing the green button.  A direct test of this hypothesis would be to press the green button and see if it lights up.  An indirect test would be to press the red button and see if it fails to light up.  Congruence bias illustrates that we tend to avoid the indirect testing of hypotheses, and thus we can start to form irrational beliefs by mistaking correlation for causation, or by forming incomplete explanations of causation.  In the case of the aforementioned lighting device, it could be that both buttons cause it to light up.  When combined with our confirmation bias, this congruence bias inclines us not only to reaffirm our beliefs, but to avoid trying to disprove them (since we tend to avoid indirect testing of those beliefs).  This attitude and predisposition reinforces dogmatism by assuming the truth of one’s beliefs and not trying to verify them in any rational, critical way.
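The two-button device above can be made concrete with a small sketch (the device, its hidden behavior, and the function names here are all hypothetical, invented purely for illustration).  Suppose that, unknown to the tester, *both* buttons light the device up.  A direct test alone would then wrongly “confirm” the hypothesis that only the green button works, while the indirect test immediately refutes it:

```python
def device_lights_up(button):
    """Hidden behavior of the hypothetical device: both buttons work."""
    return button in ("green", "red")

# Hypothesis: "the device lights up ONLY when the green button is pressed."

# Direct test: press the green button and check that it lights up.
direct_test_passed = device_lights_up("green")

# Indirect test: press the red button and check that it does NOT light up.
indirect_test_passed = not device_lights_up("red")

print(direct_test_passed)    # True  -- the hypothesis appears "confirmed"
print(indirect_test_passed)  # False -- the indirect test refutes it
```

The direct test succeeds even though the hypothesis is false; only the neglected indirect test exposes the incomplete causal explanation, which is precisely the trap congruence bias describes.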

An interesting cognitive bias that often works in conjunction with a person’s confirmation bias is something referred to as the frequency illusion (also known as the Baader-Meinhof phenomenon), whereby some detail of an event or some specific object may enter a person’s thoughts or attention, and then suddenly it seems that they are experiencing the object or event at a higher than normal frequency.  For example, if a person thinks about a certain animal, say a turtle, they may start noticing lots of turtles around them that they would have normally overlooked.  This person may even go to a store and suddenly notice clothing or other gifts with turtle patterns or designs on them that they didn’t seem to notice before.  After all, “turtle” is on their mind, or at least in the back of their mind, so their brain is unconsciously “looking” for turtles, and the person isn’t aware of their own unconscious pattern-recognition sensitivity.  As a result, they may think that this perceived higher frequency of “seeing turtles” is abnormal and that it must be more than simply a coincidence.  If this happens to a person who appeals to the supernatural, their confirmation bias may mistake this frequency illusion for a supernatural event, or something significant.  Since this happens unconsciously, people can’t control this illusion or prevent it from happening.  However, once a person is aware that the frequency illusion exists as a cognitive bias, they can at least reassess their experiences with a larger toolbox of rational explanations, without having to appeal to the supernatural or other irrational belief systems.  So in this particular case, we can see how various cognitive biases can “stack up” with one another, causing serious problems in our reasoning abilities and negatively affecting how accurately we perceive reality.

To read part 2 of 3, click here.