The Open Mind

Cogito Ergo Sum


Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 3 of 3)


This is part 3 of 3 of this post.  Click here to read part 2.

Reactive Devaluation

Studies have shown that if a claim is believed to have come from a friend or ally, the person receiving it will be more likely to think that it has merit and is truthful.  Likewise, if it is believed to have come from an enemy or some member of the opposition, it is significantly devalued.  This bias is known as reactive devaluation.  Psychologists have determined that the likeability of the source of the message is one of the key factors involved in this effect: people will actually value information coming from a likeable source more than from a source they don’t like, to roughly the same degree that they would value information coming from an expert over a non-expert.  This is quite troubling.  We’ve often heard about how political campaigns and certain debates are seen more as popularity contests than anything else, and this bias is likely a primary reason why this is often the case.

Unfortunately, a large part of society has painted a nasty picture of atheism, skepticism, and the like.  As it turns out, people who do not believe in a god (mainly the Christian god) are the least trusted minority in America, according to a sociological study performed a few years ago (Johanna Olexy and Lee Herring, “Atheists Are Distrusted: Atheists Identified as America’s Most Distrusted Minority, According to Sociological Study,” American Sociological Association News, May 3, 2006).  Regarding the fight against dogmatism, the rational thinker needs to carefully consider how they can improve their likeability in the eyes of the recipients of their message, if nothing else by going above and beyond to remain kind and respectful and to maintain a positive, uplifting attitude while presenting their arguments.  In any case, the person arguing will always be up against whatever societal expectations and preferences may reduce their likeability, so there’s also a need to remind others as well as ourselves about what matters most when discussing an issue: the message rather than the messenger.

The Backfire Effect & Psychological Reactance

There have already been several cognitive biases mentioned that interfere with people’s ability to accept new evidence that contradicts their beliefs.  To make matters worse, psychologists have discovered that people often react to disconfirming evidence by actually strengthening their beliefs.  This is referred to as the backfire effect.  It can render refutations futile much of the time, but there are some strategies that help to combat it.  The bias is largely exacerbated by a person’s emotional involvement and by the fact that people don’t want to appear unintelligent or incorrect in front of others.  Letting tempers cool down before resuming a debate can be quite helpful with regard to the emotional element.  Additionally, if a person can show their opponent how accepting the disconfirming evidence will benefit them, the opponent will be more likely to accept it.  There are times when some of the disconfirming evidence is at least partially absorbed, and the person just needs some more time in a less tense environment to think things over a bit.  If the situation gets too heated, or if a person risks losing face with peers around them, the ability to persuade that person only decreases.

Another similar cognitive bias is referred to as reactance, which is basically a motivational reaction to a perceived infringement of one’s behavioral freedoms.  That is, if a person feels that someone is taking away their choices or limiting the number of them, such as when they are being heavily pressured into accepting a different point of view, they will often respond defensively.  They may feel that freedoms are being taken away from them, and it is only natural for a person to defend whatever freedoms they see themselves having or potentially losing.  Whenever one uses reverse psychology, one is in fact playing on at least an implicit knowledge of this reactance effect.  Does this mean that rational thinkers should use reverse psychology to persuade dogmatists to accept reason and evidence?  I personally don’t think that this is the right approach, because it could also backfire, and I would prefer to be honest and straightforward even if this results in less efficacy.  At the very least, however, one needs to be aware of this bias and be careful with how one phrases arguments and presents evidence, and one should maintain a calm and respectful attitude during these discussions, so that the other person doesn’t feel the need to defend themselves at the cost of ignoring the evidence.

Pareidolia & Other Perceptual Illusions

Have you ever seen faces of animals or people while looking at clouds, shadows, or other similar things?  The brain has many pattern recognition modules and will often respond erroneously to vague or even random stimuli, such that we perceive something familiar even when it isn’t actually there.  This is called pareidolia, and it is a type of apophenia (i.e. seeing patterns in random data).  Not surprisingly, many people have reported instances of seeing various religious imagery in ordinary phenomena, most notably the faces of prominent religious figures.  People have also erroneously heard “hidden messages” in records played in reverse, among other illusory perceptions.  This happens because the brain often fills in gaps: if one expects to see a pattern (even if the expectation is unconsciously driven by some emotional significance), the brain’s pattern recognition modules can “over-detect”, producing false positives through excessive sensitivity and by conflating unconscious imagery with one’s actual sensory experiences, leading to a lot of distorted perceptions.  It goes without saying that this cognitive bias is perfect for reinforcing superstitious or supernatural beliefs, because what people tend to see or experience from this effect are the things that are most emotionally significant to them.  Afterwards, people believe that what they saw or heard must have been real and therefore significant.

Most people are aware that hallucinations occur in some people from time to time, under certain neurological conditions (generally caused by an imbalance of neurotransmitters).  A bias like pareidolia can effectively produce similar experiences, but without requiring the more specific neurological conditions that a textbook hallucination requires.  That is, the brain doesn’t need to be in any drug-induced state, nor does it need to be physically abnormal in any way, for this to occur, which makes it much more common than hallucinations.  Since pareidolia is compounded by one’s confirmation bias, and since the brain is constantly implementing cognitive dissonance reduction mechanisms in the background (as mentioned earlier), this can also result in groups of people having the same illusory experience, specifically if the group shares the same unconscious emotional motivations for “seeing” or experiencing something in particular.  These circumstances, along with the power of suggestion from even just one member of the group, can lead to what are known as collective hallucinations.  Since collective hallucinations like these inevitably lead to a feeling of corroboration and confirmation between the various people experiencing them (say, seeing a “spirit” or “ghost” of someone significant), they can reinforce one another’s beliefs, despite the fact that the experience was based on a cognitive illusion largely induced by powerful human emotions.  It’s likely no coincidence that this type of group phenomenon seems to occur only among members of a religion or cult, specifically those in a highly emotional and synchronized psychological state.  There have been many cases of large groups of people from various religions around the world claiming to see some prominent religious figure or other supernatural being, illustrating just how universal this phenomenon is within a highly emotional group context.

Hyperactive Agency Detection

There is a cognitive bias, somewhat related to the aforementioned perceptual illusions, known as hyperactive agency detection.  This bias refers to our tendency to erroneously assign agency to events that happen without an agent causing them.  Human beings all have a theory of mind that is based in part on detecting agency, whereby we assume that other people are causal agents in the sense that they have intentions and goals just like we do.  From an evolutionary perspective, we can see that our ability to recognize that others are intentional agents allows us to better predict their behavior, which is extremely beneficial to our survival (notably in cases where we suspect others intend to harm us).

Unfortunately, our brain’s ability to detect agency is hyperactive in that it errs on the side of caution by over-detecting possible agency, as opposed to under-detecting it (which could negatively affect our survival prospects).  As a result, people will often ascribe agency to things and events in nature that have no underlying intentional agent.  For example, people often anthropomorphize machines (implicitly assigning agency to them) and get angry when they don’t work properly (such as an automobile that breaks down while driving to work), often with the feeling that the machine is “out to get us”, is “evil”, etc.  Most of the time, feelings like this are immediately checked and balanced by our rational minds, which remind us that “it’s only a car”, or “it’s not conscious”, etc.  Similarly, when natural disasters or other such events occur, our hyperactive agency detection snaps into gear, trying to find “someone to blame”.  This cognitive bias helps explain, at least as a contributing factor, why so many ancient cultures assumed that there were gods that caused earthquakes, lightning, thunder, volcanoes, hurricanes, etc.  Quite simply, they needed to ascribe the events as having been caused by someone.

Likewise, if we are walking in a forest and hear strange sounds in the bushes, we will often assume that some intentional agent is present (whether another person or some other animal), even though it may simply be the wind causing the noise.  Once again, we can see the evolutionary benefit of this agency detection even with the occasional “false positive”: the noise in the bushes could very well be a predator or some other source of danger, so it is usually better for our brains to assume that the noise is coming from an intentional agent.  It’s also entirely possible that even in cases where the source of a sound was determined to be the wind, early cultures may have ascribed an intentional stance to the wind itself (e.g. a “wind” god, etc.).  In all of these cases, we can see how our hyperactive agency detection can lead to irrational, theistic, and other supernatural belief systems, and we need to be aware of such a bias when evaluating our intuitions about how the world works.
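
To see why evolution might favor such an over-sensitive detector, here is a toy expected-cost calculation in Python.  All of the numbers, and the error-rate function itself, are invented for illustration; this is a rough sketch in the spirit of the cost-asymmetry reasoning above, not a model from the literature:

    def expected_cost(threshold, p_agent=0.05, cost_miss=100.0, cost_false_alarm=1.0):
        # A detector that "fires" (detects an agent) more readily as its
        # threshold drops.  For simplicity, assume the miss rate grows as
        # threshold**2 and the false-alarm rate as (1 - threshold)**2.
        p_miss = threshold ** 2
        p_false_alarm = (1.0 - threshold) ** 2
        return (p_agent * p_miss * cost_miss
                + (1.0 - p_agent) * p_false_alarm * cost_false_alarm)

    # Sweep thresholds from 0 (always fire) to 1 (never fire) and pick the
    # cheapest.  Because a miss (ignoring a real predator) is assumed to cost
    # 100 times more than a false alarm, the optimum sits far below the
    # "neutral" threshold of 0.5 -- i.e. the best detector is hyperactive.
    costs = [(expected_cost(t / 100.0), t / 100.0) for t in range(101)]
    best_cost, best_threshold = min(costs)
    print(best_threshold, best_cost)  # ~0.16: fires on most ambiguous noises

With these made-up costs, the least costly strategy is to “detect” an agent on the flimsiest evidence, which is exactly the hyperactivity described above.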

Risk Compensation

It is only natural that the more a person feels protected from various harms, the less carefully they tend to behave, and vice versa.  This tendency, sometimes referred to as risk compensation, is often harmless because, generally speaking, one who is well protected from harm really does have less to worry about than one who is not, and thus can take greater relative risks as a result of that larger cushion of safety.  Where this tends to cause a problem, however, is in cases where the perceived level of protection is far higher than the actual level.  The worst case scenario that comes to mind is a person who appeals to the supernatural and thinks that no harm can come to them because of some supernatural source of protection, perhaps even one that they believe provides the greatest protection possible.  In this scenario, their risk compensation is biased to a negative extreme, and they will be more likely to behave irresponsibly because they don’t have realistic expectations of the consequences of their actions.  In a post-enlightenment society, we’ve acquired a higher responsibility to heed the knowledge gained from scientific discoveries so that we can behave more rationally as we learn more about the consequences of our actions.

It’s not at all difficult to see how the effects of various religious dogmas on one’s level of risk compensation can inhibit a society from making rational, responsible decisions.  We have several serious issues plaguing modern society, including those related to environmental sustainability, climate change, pollution, war, etc.  If believers in the supernatural have an attitude that everything is “in God’s hands” or that “everything will be okay” because of their belief in a protective god, they are far less likely to take these kinds of issues seriously, because they fail to acknowledge the obvious harm to everyone in society.  What’s worse is that dogmatists are reducing not only their own safety, but also the safety of their children and of everyone else in society who is trying to behave rationally with realistic expectations.  If we had two planets, one for rational thinkers and one for dogmatists, this would no longer be everyone’s problem.  The reality is that we share one planet, and thus we all suffer the consequences of any one person’s irresponsible actions.  If we want to make more rational decisions, especially those related to the safety of our planet’s environment, society, and posterity, people need to adjust their risk compensation so that it is based on empirical evidence and a rational, proven scientific methodology.  This clearly illustrates the need to phase out supernatural beliefs, since false beliefs that don’t correspond with reality can be quite dangerous to everybody, regardless of who holds them.  In any case, when one is faced with the choice between knowledge and ignorance, history has demonstrated which choice is best.

Final Thoughts

The fact that we are living in an information age with increasing global connectivity, and in a society that is becoming increasingly reliant on technology, will likely help to combat the ill effects of these cognitive biases.  These factors are making it increasingly difficult to hide oneself from the evidence and from the proven efficacy of using the scientific method to form accurate beliefs about the world around us.  Societal expectations are also changing as a result, and it is becoming less and less socially acceptable to be irrational and dogmatic, or to not take scientific methodology seriously.  In all of these cases, having knowledge about these cognitive biases will help people to combat them.  Even if we can’t eliminate the biases, we can safeguard ourselves by preparing for any potential ill effects that they may produce.  This is where reason and science come into play, providing us with the means to counter our cognitive biases by applying a rational skepticism to all of our beliefs, and by using a logical and reliable methodology based on empirical evidence to arrive at beliefs that we can be justifiably confident in (though never certain of).  As Darwin once said, “Ignorance more frequently begets confidence than does knowledge; it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.”  Science has steadily ratcheted us toward a better understanding of the universe we live in, despite the fact that the results we obtain from it are often at odds with our intuitions about how the world is.  Only relatively recently has science started to provide us with information about why our intuitions are so often at odds with the scientific evidence.

We’ve made huge leaps within neuroscience, cognitive science, and psychology, and these discoveries are giving us the antidote that we need in order to reduce or eliminate irrational and dogmatic thinking.  Furthermore, those who study logic have an additional advantage in that they are more likely to spot fallacious arguments used to support this or that belief, and thus they are less likely to be tricked or persuaded by arguments that appear to be sound and strong but actually aren’t.  People fall prey to fallacious arguments all the time, mostly because they haven’t learned how to spot them (something courses in logic and reasoning can help to mitigate).  Thus, overall I believe it is imperative that we integrate knowledge concerning our cognitive biases into our children’s educational curriculum, starting at a young age (in a format that is understandable and engaging, of course).  My hope is that if we start to integrate cognitive education (and complementary courses in logic and reasoning) as a part of our foundational education curriculum, we will see a cascade of positive benefits that will guide our society more safely into the future.


Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 2 of 3)


This is part 2 of 3 of this post.  Click here to read part 1.

The Feeling of Certainty & The Overconfidence Effect

There are a lot of factors that affect how certain we feel that one belief or another is in fact true.  One factor that affects this feeling of certainty is what I like to call the time equals truth fallacy.  With this cognitive bias, we tend to feel more certain about our beliefs as time progresses, despite not gaining any new evidence to support those beliefs.  Since time spent holding a particular belief is the only factor involved here, the effect is greater on those who are relatively isolated or sheltered from new information.  So if a person has been indoctrinated with a particular set of beliefs (let alone irrational beliefs), and they are effectively “cut off” from any sources of information that could serve to refute those beliefs, it becomes increasingly difficult to persuade them later on.  This situation sounds all too familiar, as dogmatic groups often isolate themselves and shelter their members from outside influences for this very reason.  If the group can isolate its members for long enough (or the members actively isolate themselves), the dogma effectively gets burned into them, and the brainwashing is far more successful.  That this cognitive bias has been exploited by various dogmatic and religious leaders throughout history is hardly surprising, considering how effective it is.

Though I haven’t researched or come across any proposed evolutionary reasons behind the development of this cognitive bias, I do have at least one hypothesis pertaining to its origin.  The bias may have resulted from the fact that the beliefs that matter most (from an evolutionary perspective) are those related to our survival goals (e.g. finding food sources or evading a predator).  So naturally, any long-held belief that never caused an organism noticeable harm was treated as more likely to be correct (i.e. keep using whatever works, and don’t fix what ain’t broke).  However, once we started adding many more beliefs to our repertoire, specifically those that didn’t directly affect our survival (including many beliefs in the supernatural), the same cognitive rules and heuristics were still being applied as before, even though these new types of beliefs (when false) haven’t been naturally selected against, because they haven’t been detrimental enough to our survival (at least not yet, or not enough to force a significant evolutionary change).  So once again, evolution may have produced this bias for very advantageous reasons, but it is a sub-optimal heuristic (as always), and one that has become a significant liability since we began evolving culturally as well.

Another related cognitive bias, and one that is quite well established, is the overconfidence effect, whereby a person’s subjective feeling of confidence in his or her judgements or beliefs is predictably higher than the actual objective accuracy of those judgements and beliefs.  In a nutshell, people tend to have far more confidence in their beliefs and decisions than is warranted.  A common example cited to illustrate the intensity of this bias pertains to people taking certain quizzes, claiming to be “99% certain” about their answers to certain questions, and then finding out afterwards that they were wrong about half of the time.  In other cases, this overconfidence effect can be even worse.
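
One simple way to make this bias concrete is to measure calibration: group a person’s answers by their stated confidence and compare that confidence with how often they were actually right.  Here is a minimal Python sketch (the data are invented for illustration, loosely mirroring the “99% certain but wrong half the time” pattern above):

    from collections import defaultdict

    # (stated confidence, was the answer actually correct?) -- invented data
    answers = [
        (0.99, True), (0.99, False), (0.99, True), (0.99, False),
        (0.99, False), (0.99, True), (0.70, True), (0.70, False),
        (0.70, True),
    ]

    buckets = defaultdict(list)
    for confidence, correct in answers:
        buckets[confidence].append(correct)

    for confidence in sorted(buckets, reverse=True):
        outcomes = buckets[confidence]
        accuracy = sum(outcomes) / len(outcomes)
        print(f"stated {confidence:.0%} confident -> actually {accuracy:.0%} correct")

    # A well-calibrated subject's accuracy tracks their stated confidence;
    # an overconfident one (like the 99% bucket here, at 50%) does not.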

In a sense, the feeling of certainty is like an emotion, which, like our other emotions, occurs as a natural reflex, regardless of the cause, and independently of reason.  Just like other emotions such as anger, pleasure, or fear, the feeling of certainty can be produced by a seizure, by certain drugs, and even by electrical stimulation of certain regions of the brain.  In all of these cases, even when no particular beliefs are being thought about, the brain can produce a feeling of certainty nevertheless.  Research has shown that the feeling of knowing or certainty can also be induced through various brainwashing and trance-inducing techniques, such as high repetition of words, rhythmic music, sleep deprivation or fasting, and other types of social and emotional manipulation.  It is hardly a coincidence that many religions employ a number of these techniques within their repertoire.

The most important point to take away from learning about these confidence and certainty biases is that the feeling of certainty is not a reliable way of determining the accuracy of one’s beliefs.  Furthermore, one must realize that no matter how certain we may feel about a particular belief, we could be wrong, and often are.  Dogmatic belief systems such as those found in many religions are often propagated by a misleading and mistaken feeling of certainty, even if that feeling is more potent than any experienced before.  Oftentimes, when people ascribing to these dogmatic belief systems are questioned about the lack of empirical evidence supporting their beliefs, even after all of their arguments have been refuted, they simply reply by saying “I just know.”  Irrational and entirely unjustified responses like these illustrate the dire need for people to become aware of just how fallible their feeling of certainty can be.

Escalation of Commitment

If a person has invested their whole life in some belief system, then even if they encounter undeniable evidence that their beliefs are wrong, they are more likely to ignore it or rationalize it away than to modify their beliefs, and thus they will likely continue investing more time and energy in those false beliefs.  This is due to an effect known as escalation of commitment.  Basically, the higher the cumulative investment in a particular course of action, the more likely someone will feel justified in continuing to increase that investment, despite new evidence showing them that they’d be better off abandoning it and cutting their losses.  When it comes to trying to “convert” a dogmatic believer into a more rational free thinker, this irrational tendency severely impedes any chance of success, all the more so when that person has been investing themselves in the dogma for a long period of time, since they ultimately have a lot more to lose.  To put it another way, a person’s religion is often a huge part of their personal identity, so regardless of any undeniable evidence presented that refutes their beliefs, accepting that evidence means abandoning a large part of themselves, which obviously makes that acceptance and intellectual honesty quite difficult to implement.  Furthermore, if a person’s family or friends have all invested in the same false beliefs, then even if that person discovers that those beliefs are wrong, they risk being rejected by their entire family and/or friends and forever losing the relationships dearest to them.  We can also see how this escalation of commitment is further reinforced by the time equals truth fallacy mentioned earlier.

Negativity Bias

When I’ve heard various Christians proselytizing to me or others, one tactic that I’ve seen used over and over again is fear-mongering with theologically based threats of eternal punishment and torture.  If we don’t convert, we’re told, we’re doomed to burn in hell for eternity.  Their incessant use of this tactic suggests that it was likely effective on them as well, contributing to their own conversion (it was in fact one of the reasons for my own former conversion to Christianity).  Similar tactics have been used in some political campaigns in order to persuade voters by deliberately scaring them into taking one position over another.  Though this strategy is more effective on some than others, there is an underlying cognitive bias in all of us that contributes to its efficacy.  This is known as the negativity bias.  With this bias, information or experiences of a more negative nature tend to have a greater effect on our psychological states and resulting behavior than positive information or experiences that are equally intense.

People will remember threats to their well-being a lot more than they remember pleasurable experiences.  This looks like another example of a simple survival strategy implemented in our brains.  Similar to my earlier hypothesis regarding the time equals truth heuristic, it is far more important to remember and avoid dangerous or life-threatening experiences than it is to remember and seek out pleasurable experiences when all else is equal.  It only takes one bad experience to end a person’s life, whereas it is less critical to experience some minimum number of pleasurable experiences.  Therefore, it makes sense as an evolutionary strategy to allocate more cognitive resources and memory for avoiding the dangerous and negative experiences, and a negativity bias helps us to accomplish that.

Unfortunately, just as with the time equals truth bias, since our cultural evolution has involved us adopting certain beliefs that no longer pertain directly to our survival, the heuristic is often being executed improperly or in the wrong context.  This increases the chances that we will fall prey to adopting irrational, dogmatic belief systems when they are presented to us in a way that utilizes fear-mongering and various forms of threats to our well-being.  When it comes to conceiving of an overtly negative threat, can anyone imagine one more significant than the threat of eternal torture?  It is, by definition, supposed to be the worst scenario imaginable, and thus it is the most effective kind of threat to play on our negativity bias, and lead to irrational beliefs.  If the dogmatic believer is also convinced that their god can hear their thoughts, they’re also far less likely to think about their dogmatic beliefs critically, for they have no mental privacy to do so.

This bias in particular reminds me of Blaise Pascal’s famous Wager, which basically asserts that it is better to believe in God than to risk the consequences of not doing so.  If God doesn’t exist, we suffer “only” a finite loss (according to Pascal).  If God does exist, then one’s belief leads to an eternal reward, and one’s disbelief leads to an eternal punishment.  Therefore, Pascal asserts, it is only logical that one should believe in God, since it is the safest position to adopt.  Unfortunately, Pascal’s premises are questionable, and so the conclusion is unjustified.  For one, is belief in God sufficient to avoid the supposed eternal punishment, or does it require more than that, such as other religious tenets, declarations, or rituals?  Second, which god should one believe in?  There have been thousands of gods proposed by various believers over several millennia, and there were obviously many more that we have no historical record of, so Pascal’s Wager merely narrows the choice down to several thousand known gods.  Third, even if we didn’t have to worry about choosing the correct god, if it turned out that there was no god, would a life with that belief not have carried a significant cost?  If the belief also involved a host of dogmatic moral prescriptions, rituals, and other specific ways to live one’s life, perhaps including the requirement to abstain from many pleasurable human experiences, this cost could be quite great.  Furthermore, Pascal’s reasoning applies equally to any hypothetical that pits a possible infinite loss against a finite one, so one would be obliged to hedge against every such possibility, which is obviously irrational.  It is clear that Pascal hadn’t thought this one through very well, and I wouldn’t doubt that his judgement during this apologetic formulation was highly clouded by his own negativity bias (since the fear of punishment seems to be the primary focus of his “wager”).  As was mentioned earlier, we can see how effective this bias has been on a number of religious converts, and we need to be diligent about watching out for information that is presented to us with threatening strings attached, because it can easily cloud our judgement and lead to the adoption of irrational beliefs and dogma.
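
For what it’s worth, the wager can be rendered as a simple expected-value calculation, which also makes the “many gods” objection vivid.  The following Python sketch uses hypothetical payoffs of my own choosing, with float('inf') standing in for the promised infinite reward or punishment:

    def expected_value(p_god, payoff_if_right, payoff_if_wrong):
        # Standard expected value over the two possibilities.
        return p_god * payoff_if_right + (1.0 - p_god) * payoff_if_wrong

    # Pascal's framing: any nonzero probability of God times an infinite
    # reward swamps any finite cost of believing, so "believe" always wins.
    print(expected_value(0.001, float("inf"), -50.0))   # inf

    # The "many gods" objection: if believing in the wrong god also risks
    # an infinite punishment, each option mixes +inf and -inf payoffs, and
    # the comparison collapses (inf - inf is undefined; Python returns nan).
    print(float("inf") - float("inf"))                  # nan

The point is only illustrative: once infinities appear on more than one side of the ledger, the wager no longer singles out a safest bet.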

Belief Bias & Argument Proximity Effects

When we analyze arguments, we often judge the strength of those arguments based on how plausible their conclusion is, rather than on how well the arguments support that conclusion.  In other words, people tend to focus their attention on the conclusion of an argument, and if they think the conclusion is likely to be true (based on their prior beliefs), this affects their perception of how strong or weak the arguments themselves appear to be.  This is obviously an incorrect way to analyze arguments: in logic, only the premises and the form of the argument determine whether the conclusion follows, not the other way around.  This implies that what is intuitive to us is often incorrect, and thus our cognitive biases are constantly at odds with logic and rationality.  This is why it takes a lot of practice to learn how to apply logic effectively in one’s thinking.  It just doesn’t come naturally, even though we often think it does.

Another cognitive deficit regarding how people analyze arguments irrationally is what I like to call the argument proximity effect.  Basically, when strong arguments are presented along with weak arguments that support a particular conclusion, the strong arguments will often appear to be weaker or less persuasive because of their proximity or association with the weaker ones.  This is partly due to the fact that if a person thinks that they can defeat the moderate or weak arguments, they will often believe that they can also defeat the stronger argument, if only they were given enough time to do so.  It is as if the strong arguments become “tainted” by the weaker ones, even though the opposite is true since the arguments are independent of one another.  That is, when a person has a number of arguments to support their position, a combination of strong and weak arguments is always better than only having the strong arguments, because there are simply more arguments that need to be addressed and rebutted in the former than in the latter.  Another reason for this effect is that the presence of weak arguments also weakens the credibility of the person presenting them, and so then the stronger arguments aren’t taken as seriously by the recipient.  Just as with our belief bias, this cognitive deficit is ultimately caused by not employing logic properly, if at all.  Making people aware of this cognitive flaw is only half the battle, as once again we also need to learn about logic and how to apply it effectively in our thinking.

To read part 3 of 3, click here.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 1 of 3)


It is often surprising to think about the large number of people that still ascribe to dogmatic beliefs, despite our living in a post-enlightenment age of science, reason, and skeptical inquiry.  We have a plethora of scientific evidence and an enormous amount of acquired data illustrating just how fallacious various dogmatic belief systems are, as well as how dogmatism in general is utterly useless for gaining knowledge or making responsible decisions throughout one’s life.  We also have ample means of distributing this valuable information to the public through academic institutions, educational television programming, online scientific resources, books, etc.  Yet we still have an incredibly large number of people preferring various dogmatic claims (held with a feeling of “certainty”) over those supported by empirical evidence and reason.  Furthermore, when some of these people are presented with the scientific evidence that refutes their dogmatic beliefs, they outright deny that the evidence exists or simply rationalize it away.  This is a very troubling problem in our society, as this kind of muddled thinking often prevents people from making responsible, rational decisions.  Irrational thinking in general has caused many people to vote for political candidates for all the wrong reasons.  It has caused many people to raise their own children in ways that promote intolerance and prejudice, and that undervalue, if not entirely abhor, invaluable intellectual virtues such as rational skepticism and critical thought.
Some may reasonably assume that (at least) many of these irrational belief systems and behaviors are simply a result of low self-esteem, inadequate education, and/or low intelligence, and it turns out that several dozen sociological studies have found evidence that supports this line of reasoning.  That is, a person with lower self-esteem, lower education, and/or lower intelligence is more likely to be religious and dogmatic.  However, there is clearly much more to it than that, especially since many people who hold these irrational belief systems are also well educated and intelligent.  Furthermore, every human being is quite often irrational in their thinking.  So what else could be contributing to this irrationality (and dogmatism)?  Well, the answer seems to lie in the realms of cognitive science and psychology.  I’ve decided to split this post into three parts, because there is a lot of information I’d like to cover.  Here’s the link to part 2 of 3, which can also be found at the end of this post.

Cognitive Biases

Human beings are very intelligent relative to most other species on this planet; however, we are still riddled with various flaws in our brains which drastically reduce our ability to think logically or rationally.  This isn’t surprising once one recognizes that our brains are the product of evolution, and thus they weren’t “designed” in any way at all, let alone to operate logically or rationally.  Instead, what we see in human beings is a highly capable and adaptable brain, yet one with a number of cognitive biases and shortcomings.  Though a large number (if not all) of these biases developed as a sub-optimal yet fairly useful survival strategy, specifically within the context of our evolutionary past, most of them have become a liability in our modern civilization, and they often impede our intellectual development as individuals and as a society.  A lot of these cognitive biases serve (or once served) as an effective way of making certain decisions rather quickly; that is, the biases have effectively served as heuristics for simplifying the decision making process for many everyday problems.  While heuristics are valuable and often make our decision making faculties more efficient, they are often far from optimal, because the increased efficiency is usually bought with a reduction in accuracy.  Again, this is exactly what we expect to find in the products of evolution: a suboptimal strategy for solving some problem or accomplishing some goal, but one that has worked well enough to give the organism (in this case, human beings) an advantage over the competition within some environmental niche.

So what kinds of cognitive biases do we have exactly, and how many of them have cognitive scientists discovered?  A good list of them can be found here.  I’m going to mention a few of them in this post, specifically those that promote or reinforce dogmatic thinking, those that promote or reinforce an appeal to a supernatural world view, and ultimately those that seem to most hinder overall intellectual development and progress.  To begin, I’d like to briefly discuss Cognitive Dissonance Theory and how it pertains to the automated management of our beliefs.

Cognitive Dissonance Theory

The human mind doesn’t tolerate internal conflicts very well, so when we hold beliefs that contradict one another, or when we are exposed to new information that conflicts with an existing belief, some degree of cognitive dissonance results and we are effectively pushed out of our comfort zone.  The mind attempts to rid itself of this cognitive dissonance through a few different methods: a person may alter the importance of the original belief or of the new information, they may change the original belief, or they may seek evidence that is critical of the new information.  Generally, the easiest paths for the mind to take are the first and last methods mentioned here, as it is far more difficult for a person to change their existing beliefs, partly because a person’s existing set of beliefs is their only frame of reference when encountering new information, and there is an innate drive to maintain a feeling of familiarity and certainty about our beliefs.

The primary problem with these automated cognitive dissonance reduction methods is that they tend to cause people to defend their beliefs in one way or another rather than to question them and try to analyze them objectively.  Unfortunately, this means that we often fail to consider new evidence and information using a rational approach, and this in turn can cause people to acquire quite a large number of irrational, false beliefs, despite having a feeling of certainty regarding the truth of those beliefs.  It is also important to note that many of the cognitive biases that I’m about to mention result from these cognitive dissonance reduction methods, and they can become compounded to create severe lapses in judgement.

Confirmation Bias, Semmelweis Reflex, and The Frequency Illusion

One of the most significant cognitive biases we have is what is commonly referred to as confirmation bias.  Basically, this bias refers to our tendency to seek out, interpret, or remember information in ways that confirm our existing beliefs.  To put it more simply, we often see only what we want to see.  It is a method that our brain uses in order to sift through large amounts of information and piece it together with what we already believe into a meaningful story line or explanation.  Just as we’d expect, it ultimately optimizes for efficiency over accuracy, and therein lies the problem.  Even though this bias applies to everyone, the dogmatist in particular has a larger cognitive barrier to overcome, because they’re also using a fallacious epistemological methodology right from the start (i.e. appealing to some authority rather than to reason and evidence).  So while everyone is affected by this bias to some degree, the dogmatic believer has an epistemological flaw that compounds the issue and makes matters worse.  They will, to a much higher degree, live their life unconsciously ignoring any evidence they encounter that refutes their beliefs (or fallaciously reinterpreting it in their favor), and they will rarely if ever attempt to actively seek out such evidence to try to disprove their beliefs.  A more rational thinker, on the other hand, has a belief system primarily based on a reliable epistemological methodology supported by empirical evidence, one that has been proven to provide increasingly accurate knowledge better than any other method (i.e. the scientific method).  Nevertheless, everyone is affected by this bias, and because it operates outside the conscious mind, we all fail to notice it as it actively modifies our perception of reality.

One prominent example in our modern society, illustrating how this cognitive bias can reinforce dogmatic thinking (even among large groups of people), is the ongoing debate between Young Earth Creationists and the scientific consensus regarding evolutionary theory.  Even though evolution is a scientific fact (much like many other scientific theories, such as the theory of gravity or the germ theory of disease), and even though there are several hundred thousand scientists who can attest to its validity, as well as a plethora of evidence within a large number of scientific fields supporting it, Creationists seem to be in complete and utter denial of this reality.  Not only do they ignore the undeniable wealth of evidence that is presented to them, but they also misinterpret or cherry-pick evidence to suit their position.  This problem is compounded by the fact that their beliefs are supported only by fallacious tautologies [e.g. “It’s true because (my interpretation of) a book called the Bible says so…”] and other irrational arguments based on nothing more than a particular religious dogma and a lot of intellectual dishonesty.

I’ve had numerous debates with these kinds of people (and I used to BE one of them many years ago, so I understand how many of them arrived at these beliefs), and I’ll often quote several scientific sources and various logical arguments to support my claims (or to refute theirs), and it is often mind-boggling to witness their response, with the new information seemingly going in one ear and out the other without undergoing any mental processing or critical consideration.  It is as if they didn’t hear a single argument that was pointed out to them and merely executed some kind of rehearsed reflex.  The futility of the communication is often confirmed when one finds oneself having to repeat the same arguments and refutations over and over again, seemingly falling on deaf ears, with no logical rebuttal of the points made.  On a side note, this reflex-like response, where a person quickly rejects new evidence because it contradicts their established belief system, is known as the Semmelweis reflex (or effect), but it appears to be just another form of confirmation bias.  Some of these cases of denial and irrationality are nothing short of ironic, since these people are living in a society that owes its technological advancements exclusively to rational, scientific methodologies, and they are enjoying and utilizing many of the benefits of science every day of their lives (even if they don’t realize it).  Yet we still find many of them hypocritically abhorring or ignoring science and reason whenever it is convenient to do so, such as when they try to defend their dogmatic beliefs.

Another prominent example of confirmation bias reinforcing dogmatic thinking relates to various other beliefs in the supernatural.  People who believe in the efficacy of prayer, for example, will tend to remember prayers that were “answered” and forget or rationalize away any prayers that weren’t, since their beliefs reinforce such a selective memory.  Similarly, if a plane crash or other disaster occurs, some people with certain religious beliefs will draw attention to the idea that their god or some supernatural force must have intervened to prevent the death of any survivors (if there are any), while downplaying or ignoring the many others who did not survive.  In the case of prayer, numerous studies have shown that prayer for another person (e.g. to heal them from an illness) is ineffective when that person doesn’t know that they’re being prayed for; thus any efficacy of prayer has been shown to be a result of the placebo effect and nothing more.  It may be that prayer is one of the most effective placebos, largely due to the incredibly high level of belief in its efficacy and the fact that there is no expected time frame for it to “take effect”.  That is, since nobody knows when a prayer will be “answered”, then even if the prayer isn’t “answered” for several months or even years (time ranges that are not unheard of among people who claim to have had prayers answered), confirmation bias will be even more effective than with a traditional medical placebo.  After all, a typical placebo pill or treatment is expected to work within a reasonable time frame comparable to other medicines or treatments taken in the past, but there’s no deadline for a prayer to be answered by.  It’s reasonable to assume that even if these people were shown the evidence that prayer is no more effective than a placebo (even if it is the most effective placebo available), they would reject it or rationalize it away, once again, because their beliefs require it to be so.  The same bias applies to numerous other purportedly paranormal phenomena, where people see significance in some event because of how their memory or perception is operating in order to promote or sustain their beliefs.  As mentioned earlier, we often see what we want to see.
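
To see how far selective memory alone can skew the perceived efficacy of prayer, consider a minimal Python simulation.  All of the numbers here are invented for illustration: outcomes are pure chance (30% “answered”), every hit is remembered, and only 20% of the misses are:

    import random

    random.seed(42)
    P_ANSWERED = 0.30        # true chance a prayer "works" (pure chance here)
    P_REMEMBER_MISS = 0.20   # misses are mostly forgotten or rationalized away

    remembered_hits = remembered_misses = 0
    for _ in range(10_000):
        answered = random.random() < P_ANSWERED
        if answered:
            remembered_hits += 1                # every "answered" prayer sticks
        elif random.random() < P_REMEMBER_MISS:
            remembered_misses += 1

    perceived = remembered_hits / (remembered_hits + remembered_misses)
    print(f"actual: {P_ANSWERED:.0%}, perceived: {perceived:.0%}")
    # Perceived success lands near 68% even though the true rate is 30%.

No supernatural mechanism is needed to produce a strong impression of efficacy; a biased memory of the outcomes does it on its own.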

There is also a closely related bias known as congruence bias, which occurs because people rely too heavily on directly testing a given hypothesis while neglecting to test it indirectly.  For example, suppose that you were introduced to a new lighting device with two buttons on it, a green button and a red button.  You are told that the device will only light up when the green button is pressed.  A direct test of this hypothesis would be to press the green button and see if it lights up.  An indirect test would be to press the red button and see if it doesn’t light up.  Congruence bias means that we tend to avoid the indirect test, and thus we can start to form irrational beliefs by mistaking correlation for causation, or by forming incomplete explanations of causation.  In the case of the aforementioned lighting device, it could be that both buttons cause it to light up.  Consider how this congruence bias affects our general decision making: combined with our confirmation bias, we are inclined not only to reaffirm our beliefs but also to avoid trying to disprove them (since we tend to avoid indirectly testing those beliefs).  This attitude and predisposition reinforces dogmatism by assuming the truth of one’s beliefs rather than trying to verify them in any rational, critical way.
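
Here is the two-button device written out as a tiny Python sketch (the device and its hidden behavior are of course hypothetical; the hidden truth in this version is that both buttons light the lamp):

    def lamp_lights(button):
        # Hidden ground truth, unknown to the experimenter:
        # BOTH buttons light the lamp, not just the green one.
        return button in ("green", "red")

    hypothesis = "only the green button lights the lamp"

    # Direct test (the one congruence bias favors): press green.
    print("green ->", lamp_lights("green"))   # True: seems to confirm it

    # Indirect test (the one we tend to skip): press red, expecting darkness.
    print("red   ->", lamp_lights("red"))     # True: the hypothesis is refuted

The direct test alone “confirms” the hypothesis; only the skipped indirect test reveals that it was wrong.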

An interesting cognitive bias that often works in conjunction with a person’s confirmation bias is something referred to as the frequency illusion, whereby some detail of an event or some specific object may enter a person’s thoughts or attention, and then suddenly it seems that they are experiencing the object or event at a higher than normal frequency.  For example, if a person thinks about a certain animal, say a turtle, they may start noticing lots of turtles around them that they would have normally overlooked.  This person may even go to a store and suddenly notice clothing or other gifts with turtle patterns or designs on them that they didn’t seem to notice before.  After all, “turtle” is on their mind, or at least in the back of their mind, so their brain is unconsciously “looking” for turtles, and the person isn’t aware of their own unconscious pattern recognition sensitivity.  As a result, they may think that this perceived higher frequency of “seeing turtles” is abnormal and that it must be more than simply a coincidence.  If this happens to a person that appeals to the supernatural, their confirmation bias may mistake this frequency illusion for a supernatural event, or something significant.  Since this happens unconsciously, people can’t control this illusion or prevent it from happening.  However, once a person is aware of the frequency illusion as a cognitive bias that exists, they can at least reassess their experiences with a larger toolbox of rational explanations, without having to appeal to the supernatural or other irrational belief systems.  So in this particular case, we can see how various cognitive biases can “stack up” with one another and cause serious problems in our reasoning abilities and negatively affect how accurately we perceive reality.

To read part 2 of 3, click here.

Objective Morality & Arguments For God


Morality is certainly an important facet of the human condition, and as a philosophical topic of such high regard, it clearly deserves critical reflection and thorough analysis.  It is often the case that when people think of ethics, moral values, and moral duties, religion enters the discussion, specifically in terms of the widely held (although certainly not ubiquitous) belief that religions provide some form of objective foundation for morals and ethics.  The primary concern here is determining whether our morals are ontologically objective in some way or another, and, even if they are, whether it is still accurate to describe morality as an emergent human construct that is malleable and produced by naturalistic socio-biological processes.

One of the most common theistic arguments, commonly referred to as the Divine Command Theory, states that the existence of a God (or many gods for that matter) necessarily provides an ontologically objective foundation for morals and ethics.  Furthermore, coinciding with this belief are the necessary supportive beliefs that God exists and that this God is inherently “good”, for if either of these assumptions were not also the case, then the theistic foundation for morals (i.e. what is deemed to be “good”) would be unjustified. The assumption that God exists, and that this God is inherently “good” is based upon yet a few more assumptions, although there is plenty of religious and philosophical contention regarding which assumptions are necessary, let alone which are valid.

Let’s examine some of the arguments that have been used to try and prove the existence of God as well as some arguments used to show that an existent God is necessarily good. After these arguments are examined, I will conclude this post with a brief look at moral objectivity including the most common motivations underlying its proposed existence, the implications of believing in theologically grounded objective morals, and finally, some thoughts about our possible moral future.

Cosmological Argument

The Cosmological Argument for God’s existence basically asserts that whatever begins to exist has a cause, and thus if the universe began to exist, it too must have had a cause.  It is then proposed that this initial cause is something transcending physical reality, something supernatural, or what many would refer to as a God.  We can see that this argument relies most heavily on the initial assumption of causality.  While causality certainly appears to be an attribute of our universe, Hume was correct to point out the problem of induction: causality itself is not known to exist by a priori reasoning, but rather by a posteriori reasoning, otherwise known as induction.  Because of this, our assumption of causality is not logically grounded, and therefore it is not necessarily true.

Clearly science relies on this assumption of causality as well as on the efficacy of induction, but its predictive power and efficacy only require that causal relationships hold up most of the time, or perhaps more precisely, that they hold up for the phenomena science wishes to describe.  It is not a requirement for performing science that everything be causally closed or operate under causal principles.  Even quantum mechanics has shown us apparently acausal properties, whereby atomic and subatomic particles exhibit seemingly random behavior with no local hidden variables found.  It may be the case that, ontologically speaking, the seemingly random quantum behavior is actually governed by causal processes (albeit with non-local hidden variables), but we’ve found no evidence for such causal processes.  So it seems unjustified to assume that causality necessarily holds, not only because this assumption is derived from logically uncertain induction alone, but also because within science, specifically within quantum physics, we’ve actually observed what appear to be completely acausal processes.  As such, it is both possible and plausible that the universe arose from acausal processes as well, a possibility made more plausible by the quantum mechanical principles that underlie it.

To provide a more satisfying explanation of how something could come from nothing (as in some acausal process), one could look at abstract concepts within mathematics for an analogy.  For example, if 0 = (-1) + (+1), and “0” is analogous to “nothing”, then couldn’t “nothing” (i.e. “0”) be considered equivalent to a collection of complementary “somethings” (e.g. “-1” and “+1”)?  That is, couldn’t a “0” state have existed prior to the Big Bang, which then produced two complementary universes, say, “-1” and “+1”?  Clearly one could ask how or why the “0” state transformed into anything at all, but if the collection or sum of those things is equivalent to the “0” one started with, then perhaps the question of how or why is an illogical question to begin with.  Perhaps such an ill-formulated question would be analogous to asking how zero can spontaneously give rise to zero.  In any case, quantum mechanical principles certainly defy logic and intuition, and so there’s no reason to suppose that the origins of the universe should be any less illogical or counter-intuitive.  Additionally, it is entirely possible that our conceptions of “nothing” and “something” are not ontologically accurate or coherent with respect to cosmology and quantum physics, even if we think of those concepts as trivial and seemingly obvious in other domains of knowledge.

Even if the universe was internally causal within its boundaries and thus with every process inside that universe, would that imply that the universe as a whole, from an external perspective, would be bound by the same causal processes?  To give an analogy, imagine that the universe is like a fishbowl, and the outer boundary of the fishbowl is completely opaque and impenetrable.  To all inhabitants inside the fishbowl (e.g. some fish swimming in water), there isn’t anything to suppose except for what exists within the boundary, i.e., the water, the fish, and the laws of physics that govern the motion and physical processes therein (e.g. buoyant or freely floating objects and a certain amount of frictional drag between the fish and the water).  Now it could be that this fishbowl of a universe is itself contained within a much larger environment (e.g. a multi-verse or some meta-space) with physical laws that don’t operate like those within the fishbowl.  For example, the meta-space could be completely dry, where the fishbowl of a universe isn’t itself buoyant or floating in any way, and the universe (when considered as one object) doesn’t experience any frictional drag between itself and the meta-space medium around it.  Due to the opaque surface of the fishbowl, the inhabitants are unaware that the fishbowl itself isn’t floating, just as they are unaware of any of the other foreign physical laws or properties that lay outside of it.  In the same sense, we could be erroneously assuming that the universe itself is a part of some causal process, simply because everything within the universe appears to operate under causal processes.  Thus, it may be the case that the universe as a whole, from an external perspective that we have no access to, is not governed by the laws we see within the universe, be they the laws of time, space, causality, etc.

Even if the universe was caused by something, one can always ask, what caused the cause?  The proposition that a God exists provides no solution to this problem, for we’d then want to know who or what created that God, and this creates an infinite regress.  If one tries to solve the infinite regress by contending that a God has always existed, then we can simplify the explanation further by removing any God from it and simply positing that the universe has always existed.  Even if the Big Bang model within cosmology is correct in some sense, what if the universe has constantly undergone some kind of cycle whereby each Big Bang is preceded by and eventually succeeded by a Big Crunch, ad infinitum?  Even if we have an epistemological limitation that prevents us from ever confirming such a model (for example, if the information of any previous universe is somehow lost with the start of every new cycle), it is certainly a possible model, and one that no longer requires an even more complex entity, such as a God, to explain it.

Fine-Tuning Argument

It is often claimed by theists that the dimensionless physical constants in our universe appear to be finely tuned such that matter, let alone intelligent life, could exist.  Supposedly, if these physical constants were changed by even a small amount, life as we know it (including the evolution of consciousness) wouldn’t be possible; therefore, the argument goes, the universe was finely tuned by an intelligent designer, or a God.  Furthermore, it is often argued that it has been finely tuned for the eventual evolution of conscious human beings.

One question that can be posed in response to this argument is whether or not the physical constants could be better than they currently are, such that the universe would be even more conducive to matter, life, and eventually intelligent life.  Indeed, some physicists have argued that the physical constants could be much better than they are, and we can also clearly see that the universe is statistically inhospitable to life, a point consistent with the fact that we have yet to find life elsewhere in the universe.  Statistically, it is still very likely that life exists in many other places throughout the universe, but it certainly doesn’t exist in most places.  Changing the physical constants in just the right way could indeed make life far more ubiquitous.  So it doesn’t appear that the universe was really finely tuned at all, at least not for any of the reasons that have been supposed.

There have also been other naturalistic theories presented as possible solutions to the fine-tuning argument, such as that of the multi-verse, whereby we are but one universe living among an extremely large number of other universes (potentially infinite, although not necessarily so), each with slightly different physical constants.  In a way, we could say that a form of natural selection among universes occurs, where the appearance of a finely tuned universe is analogous to the apparent design in biological nature.  We now know that natural selection, along with some differentiation mechanism, is all that is necessary to produce the appearance of designed phenotypes.  The same thing could apply to universes, and by the anthropic principle, any universe with physical constants within the particular range conducive to life, and eventually intelligent life, would necessarily be the type of universe we find ourselves living in such that we can even ask the question.  That is, some universes could be naturally selected to undergo the evolution of consciousness and eventually self-awareness.
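As a toy illustration of this anthropic reasoning (a sketch of my own, with entirely arbitrary numbers, not drawn from any cosmological model), one can sample many hypothetical universes with a random constant and note that, however rare the life-permitting band is, every observer necessarily finds their own universe inside it:

```python
import random

# Toy anthropic-principle sketch (arbitrary numbers, for illustration only):
# sample many "universes", each with one random dimensionless constant, and
# call a narrow band of values "life-permitting". Observers can only ever
# find themselves inside that band, however rare it is overall.
random.seed(42)

N = 1_000_000
LIFE_BAND = (0.499, 0.501)  # an arbitrarily narrow "fine-tuned" range

universes = [random.random() for _ in range(N)]
with_observers = [u for u in universes if LIFE_BAND[0] <= u <= LIFE_BAND[1]]

print(f"Fraction of universes with observers: {len(with_observers) / N:.4%}")
# By construction, every observer's universe falls inside the band:
print(all(LIFE_BAND[0] <= u <= LIFE_BAND[1] for u in with_observers))  # True
```

The point of the toy model is simply that fine-tuning is not surprising from the inside: the only universes in which the question gets asked are the life-permitting ones.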

There have been other theories presented to account for the appearance of a finely tuned universe such as a quantum superposition of initial conditions during the Big Bang, but they utilize the same basic principles of cosmic differentiation and natural selection, and so need not be mentioned further.  In any case, we can see that there are several possible naturalistic explanations for what appear to be finely tuned physical constants.

An even more important point worth mentioning is the possibility that every combination of physical constants could produce some form of consciousness completely unfathomable to us.  We have yet to solve the mind-body problem (if it is indeed solvable), and so without knowing what physical mechanism produces consciousness, are we justified in assuming which processes cannot produce consciousness?  Even if consciousness as we know it is limited to carbon-based biological organisms with brains, can we justifiably dismiss the possibility of completely different mechanisms and processes that lead to some form of self-regulating “life”, “consciousness”, or “awareness”?  Even a form of life or consciousness that does not involve brains, let alone atoms or molecules?  If so, then all universes could have some form of “life” or “consciousness”, even if they would never come close to falling within our limited definitions of such concepts.

“God is Good” & The Ontological Argument

The assumption that any God which exists must necessarily be a good God is required for one to believe that the existence of that God provides an ontologically objective foundation for morals and ethics.  So what exactly is the basis for this assumption that a God must necessarily be good?

Many have derived this assumption from some versions of what is known as the Ontological Argument for God’s existence.  This argument, believed to have been first asserted by St. Anselm of Canterbury in 1078 CE, basically asserts that God is, by definition, the greatest conceivable being.  However, if the greatest conceivable being were limited to the mind, that is, existing only as a mental construct, then an even greater conceivable being would be possible, namely one that actually exists outside of the mind as an entity in reality; therefore, God exists in both the mind as well as in reality.  Furthermore, regarding the concept of God being good, some people take this argument further and believe that the greatest conceivable being, that is, a God, also has to be good, since it is believed that the most perfect God would, by definition, deserve to be worshipped and would only create or command that which is best.  So it follows, by the Ontological Argument, not only that God exists, but also that God must necessarily be good.

One obvious criticism of this argument is that merely conceiving of something certainly doesn’t make that conception exist in any sense other than as a mental construct.  Even if I can conceive of a perfect object, like a planet that is perfectly spherical for example, this doesn’t mean that it necessarily has to exist.  Even if I limit my conceptions to a perfect God, what if I conceive of two perfect beings, with the assumption that two perfect beings are somehow better than one?  Does this mean that two perfect beings must necessarily exist?  How about an infinite number of perfect beings?  Isn’t an infinite number of infinitely perfect beings the best conception of all?  If so, why isn’t this conception necessarily existent in reality as well?  Such an assertion would indeed provide proof for polytheism.  One could certainly argue over which conceptions are truly perfect or the best, and thus which should necessarily produce something in reality, but regardless, one still hasn’t shown how conceptions alone can lead to realities.  Notice also that the crux of St. Anselm’s argument is dependent on one’s definition of what God is, which leads me to what I believe to be a much more important criticism of the Ontological Argument.

The primary criticism I have of such an argument, or of any argument claiming particular attributes of a God for that matter, is the lack of justification for assuming that anyone could actually know anything about a God.  Are we to assume that any attributes of a God should necessarily fall within the limits of human comprehension?  This assumption of such a potent human capacity for understanding sounds incredibly pretentious, egotistical, and entirely unsubstantiated.  As for the common assumptions about what God is, why would a God necessarily have to be different from, or independent of, the universe itself, as presumably required for an ontologically objective foundation for morality?  Pantheists, for example (who can be classified as atheists as far as I’m concerned), assume that the universe itself is God, and thus the universe needed no creator nor anything independent of itself.  Everything in the universe is considered a part of that God, and that’s simply all there is to it.

If one takes a leap of faith and assumes that a presumed God not only exists, but is indeed also independent of the universe in some way, aren’t they even less justified in making claims about the attributes of this God?  Wouldn’t it be reasonable to assume that they’d have an even larger epistemological barrier between themselves and an external, separate, and independent God?  It seems incredibly clear that any claims about what a God would be like are based on the unsubstantiated assumption that humans must necessarily have access to such knowledge, and in order to hold such a view, it seems that one would have to abandon all logic and reasoning.

Euthyphro Dilemma

One common challenge to the Divine Command Theory mentioned earlier is the Euthyphro dilemma, whereby one must determine whether actions are good simply because a presumed God commands them, or rather that the presumed God commands particular actions because they are good independently of that God.  If the former premise is chosen, this would imply that whatever a God commands must be moral and good, even if humans or others see those commands as immoral.  If the latter premise is chosen, then morality is clearly not dependent on God, thus defeating the Divine Command Theory altogether as well as the precept that God is omnipotent (since God in this case wouldn’t ultimately have control over defining what is good and what is not).  So those who subscribe to the Divine Command Theory, it appears, also have to accept that all commanded actions (no matter how immoral they may seem to us) are indeed moral simply because a God commands them.  One should also contemplate that if a God were theoretically able to modify its commands over time (presumably possible for an omnipotent God), then any theological objective foundation for morals would be malleable and subject to change, thereby reducing, if not defeating, the pragmatic utility of that objective foundation.

There are many people who have absolutely no problem with such Divine Command Theory assumptions, including the many theists who accept and justify the purported acts of their God (or gods), despite there being an enormous number of people outside of those particular religions who see many of those acts as heinous and morally reprehensible (e.g. divinely authorized war, murder, rape, genocide, slavery, racism, sexism, sexual-orientationism, etc.).  Another problem that exists for the Divine Command Theory is that of contradictory divine commands, whereby many different religions each claim to follow divine commands despite the fact that the divine commands of one religion may differ from those of another.  These differences indicate that even if the Divine Command Theory were true, people don’t agree on what the divine commands are, and there is no known method for confirming what the true divine commands are; the theory is therefore pragmatically useless, as it fails to provide any way of knowing what these ontologically objective morals and ethics would be.  In other words, even if morals did have a theologically-based ontologically objective foundation, it appears that we have an epistemological barrier that prevents us from ever confirming such an objective status.

Argument from Morality for the Existence of God

Some believe in what is often referred to as the “Moral Argument for God” or the “Argument from Morality”.  At least one variation asserts that because moral values exist in some sense, it follows that a God must necessarily exist, since nature on its own appears to be morally neutral, having no apparent reason or mechanism for producing moral values from purely physical or materialistic processes.  One can also see that by accepting such an assertion, if one wants to believe in the existence of an objective foundation for morals, one need only believe that morals exist, for this supposedly implies that God exists, and it is presumed that an existent God (if one accepts the common assumption that “God” must be good, as explained earlier) also provides an objective foundation for morals.

Well, what if morals are not actually separate from naturalistic mechanisms and explanations?  While nature may appear to be morally neutral, there is evidence to suggest that what we often call “morality” (at least partially) resulted from natural selection pressures ingraining into humans a tendency for reciprocal altruism, among other innate behaviors that have been beneficial to the survival of our highly social species, or at least beneficial in the context of the environment we once lived in prior to our cultural evolution into civilization.  For example, altruism, which can roughly be expressed or represented by the Golden Rule (i.e. do to others what you would have them do to you), is a beneficial behavior, for it provides an impulse toward productive cooperation and reciprocal favors between individuals.  Another example of innate morality would be the innate aversion to incest, which also makes evolutionary sense because incestuous reproduction is more likely to produce birth defects, since offspring are more likely to inherit two copies of the same harmful recessive mutation and thus express it.
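The evolutionary logic behind reciprocal altruism can be made concrete with a standard toy model, the iterated prisoner’s dilemma.  The sketch below is my own illustration (using the conventional payoff values, not anything specific to this post): a reciprocating strategy, “tit-for-tat”, sustains cooperation with its own kind while refusing to be repeatedly exploited by unconditional defectors, which is one way selection could favor Golden-Rule-like behavior.

```python
# Toy iterated prisoner's dilemma: reciprocal altruism ("tit-for-tat")
# versus unconditional defection, with the conventional payoff matrix.
PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []  # each entry: (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # mutual cooperation: (300, 300)
print(play(tit_for_tat, always_defect))    # exploited once, then retaliates: (99, 104)
print(play(always_defect, always_defect))  # mutual defection: (100, 100)
```

Notice that reciprocators paired with reciprocators far outscore any pairing that involves defection, which is the basic reason cooperative instincts can be evolutionarily stable.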

These innate tendencies, that is, what we innately feel to be good and bad behaviors, are what we often label as “moral” and “immoral” behaviors, respectively.  It is certainly plausible that after our unconscious, pre-conscious, or primitively conscious ancestors evolved into self-aware and more complex conscious beings that were able to culturally transmit information over generations as well as learn new behavior, they also realized that their innate tendencies and feelings were basically fixed attributes of their human nature that couldn’t simply be unlearned or modified culturally.  Without having any idea where these innate tendencies came from, due to a lack of knowledge about evolutionary biology and psychology, humans likely intuitively concluded that moral values (or at least those that are innate) were something supernaturally based or divinely ordained.  It is at least arguable that not all morals that humans subscribe to are necessarily innate, as there also appears to be a malleable moral influence derived from the cultural transmission of certain memes, often aided by our intellectual ability to override certain instincts.  However, I think it would be more accurate to say that our most fundamental goals in life in terms of achieving personal satisfaction (through cultivating virtues and behaving with respect to the known consequences of our actions) constitute our fundamental morality — and I think that this morality is indeed innate, based on evolutionary psychology, biology, etc.

Additionally, a large number of these culturally transmitted behaviors (that we often label as “morals”) align with our innate moral tendencies anyway; for example, memes promoting racism may be supported by our natural tendency to conveniently lump people into groups and see outsiders as dangerous or threatening.  Or the opposite may occur, as when memes promoting racial equality are supported by our natural tendency for racially-neutral reciprocal altruism.  Clearly, what we tend to call “morals” are an amalgam of culturally transmitted ideas as well as innate predispositions; that is, they result from socio-biological or cultural-biological processes — even if there is an innate fundamental morality that serves as an objective foundation for those culturally constructed morals.

Moreover, since other animals (or at least most other animals) do not seem to exhibit what we call moral behavior, it is likely that most humans saw it (and many still continue to see it) as a unique property of humans alone, and thus somehow existing independently of the rest of the nature around them.  One response to this anthropocentric perspective would be to note that if we look at other animals’ behavior, they may just as easily be described as having their own morals based on their own naturally selected innate behavioral tendencies, even if those morals are completely different from our own, and even if those morals are not as intelligently informed due to our more complex brains and self-awareness (most notably in the case of culturally transmitted morals).  Now it may be true that what evolutionary biologists, psychologists, and sociologists have discovered to be the mechanism or explanation for human morality, as well as how we choose to define that morality naturalistically, is not something that certain people want to accept.  However, that lack of acceptance or lack of comfort doesn’t make it any less true or any less plausible.  It seems that some people simply want morality to have a different kind of ontological status or some level of objectivity, such that they can find more solace in their convictions and also to support their anthropocentric presuppositions.

Objective Morality, Moral Growth, and our Moral Future

While the many arguments for God have been refuted or at least highly challenged, it appears that the actual existence of God isn’t nearly as important as people’s belief in such a God, especially when it comes to concepts such as morality.  Sartre once quoted Dostoyevsky as saying, “If there is no God, then everything is permissible.”  I personally feel that this quote illustrates quite eloquently why so many people feel compelled to argue that a God exists (among other reasons), as many seem to feel that without the notion of a God existing, the supposed lack of an objective foundation for morality will lead people to do whatever they want to do, and thus people will no longer adhere to truly “moral” behavior.  However, as we can clearly see, there are many atheists who behave quite morally relative to the Golden Rule, if we must indeed specify some moral frame of reference.  There are also plenty of people who believe in a God and yet behave in ways that are morally reprehensible relative to the same Golden Rule standard.  The key difference between the atheist and the theist, at least concerning moral objectivity, is that the atheist, by definition, doesn’t believe that any of their behavior has a theologically grounded objective ontological status to justify it, although the atheist may still believe in some type of moral objectivity (likely grounded in a science of morality, which is a view I actually agree with).  The theist, on the other hand, does believe in a theological basis for moral objectivity, so if either the atheist or the theist behaves in ways that you or I would find morally reprehensible, only the theist could actually feel religiously justified in doing so.

Regarding the concern for a foundation for morals, I think it is fair to say that the innate morality of human beings, that is, those morals that have been ingrained in us for evolutionary reasons (such as altruism), could be described as having a reliable foundation, even if not an ontologically objective one.  On top of this “naturally selected” foundation for morality, we can build further by first asking ourselves why we believe moral behavior is important in the first place.  If humans overwhelmingly agree that morality is important for promoting and maximizing the well-being of conscious creatures (with higher-level conscious creatures prioritized over those with less complex brains and lower-level consciousness), or if they agree with the complementary formulation of that proposition, that morality is important for inhibiting and minimizing the suffering of conscious creatures, then one could say that humans at least have a moral axiom that they could subscribe to.  This moral axiom, i.e., that moral behavior is defined as that which maximizes the well-being of conscious creatures (as proposed by many “Science of Morality” proponents such as Sam Harris), is indeed an axiom that one can further build upon, refine, and implement through the use of epistemologically objective methods in science.  Even if this “moral axiom” doesn’t provide an ontologically objective morality, it has a foundation that is grounded on human intuition, reason, and empirical data.  If one argues that this still isn’t as good as having a theologically grounded ontologically objective morality, then one must realize that the theological assumptions for said moral objectivity have no empirical basis at all.  After all, even if a God does in fact exist, why exactly would a God necessarily provide an objective foundation for morals?  More importantly, as I mentioned earlier, there appears to be no epistemologically objective way to ascertain any ontologically objective morals, so it doesn’t really matter anyway.

One can also see that the theist’s position, in terms of which morals to follow, is supposedly fixed, although history has shown us that religions and their morals can change over time, either by modifying the scripture or basic tenets, or by modifying the interpretation of said scripture or basic tenets.  Even when moral modifications take place within a religion or among its followers, the claim of moral objectivity (and an intentional resistance to changing those morals) is often, paradoxically, maintained.  On the other hand, the atheist’s position on morals is not inherently fixed, and thus the atheist is at least amenable to reason as a means of modifying the morals they subscribe to, with the potential to culturally adapt to a society that increasingly abhors war, murder, rape, genocide, slavery, racism, sexism, sexual-orientationism, etc.  Whereas the typical theist cannot morally adapt to the culturally evolving world around them (at least not consciously or admittedly), even as more evidence and data are obtained pertaining to a better understanding of that world, the typical atheist indeed has these opportunities for moral growth.

As I’ve mentioned in previous posts, human nature is malleable and will continue to change as our species continues to evolve.  As such, our innate predispositions regarding moral behavior will likely continue to change as they have throughout our evolutionary history.  If we utilize “engineered selection” through the aid of genetic engineering, our moral malleability will be catalyzed, and these changes to human nature will take hold incredibly quickly and with conscious foresight.  Theists are no exception to evolution, and thus they will continue to evolve as well, and as a result their innate morality will also be subject to change.  Any changes that do occur to human nature will also likely affect which memes are culturally transmitted (including memes pertaining to morality), and thus morality will likely continue to be a dynamic amalgam of both biological and cultural influences.  So despite the theistic fight for an objective foundation for morality, it appears that the complex interplay between evolution and culture that led to theism in the first place will continue to change, and the false idea that any ontologically objective foundation for morality exists will likely continue to dissipate.

History has shown us that reason, along with our innate drive for reciprocal altruism, is all we need in order to behave in ways that adhere to the Golden Rule (or to some other moral axiom that maximizes the well-being of conscious creatures).  Reason and altruism have also given us the capability of adapting our morals as we learn more about our species and the consequences of our actions.  These assets, combined with a genetically malleable human nature, will likely lead us to new moral heights over time.  In the meantime, we have reason and an innate drive for altruism to morally guide us.  It should be recognized that some religions which profess the existence of a God and an objective morality also abide by some altruistic principles, but many of them do not (or do so inconsistently), and when they do, they are likely driven by our innate altruism anyway.  However, it takes belief in a God and its objective foundation for morality to most effectively justify behaving in any way imaginable, often in ways that negate both reason and our instinctual drive for altruism, and often reinforced by the temptation of eternal reward and the threat of eternal damnation.  In any case, the belief in moral objectivity (or more specifically moral absolutes), let alone the belief in a theologically grounded moral objectivity or absolutism, appears to be a potentially dangerous one.

Religious Paradigms in the Wake of Science


Albert Einstein once said, “Science without religion is lame, religion without science is blind.”  From my perspective, the latter is most certainly true, as science is the only way we’ve been able to gain a falsifiable world view of our universe.  As for the former, it seems that Einstein was mainly pointing out how religion largely precipitated from the human aspiration to ascertain truth, and without that drive for truth, science would be ineffective.  That also sounds reasonable, as early on and throughout most of human history, religion was more or less the dominant world view used to provide explanations for the unknown.  Many if not most of these explanations were supernatural, and the world view in general was also highly anthropomorphic and anthropocentric, perhaps due to its highly subjective basis and the failure to see that subjectivity bias as a fundamental problem (even if it sometimes produces more intuitive explanations).  For a more in-depth analysis of religion, I recommend you read one of my previous posts.

As people stumbled upon science and realized that the same empirical, causally-based methodologies used to tackle everyday problems could be applied to the investigation of all phenomena, science slowly but surely began replacing religious world views with a more objective perspective, as the human quests for truth, understanding, and predictive power are perpetuated.  In the hopes of maintaining many of the old religious world views, there has no doubt been an enormous amount of religious opposition to science.  It’s certainly not difficult to see why so many different religious proponents oppose science.  After all, the pragmatic knowledge and explanatory power derived from science has replaced the hundreds of different gods and supernatural explanations proposed over the centuries, and it has also been taking power away from the priests and clergy whose authority throughout history has been based on the presumed existence of those gods and supernatural processes.  Above and beyond eliminating the “gods of the gaps” one by one, science has also been refuting some primary and often necessary assumptions within certain religions.  Overall, it seems that the religious world views are slowly fading away in the wake of science.  Let’s examine a few…

Human Origins

There is a strong belief held by many religious proponents that human beings, along with all other species, were created by a deity in their present form.  Science has shown us no evidence of any deities, but it has shown us a plethora of evidence within evolutionary biology (among other disciplines) that human beings, like all other life forms on Earth, have indeed evolved from a common ancestor, thus forming the diversity of life we see today.  Furthermore, we are seeing many different species continue to evolve (including human beings).  Despite the scientific consensus that evolution is a fact, there are a large number of people who ignore the evidence in order to preserve their creation origin myths as well as many other parts of their old world view.  While this ignorance may be seen as inconsequential to some (people are entitled to their own beliefs, after all), it definitely becomes problematic when it enters and poisons the educational and political spheres of society, where reason and intellect are needed most.

Some people have actually gone so far as to try and add Creationism as a complement to the Theory of Evolution currently being taught within the science curriculum of various public schools, despite the fact that the creationists’ claims aren’t supported by any scientific evidence and thus should remain in the academic realms of cultural studies, religious studies, and mythology.  To make matters worse, many religious proponents have also tried to use pseudo-scientific arguments to disprove evolution (although to no avail).  Some have even resorted to using the intellectually dishonest (or merely ignorant) argument claiming that “evolution is just a theory”, not realizing that the meaning of the word “theory” within science is quite different from the common everyday usage.  Whereas the common everyday usage of the word “theory” is meant to imply a “hypothesis”, the scientific usage implies an explanation with a factual basis that is generally supported by most if not all of the scientific community within the relevant fields.  Einstein’s General Theory of Relativity is no different, and thus would also be considered “just a theory”, but we know for a fact that some force which we call “gravity” does indeed exist, and this force also produces the measurable temporal dilation, as well as the non-Euclidean or curved space effects, predicted by the theory.  While some of the details of these theories may remain under contention, and while the theories may be incomplete in one way or another, the main crux of these scientific theories is widely accepted as scientific fact.
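To give a sense of how concrete and measurable these relativistic predictions are, here is a back-of-the-envelope calculation (my own illustrative sketch, using the standard first-order formulas and textbook values, not anything from this post): the net rate at which a GPS satellite’s clock drifts relative to a clock on the ground.

```python
# Simplified first-order estimate (not a full general-relativistic
# treatment): daily clock drift of a GPS satellite relative to the ground,
# combining gravitational and velocity time dilation.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24        # mass of Earth, kg
c = 2.998e8         # speed of light, m/s
R_EARTH = 6.371e6   # Earth's mean radius, m
R_ORBIT = 2.657e7   # GPS orbital radius (~20,200 km altitude), m

# Weaker gravity at altitude makes the satellite clock run fast...
grav_rate = (G * M / c**2) * (1 / R_EARTH - 1 / R_ORBIT)

# ...while its orbital speed makes it run slow (special relativity).
v_squared = G * M / R_ORBIT          # circular orbital speed, squared
vel_rate = -v_squared / (2 * c**2)

SECONDS_PER_DAY = 86_400
drift_us = (grav_rate + vel_rate) * SECONDS_PER_DAY * 1e6
print(f"Net drift: about {drift_us:.0f} microseconds per day")  # ~ +38 us/day
```

GPS systems really do have to correct for this roughly 38-microsecond daily offset, which is one of the more striking everyday confirmations of the theory.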

These kinds of arguments and tactics have little historical precedent, for in the past, religious claims were largely supported by religious authority and intuition alone and didn’t require falsifiable scientific support.  As science has continued to gain more influence and followers through its explanatory power, and as more educated people have begun to participate in these kinds of public discourses, the necessity of scientifically grounded arguments has grown substantially.  So it isn’t all that surprising to see many people with religious-based world views try to find scientific arguments to support their case, although it is obviously hypocritical and inconsistent when the same people undermine science once it no longer supports their position.  The crucial difference worth noting here is that science is ultimately about trying to find the explanatory and descriptive model that fits the data best, whereas those trying to prove religious beliefs to be true are effectively cherry-picking data to fit a presupposed model.  That is, science is always willing to scrap a poor model for a better one with more explanatory and predictive power as more and more data are collected, whereas religion clings to one model and one model only, no matter how poorly it fits the ever-increasing amount of data and despite its usual lack of explanatory and predictive power.

Teleological Evolution of Humans

Evolutionary theists believe that evolution is factual, but some of them also believe that evolution has had a specific purpose or end-goal, determined by a deity, namely to produce human beings (another example of religious anthropocentrism).  In a few of these religious accounts, it has been suggested that once humans evolved from other life forms, they were given a soul and have been participating in some kind of ongoing religious narrative.  Some have claimed that humans evolved to worship some god(s), to prepare for an apocalypse, to prepare for the afterlife, and other similar stories.  The main point here is that within these types of religious claims, the human species is purportedly the final speciation goal of evolution, and as a result, humans are thought to be the most remarkable, most intellectually capable, and most important species that will ever exist.

In terms of the scientific credibility of such claims, none of them are falsifiable except perhaps one — that humans are the end-all be-all of evolution and speciation, or to put it another way, that humans (or any other species for that matter) will not evolve further (let alone evolve to produce a species that is more remarkable or one with more intelligent capabilities than Homo sapiens).  We can already see that the assumption that humans will no longer evolve is patently false by noticing some relatively recent evolutionary changes to human beings, including the otherwise unnecessary ability of some human adults to digest lactose (this mutation became favorable after the relatively recent development of agriculture and dairy farming several thousand years ago), the existence of specific disease resistances (and their genetic markers) within certain ethnic populations, and other gene pool changes due to genetic drift.
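To illustrate how quickly a favorable mutation like lactase persistence can spread, here is a toy population-genetics simulation (my own sketch, with made-up parameter values chosen only for illustration), in the spirit of a simple Wright-Fisher model with selection:

```python
import random

# Toy simulation of an advantageous allele (e.g. lactase persistence after
# dairy farming arose) rising in frequency under selection plus drift.
# Parameter values are illustrative, not empirical estimates.
def simulate(pop_size=10_000, s=0.03, p0=0.01, generations=300):
    p = p0                 # starting frequency of the advantageous allele
    for _ in range(generations):
        # Selection: the allele is (1 + s) times as likely to be passed on.
        w = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Drift: binomial sampling of the next generation's gene pool.
        p = sum(random.random() < w for _ in range(pop_size)) / pop_size
    return p

random.seed(0)
print(f"Final frequency after 300 generations: {simulate():.1%}")
```

In this toy model, even a modest 3% fitness advantage typically takes the allele from rare to nearly universal within a few hundred generations, which is roughly the number of generations since dairy farming began.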

Perhaps more importantly, with the recent development of genetic engineering, we are beginning to consciously and directly guide our own evolution at the molecular level (and the evolution of other species).  As this technology develops further, we are likely to change extremely quickly into a completely different species, one with more advanced capabilities engineered into the genome.  Interestingly enough, there hadn’t been any evidence for the teleological evolution of any species until relatively recently, and it is human beings that are teleologically driving it through both artificial and what I call “engineered” selection.

Free Will

If science has shown us anything, it has shown us that there is a causal structure in the world around us, in which events that occur are ultimately caused by prior events.  If this weren’t the case, then we could never successfully apply the scientific method, let alone live our daily lives with any predictable order or structure.  Fortunately, because of the causal (and potentially deterministic) nature of our universe, we’ve been able to successfully formulate hypotheses, test them, and use the results to make further testable predictions.  Regarding free will, there is no known way for humans (or any other entity or object for that matter) to circumvent this causality without their actions being causa sui (self-caused), which would not only undermine the process of rational thought (which depends on causal thought processes), but would also go against every bit of scientific evidence we have obtained thus far.

Even if the randomness proposed within quantum mechanics were ontologically the case (which we’ll likely never know), randomness can’t produce freely willed actions either, since there have to be non-random conscious intentions and thought processes behind any deliberate action.  So whether the universe is ontologically deterministic or indeterministic (i.e. random), classical free will is logically incompatible with either possibility.  Obviously this presents a serious problem for those religious views which assume that humans do in fact possess free will.  Concepts such as moral responsibility, human fate in some proposed afterlife, karma, etc., lose their luster when free will is taken out of the equation, since this would imply that any supposed spiritual fate isn’t something we can actually change or control anyway, and thus any implemented punishment or reward is ultimately futile.

Despite the fact that we don’t have free will, we all live with the illusion of free will, since we don’t directly experience the prior causes of our thoughts and subsequent actions, and thus we truly feel that we self-cause those thoughts and actions.  In the grand scheme of things, even without any free will, we can see that our societal approach of implementing laws, crime deterrence measures, and any punishment-reward system for that matter isn’t based on the assumption that we can freely choose our behaviors so much as on their efficacy in maximizing safety, productivity, and what society deems to be acceptable behavior.  This efficacy is achieved primarily through the physical constraint measures put into place as well as the pragmatic application of psychological conditioning principles.

It doesn’t ultimately matter whether or not we could have chosen to behave differently, unless one is trying to maintain certain metaphysical presuppositions, such as those proposed in many religions.  However, our recognition that free will doesn’t exist can certainly affect how we approach problems in society.  As science demonstrates that we lack free will through the discovery of causal constraints such as genes, the body’s internal environment, and the body’s external physical environment (including that which causes the psychological conditioning of the brain), we’re becoming increasingly able to address the actual root causes of many problematic behaviors.  In doing so, rather than wasting resources and erroneously blaming an individual for not “choosing” to behave differently (as in many religions), we can appropriately view every individual as an innocent amalgam of genetic and environmental information (regardless of their behavior) and then take more effective measures to improve their behavior by attempting to change any problematic genes and environmental factors.

Struggle for Morality

One of the most pressing issues regarding the human condition is the constant struggle to behave in ways that society deems to be moral.  Many religions have their own ideas about what constitutes moral behavior, and they often claim that their particular morals are ordained by a god or some form of divine authority.  It is also common for morality and immorality to play an important role within various religious narratives.  For example, within the Abrahamic religions, if a person commits what the religion deems to be immoral acts, that is, if they “sin”, and this person does not repent or have their sins absolved, they are destined for eternal damnation.  Within Christianity, “sin” is considered an inevitable condition passed down from generation to generation ever since the supposed “fall of man” which, as the story goes, began with a first ancestor, named Adam.  This concept of seeing humans as inherent sinners is sometimes referred to as “original sin”.

As was mentioned in the previous section, humans’ lack of free will suggests that humans ultimately have no control over whether they “sin” or not.  Behavior is determined by prior causes such as a person’s genes and the psychological conditioning they’ve undergone throughout their lives.  Evolutionary biologists have also shown that humans often behave in ways that society or various religions deem immoral because of selfish genes as well as an ongoing conflict between biological instincts and societal conventions and expectations.

The strategies that genes implement through their respective phenotypes (including behavior) tend to perpetuate those genes through means of self-preservation, reproductive success, and subsequent child-rearing success.  Additionally, because of the incredible speed of cultural evolution and ever-changing social conventions, humans may find it difficult to adhere to particular conventions, since their biological evolution lags behind that cultural evolution.  To give some examples, killing others or stealing likely increased (at least long ago) one’s chances of survival or of successful mating by gaining power, property, and social status.  Infidelity could also be seen as a result of being sexually attracted to others because they may provide better genes for new offspring or simply provide more offspring.  Also, if humans are naturally more of a polygamous primate, it would make sense that monogamy, even if the current societal convention, would be difficult to maintain.  Thus, there are many possible reasons for why humans behave the way they do, and science has continued to enlighten us with these reasons as we gain more information from evolutionary biology and psychology (among other disciplines).

As for the religious claim that humans will always be immoral, a few things must be made clear.  For one, morality is largely determined by society, and so what is considered moral in one society may be considered immoral in another.  Despite the claim by some religious proponents that religions provide some kind of objective foundation for ethics and morals, we can see that different religions often proclaim different morals, so it is clear that no such objectivity exists.  Science and reason, on the other hand, do provide a useful resource for answering moral questions: by showing us in detail the consequences of our actions (such that we can better determine how we ought to behave), by providing us with a clearer picture of how the world really is so that our moral goals aren’t based on false pretenses, and by providing us with increasingly better ways to achieve those moral goals.

As we continue to evolve as humans or into another species entirely, our innate feelings or instincts about what is moral or immoral will likely continue to change (as will our behavior) since anything that is innate has a biological basis. Most importantly, as we continue to consciously guide our own species’ evolution through genetic engineering, we will have the power to shape human nature into anything we desire. In other words, we aren’t necessarily trapped in a struggle for morality as many religions claim, for we are going to have greater and greater abilities to change our instinctual behavior such that we are naturally inclined to behave in any way that society desires. The key point here is that, as opposed to some religious views which assume that mankind is forever doomed to immoral behavior, science is providing a way out of this supposed predicament.

Final Thoughts

It’s not at all surprising to see certain religious groups highly opposed to science, for there are countless ways in which science has threatened their world view.  Even the fear of death, which has likely attracted people to religion with its promise of eternal life, is being addressed by science, as advances in genetic engineering, medicine, and artificial intelligence work toward increasing life expectancy, potentially to the theoretical upper limit (i.e. for as long as the universe is able to support life, given the Second Law of Thermodynamics, etc.).

One striking parallel between many religious claims and the actual efficacy of science is that science truly appears to be providing the ultimate salvation of our species (and whatever species we will become).  However, it is being accomplished by taking the ever-increasing knowledge acquired over time and addressing every problem we face within the human condition, one by one.  While religion has played an important role in history, most notably in the human quest for truth, it seems clear to me that history has also shown us that the more we accept and use science to learn about the universe, the better chance we have of achieving our goals as our species continually evolves.

An Evolved Consciousness Creating Conscious Evolution


Two Evolutionary Leaps That Changed It All

As I’ve mentioned in a previous post, human biological evolution has led to the emergence of not only consciousness but also a co-existing yet semi-independent cultural evolution (through the unique evolution of the human brain).  This evolutionary leap has allowed us to produce increasingly powerful technologies which in turn have provided a means for circumventing many natural selection pressures that our physical bodies would otherwise be unable to handle.

One of these technologies has been the selective breeding of plants and animals, a process often referred to as “artificial” selection, as opposed to “natural” selection, since human beings have served as an artificial selection pressure (rather than the natural selection pressures of the environment in general).  In the case of our discovery of artificial selection, by choosing which plants and animals to cultivate and raise, we basically just catalyzed the selection process by providing a selection pressure based on the plant or animal traits that we desired most.  By doing so, rather than the selection process taking thousands or even millions of years to produce what we have today (in terms of domesticated plants and animals), it took only a minute fraction of that time, since it was mediated through a consciously guided or teleological process, unlike natural selection, which operates on randomly differentiating traits leading to differential reproductive success (and thus new genomes and species) over time.

This second evolutionary leap (artificial selection, that is) has ultimately paved the way for civilization, as it has broadened our diet and thus our available options for food, and the resultant agriculture has allowed us to increase our population density such that human collaboration, complex division of labor, and ultimately the means for creating new and increasingly complex technologies have been made possible.  It is largely because of this evolutionary leap that we’ve been able to reach the current pinnacle of human evolution: the newest and perhaps our last evolutionary leap, or what I’ve previously referred to as “engineered selection”.

With artificial selection, we’ve been able to create new species of plants and animals with very unique and unprecedented traits; however, we’ve been limited by the rate of mutations or other genomic differentiation mechanisms that must arise in order to create any new and desirable traits.  With engineered selection, we can simply select or engineer the genomic sequences required to produce the desired traits, effectively allowing us to circumvent any rate limitations on genomic differentiation and also giving us instant access to every genomic possibility.

Genetic Engineering Progress & Applications

After a few decades of genetic engineering research, we’ve gained a number of capabilities including but not limited to: producing recombinant DNA, producing transgenic organisms, utilizing in vivo trans-species protein production, and even creating the world’s first synthetic life form (by adding a completely synthetic or human-constructed bacterial genome to a cell containing no DNA).  The plethora of potential applications for genetic engineering (as well as those applications currently in use) has continued to grow as scientists and other creative thinkers are further discovering the power and scope of areas such as mimetics, micro-organism domestication, nano-biomaterials, and many other inter-related niches.

Domestication of Genetically Engineered Micro and Macro-organisms

People have been genetically modifying plants and animals for the same reasons they’ve been artificially selecting them — in order to produce species with more desirable traits. Plants and animals have been genetically engineered to withstand harsher climates, resist harmful herbicides or pesticides (or produce their own pesticides), produce more food or calories per acre (or more nutritious food when all else is equal), etc.  Plants and animals have also been genetically modified for the purposes of “pharming”, where substances that aren’t normally produced by the plant or animal (e.g. pharmacological substances, vaccines, etc.) are expressed, extracted, and then purified.

One of the most compelling applications of genetic engineering within agriculture involves solving the “omnivore’s dilemma”, that is, the prospect of growing unconscious livestock by genetically inhibiting the development of certain parts of the brain so that the animal doesn’t experience any pain or suffering.  There have also been advancements made with in vitro meat, that is, producing cultured meat cells such that no actual animal is needed other than some starting cells taken painlessly from live animals (which are then placed into a culture medium to grow into larger quantities of meat).  It should be noted that this latter technique doesn’t actually require any genetic modification, although genetic modification may have merit in improving it.  The most important point here is that these methods should decrease the financial and environmental costs of eating meat, and they will likely help to solve the ethical issues regarding the inhumane treatment of animals within agriculture.

We’ve now entered a new niche regarding the domestication of species.  As of a few decades ago, we began domesticating micro-organisms.  Micro-organisms have been modified and utilized to produce insulin for diabetics as well as other forms of medicine such as vaccines, human growth hormone, etc.  Certain forms of bacteria have also been genetically modified to turn cellulose and other plant material directly into hydrocarbon fuels.  This year (2014), E. coli bacteria were genetically modified to turn glucose into pinene (a high-energy hydrocarbon used as a rocket fuel).  In 2013, researchers at the University of California, Davis, genetically engineered cyanobacteria (a.k.a. blue-green algae) by adding particular DNA sequences to their genome which coded for specific enzymes, such that they can use sunlight and the process of photosynthesis to turn CO2 into 2,3-butanediol (a chemical that can be used as a fuel, or to make paint, solvents, and plastics), thus providing another means of turning our overabundant carbon emissions back into fuel.

On a related note, there are also efforts underway to improve the efficiency of certain hydrocarbon-eating bacteria, such as A. borkumensis, in order to clean up oil spills even more effectively.  Imagine one day having the ability to use genetically engineered bacteria to directly convert carbon emissions back into mass-produced fuel, and, if the fuel spills during transport, also having the counterpart capability of cleaning it up most efficiently with another form of genetically engineered bacteria.  These capabilities are being further developed and are only the tip of the iceberg.

In theory, we should also be able to genetically engineer bacteria to decompose many other materials or waste products that ordinarily decompose extremely slowly.  If any of these waste products are hazardous, bacteria could be genetically engineered to break down or transform the waste products into safe and stable compounds.  With these types of solutions we can make many new materials and have a method in line for their proper disposal (if needed).  Additionally, by utilizing some techniques mentioned in the next section, we can also start making more novel materials that decompose via non-genetically-engineered mechanisms.

It is likely that genetically modified bacteria will continue to provide us with many new types of mass-produced chemicals and products. For those processes that do not work effectively (if at all) in bacterial (i.e. prokaryotic) cells, then eukaryotic cells such as yeast, insect cells, and mammalian cells can often be used as a viable option. All of these genetically engineered domesticated micro-organisms will likely be an invaluable complement to the increasing number of genetically modified plants and animals that are already being produced.

Mimetics

In the case of mimetics, scientists are discovering new ways of creating novel materials using a bottom-up approach at the nano-scale, by utilizing some of the self-assembly techniques that natural selection has near-perfected over millions of years.  For example, mollusks form seashells with incredibly strong structural and mechanical properties: their DNA codes for the synthesis of specific proteins, and those proteins bond the raw materials of calcium and carbonate into alternating layers until a fully formed shell is produced.  The pearls of clams are produced through similar techniques.  We could potentially use the same DNA sequence in combination with a scaffold of our choosing such that a similar product is formed with unique geometries, or, through genetic engineering techniques, we could modify the DNA sequence so that it performs the same self-assembly with completely different materials (e.g. silicon, platinum, titanium, polymers, etc.).

By combining the capabilities of scaffolding as well as the production of unique genomic sequences, one can further increase the number of possible nanomaterials or nanostructures, although I’m confident that most if not all scaffolding needs could eventually be accomplished by the DNA sequence alone (much like the production of bone, exoskeleton, and other types of structural tissues in animals).  The same principles can be applied by looking at how silk is produced by spiders, how the cochlear hair cells are produced in mammals, etc.  Many of these materials are stronger, lighter, and more defect-free than some of the best human products ever engineered.  By mimicking and modifying these DNA-induced self-assembly techniques, we can produce entirely new materials with unprecedented properties.
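Since the whole approach above rests on the fact that a DNA sequence deterministically specifies a protein, it may help to see that mapping made explicit.  The sketch below is a toy illustration of my own (using a deliberately tiny subset of the standard genetic code and a hypothetical seven-codon reading frame), not a description of any real shell or silk protein:

```python
# Toy illustration of the genetic code: a DNA sequence read three bases
# (one codon) at a time deterministically specifies a chain of amino acids.
# Only a handful of the 64 standard codons are included here for brevity.
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly", "TGC": "Cys",
    "AAA": "Lys", "GAT": "Asp", "TAA": "STOP",
}

def translate(dna):
    """Translate a DNA coding sequence into a protein, codon by codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "???")
        if amino_acid == "STOP":
            break  # a stop codon terminates the protein chain
        protein.append(amino_acid)
    return protein

# A hypothetical seven-codon open reading frame:
print(translate("ATGTTTGGCTGCAAAGATTAA"))
# ['Met', 'Phe', 'Gly', 'Cys', 'Lys', 'Asp']
```

Swapping even one codon changes the resulting protein, which is exactly the lever genetic engineering pulls when it edits a sequence to get a protein that self-assembles a different material.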

If we realize that even the largest plants and animals use these same nano-scale assembly processes to build themselves, it isn’t hard to imagine using these genetic engineering techniques to effectively grow complete macro-scale consumer products.  This may sound incredibly unrealistic with our current capabilities, but imagine one day being able to grow finished products such as clothing, hardware, tools, or even a house.  There are already people working on these capabilities to some degree (for example using 3D printed scaffolding or other scaffolding means and having plant or animal tissue grow around it to form an environmentally integrated bio-structure).  If this is indeed realizable, then perhaps we could find a genetic sequence to produce almost anything we want, even a functional computer or other device.  If nature can use DNA and natural selection to produce macro-scale organisms with brains capable of pattern recognition, consciousness, and computation (and eventually the learned capability of genetic engineering in the case of the human brain), then it seems entirely reasonable that we could eventually engineer DNA sequences to produce things with at least that much complexity, if not far higher complexity, and using a much larger selection of materials.

Other advantages of using such an approach include the enormous energy savings gained by adopting the naturally selected, economically efficient process of self-assembly (including fewer changes in the forms of energy used, and thus less loss) and a reduction in product-specific manufacturing infrastructure.  That is, whereas we’ve typically made industrial-scale machines individually tailored to produce specific components which are later assembled into a final product, by using DNA (and the proteins it codes for) to do the work for us, we will no longer require nearly as much manufacturing capital, for the genetic engineering capital needed to produce any genetic sequence is far more versatile.

Transcending the Human Species

Perhaps the most important application of genetic engineering will be the modification of our own species.  Many of the world’s problems are caused by sudden environmental changes (many of them anthropogenic), and if we can change ourselves and/or other species biologically in order to adapt to these unexpected and sudden environmental changes (or to help prevent them altogether), then the severity of those problems can be reduced or eliminated.  In a sense, we would be selecting our own species, as well as others, by providing the proper genes to begin with, rather than relying on extremely slow genomic differentiation mechanisms and the greater rates of suffering and loss of life that natural selection normally entails.

Genetic Enhancement of Existing Features

With power over the genome, we may one day be able to genetically increase our life expectancy, for example, by modifying the DNA polymerase-γ (gamma) enzyme in our mitochondria such that it makes fewer errors (i.e. mutations) during DNA replication, by genetically altering the telomeres in our nuclear DNA such that they can maintain their length and handle more mitotic divisions, or by finding other ways to preserve nuclear DNA.  If we also determine which genes lead to certain diseases (as well as any genes that help to prevent them), genetic engineering may be the key to extending the length of our lives, perhaps indefinitely.  It may also be the key to improving the quality of that extended life by replacing the techniques we currently use for health and wellness management (including pharmaceuticals) with perhaps the most efficacious form of preventative medicine imaginable.
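As a rough illustration of the telomere point (a toy model of my own with entirely made-up numbers, not actual cell-biology parameters), one can model a cell’s division budget as a telomere length that shrinks with each division until a senescence threshold is reached; slowing the loss, or lengthening the telomere, directly raises that budget:

```python
# Toy model of telomere shortening and the resulting division limit.
# All numbers here are made up for illustration only.
def divisions_until_senescence(telomere_bp=10_000, loss_per_division=70,
                               senescence_floor=4_000):
    """Count divisions until the telomere would drop below the floor."""
    divisions = 0
    while telomere_bp - loss_per_division >= senescence_floor:
        telomere_bp -= loss_per_division
        divisions += 1
    return divisions

print(divisions_until_senescence())                      # baseline budget: 85
print(divisions_until_senescence(loss_per_division=35))  # halved loss: 171
```

In this toy model, halving the per-division loss roughly doubles the number of divisions a cell lineage can undergo, which is the intuition behind targeting telomere maintenance genetically.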

If we can optimize our brain’s ability to perform neuronal regeneration, reconnection, rewiring, and/or re-weighting based on the genetic instructions that at least partially mediate these processes, this optimization should drastically improve our ability to learn, both by improving the synaptic encoding and consolidation processes involved in memory and by improving the combinatorial operations leading to higher conceptual complexity.  Along the same lines, by increasing the number of pattern recognition modules that develop in the neocortex, or by optimizing their configuration (perhaps by increasing the number of hierarchies), our general intelligence would increase as well, an excellent complement to an optimized memory.  These types of cognitive changes would likely have dramatic effects on how we think and thus on our philosophical beliefs as well.  Religious beliefs are also likely to change, as the psychological comforts provided by certain beliefs may no longer be as effective (if those comforts continue to exist at all), especially as our species continues to phase out non-naturalistic explanations and beliefs as a result of seeing the world from a more objective perspective.

If we are able to manipulate our genetic code in order to improve the mechanisms that underlie learning, then we should also be able to alter our innate abilities through genetic engineering.  For example, what if infants could walk immediately after birth (much like a newborn calf)?  What if infants had adequate motor skills to produce (at least some) spoken language much more quickly?  Infants normally have language acquisition mechanisms which allow them to eventually learn language comprehension and production, but this typically takes a lot of practice and requires their motor skills to catch up before they can utter a single word that they do in fact understand.  Circumventing the learning requirement and the motor-skill developmental lag (at least to some degree) would be a phenomenal evolutionary advancement, and this type of innate enhancement could apply to a large number of different physical skills and abilities.

Since DNA ultimately controls the types of sensory receptors we have, we should eventually be able to optimize these as well.  For example, photoreceptors could be modified such that we would be able to see new frequencies of electromagnetic radiation (perhaps a more optimized range of frequencies, if not a larger range altogether).  Mechanoreceptors of all types could be modified, for example, to hear a different if not larger range of sound frequencies or to increase tactile sensitivity (i.e. touch).  Olfactory or gustatory receptors could also be modified in order to allow us to smell and taste previously undetectable chemicals.  Basically, all of our sensations could be genetically modified and, when combined with the aforementioned genetic modifications to the brain itself, this would give our subjective experience greater and more finely optimized dimensions of perception.

Genetic Enhancement of Novel Features

So far I’ve been discussing how we may be able to use genetic engineering to enhance features we already possess, but there’s no reason we can’t consider using the same techniques to add entirely new features to the human repertoire. For example, we could combine certain genes from other animals such that we can re-grow damaged limbs or organs, have gills to breathe underwater, have wings in order to fly, etc.  For that matter, we may even be able to combine certain genes from plants such that we can produce (at least some of) our own chemical energy from the sun, that is, create at least partially photosynthetic human beings.  It is certainly science fiction at the moment, but I wouldn’t discount the possibility of accomplishing this one day after considering all of the other hybrid and transgenic species we’ve created already, and after considering the possible precedent mentioned in the endosymbiotic theory (where an ancient organism may have “absorbed” another to produce energy for it, e.g. mitochondria and chloroplasts in eukaryotic cells).

Above and beyond these possibilities, we could also potentially create advanced cybernetic organisms.  What if we were able to integrate silicon-based electronic devices (or something more biologically compatible if needed) into our bodies such that the body grows or repairs some of these technologies using biological processes?  If the body were given the proper diet (i.e. whatever materials are needed for the new technological “organ”) and had the proper genetic code to assimilate those materials into entirely new “organs” with advanced technological features (e.g. wireless communication or wireless access to an internet database activated by particular thoughts or another physiological command cue), we might eventually be able to get rid of external interface hardware and peripherals altogether.  It is likely that electronic devices will first become integrated into our bodies through surgical implantation in order to work with our body’s current hardware (including the brain), but having the body actually grow and/or repair these devices using DNA instructions would be the next logical step of innovation, if it is eventually feasible.

Malleable Human Nature

When people discuss complex issues such as social engineering, sustainability, crime reduction, etc., it is often mentioned that there is a fundamental barrier between our current societal state and where we want or need to be, and this barrier is none other than human nature itself.  Many people in power have tried to change human behavior with brute force while operating under the false assumption that human beings are blank slates that can simply learn or be conditioned to behave in any way, without limits.  This denial of human nature (whether implicit or explicit) has led to a lot of needless suffering and to the de-synchronization of biological and cultural evolution.

Humans often think that they can adapt to any cultural change, but we often lose sight of the detrimental power that technology and other cultural inventions and changes can have over our physiological and psychological well-being.  In a nutshell, the speed of cultural evolution can often make us feel like a fish out of water, perhaps better suited to an environment closer to that of our early human ancestors.  Whatever the case, we must embrace human nature and realize that our efforts to improve society (or ourselves) will only have long-term efficacy if we work with human nature rather than against it.  So what can we do if our biological evolution is out of sync with our cultural evolution?  And what can we do if we have no choice but to accept human nature, that is, our (often selfish) biologically-driven motivations and tendencies?  Once again, genetic engineering may provide a solution to many of these previously insoluble problems.  To put it simply, if we can change our genome as desired, then we may be able to not only synchronize our biological and cultural evolution but also change human nature itself in the process.  This change could not only make us feel better adjusted to the modern cultural environment we’re living in, but could also incline us to instinctually behave in ways that are more beneficial to each other and to the world as a whole.

It’s often said that we have selfish genes in some sense, that is, that many if not all of our selfish behaviors (as well as instinctual behaviors in general) are a reflection of the strategy that genes implement in their vehicles (i.e. our bodies) in order for the genes to maintain themselves and reproduce.  That genes possess this kind of strategy does not require us to assume that they are conscious in any way or have actual goals per se, but rather that natural selection simply selects genes that code for mechanisms which best maintain and spread those very genes.  Natural selection tends toward effective self-replicators, and that is why “selfish” genes (in large part) cause many of our behaviors.  Improved reproductive fitness has been the primary result of this strategy, and many of the behaviors and motivations that were most advantageous for accomplishing it are no longer compatible with modern culture, including the long-term goals and greater good that humans often strive for.

Humans no longer exclusively live under the law of the jungle or “survival of the fittest,” because our humanistic drives and their cultural reinforcements have expanded our horizons beyond simple self-preservation or a Machiavellian mentality.  Many humans have tried to propagate principles such as honesty, democracy, egalitarianism, immaterialism, sustainability, and altruism around the world, yet these efforts are often hijacked by our short-sighted sexual and survival-based instinctual motivations to gain mates, power, property, higher social status, etc.  Changing particular genes should also allow us to change these (now) disadvantageous aspects of human nature, and as a result this would completely change how we look at every problem we face.  No longer would we have to say “that solution won’t work because it goes against human nature”, or “the unfortunate events in human history tend to recur in one way or another because humans will always…”, but rather we could ask ourselves how we want or need to be and actually make it so by changing our human nature.  Indeed, if genetic engineering is used to accomplish this, history would no longer have to repeat itself in the ways that we abhor.  It goes without saying that a lot of our behavior can be changed for the better by an appropriate form of environmental conditioning, but for those behaviors that can’t be changed through conditioning, genetic engineering may be the key to success.

To Be or Not To Be?

It seems that we have been given a unique opportunity to combine our ever-growing wealth of experiential data and knowledge with genetic engineering techniques to engineer a social organism that is by far the best adapted to its environment.  We may even one day find ourselves living in a true global utopia, if the barriers of human nature and the de-synchronization of biological and cultural evolution are overcome, and genetic engineering may be the only way of achieving such a goal.  One extremely important issue that I haven’t mentioned until now concerns the ethics of the continued use and development of genetic engineering technology.  There are obviously concerns over whether or not we should even be experimenting with this technology.  There are many reasonable arguments both for and against using it, but I think that as a species we have been driven to manipulate our environment in any way that we are capable of, and this curiosity is a part of human nature itself.  Without genetic engineering, we can’t change any of the negative aspects of human nature; we can only let natural selection run its course to modify our species slowly over time (for better or for worse).

If we do accept this technology, there are other concerns, such as the fact that there are corporations and interested parties that want to use genetic engineering primarily if not exclusively for profit (often at the expense of genuinely universal benefits for our species) and which implement questionable practices like patenting plant and animal food sources in a potentially monopolized future agricultural market.  Perhaps an even graver concern is the potential to patent genes that become a part of the human genome, and the (at least short-term) inequality that would ensue from the wealthier members of society being the primary recipients of genetic human enhancement.  Some people may also use genetic engineering to create new bio-warfare weaponry or find other violent or malicious applications.  Some of these practices could threaten certain democratic or other moral principles, and we need to be extremely cautious with how we as a society choose to implement and regulate this technology.  There are also numerous issues regarding how these technologies will affect the environment and various ecosystems, whether caused by people with admirable intentions or not.  So it is definitely prudent that we proceed with caution and get the public heavily involved with this cultural change so that our society can move forward as responsibly as possible.

As for the feasibility of the theoretical applications mentioned earlier, it will likely be computer simulation and computing power that catalyze the knowledge base and capability needed to realize many of these goals (by decoding the incredibly complex interactions between genes and the environment), and thus these will likely be the primary limiting factor.  Genetic engineering may also involve expanding the DNA components we have to work with, for example, by expanding our base-four system (i.e. four nucleotides to choose from) to a higher-base system through the use of other naturally occurring nucleotides or even UBPs (i.e. “unnatural base pairs”).  If this can be done while still maintaining low rates of base-pair mismatching and adequate genetic information processing rates, we may be able to access previously inaccessible capabilities by increasing the genetic information density of DNA.  And if we can overcome some of the chemical natural selection barriers that were present during abiogenesis and the evolution of DNA (and RNA), and/or change the very structure of DNA itself (as well as the proteins and enzymes required for its implementation), we may be able to produce an entirely new type of genetic information storage and processing system, potentially circumventing many of the limitations of DNA in general and thus creating a vast array of new species (genetically coded by a different nucleic acid or other substance).  This type of “nucleic acid engineering”, if viable, may complement the genetic engineering we’re currently performing on DNA and help us to further accomplish some of the aforementioned goals and applications.
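To make the “information density” point concrete, here is a minimal sketch of the underlying arithmetic in Python: each position in a sequence drawn from an alphabet of N letters can encode log2(N) bits, so each added base pair raises the information carried per base.

```python
import math

def bits_per_base(alphabet_size: int) -> float:
    """Maximum information capacity of one sequence position, in bits."""
    return math.log2(alphabet_size)

print(f"4 letters (A, T, C, G):     {bits_per_base(4):.3f} bits/base")  # 2.000
print(f"6 letters (one UBP added):  {bits_per_base(6):.3f} bits/base")  # ~2.585
print(f"8 letters (two UBPs added): {bits_per_base(8):.3f} bits/base")  # 3.000
```

So a single added base pair would buy roughly a 29% increase in information per base, before accounting for the mismatching and processing-rate constraints just mentioned.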

Lastly, while some of the theoretical applications of genetic engineering that I’ve presented in this post may not sound plausible at all to some, I think it’s extremely important and entirely reasonable (based on historical precedent) to avoid underestimating the capabilities of our species.  We may one day be able to transform ourselves into whatever species we desire, effectively taking us from transhumanism to some perpetual form of conscious evolution and speciation.  What I find most beautiful here is that the evolution of consciousness has actually led to a form of conscious evolution.  Hopefully we will guide this evolution in ways that are most advantageous to our species, and to the entire diversity of life on this planet.

Neuroscience Arms Race & Our Changing World View

At least since the time of Hippocrates, people have recognized that the brain is the physical correlate of consciousness and thought.  Since then, psychology, neuroscience, and several inter-related fields have emerged.  There have been numerous advancements made within the field of neuroscience during the last decade or so, and in that same time frame there has also been an increased interest in the social, religious, philosophical, and moral implications that have precipitated from such a far-reaching field.  Certainly the medical knowledge we’ve obtained from the neurosciences has been the primary benefit of such research efforts: we’ve learned quite a bit more about how the brain works and how it is structured, and ongoing work in neuropathology has led to improvements in diagnosing and treating various mental illnesses.  However, it is the other side of neuroscience that I’d like to focus on in this post: the paradigm shift relating to how we are starting to see the world around us (including ourselves), and how this is affecting our goals as well as how to achieve them.

Paradigm Shift of Our World View

Aside from the medical knowledge we are obtaining from the neurosciences, we are also gaining new perspectives on what exactly the “mind” is.  We’ve come a long way in demonstrating that “mental” or “mind” states are correlated with physical brain states, and there is an ever-growing body of evidence which suggests that these mind states are in fact caused by brain states.  It should come as no surprise, then, that all of our thoughts and behaviors are also caused by these physical brain states.  It is because of this scientific realization that society is currently undergoing an important paradigm shift in terms of our world view.

If all of our thoughts and behaviors are mediated by our physical brain states, then many everyday concepts such as thinking, learning, personality, and decision making can take on entirely new meanings.  To illustrate this point, I’d like to briefly mention the well-known “nature vs. nurture” debate.  The current consensus among scientists is that people (i.e. their thoughts and behavior) are ultimately products of both their genes and their environment.

Genes & Environment

From a neuroscientific perspective, the genetic component is accounted for by noting that genes have been shown to play a very large role in directing the initial brain wiring schema of an individual during embryological development and gestation.  During this time, the brain is developing very basic instinctual behavioral “programs” which are physically constituted by vastly complex neural networks, and the body’s developing sensory organs and systems are also connected to particular groups of these neural networks.  These complex neural networks, which have presumably been naturally selected to benefit the survival of the individual, continue being constructed after gestation and throughout the individual’s ontogenetic development (albeit to a lesser degree over time).

As for the environmental component, this can be further split into two parts: the internal and the external environment.  The internal environment within the brain itself, including various chemical concentration gradients partly mediated by random Brownian motion, constrains gene expression and works alongside the genetic instructions to help guide neuronal growth, migration, and connectivity.  The external environment, consisting of various sensory stimuli, seems to modify this neural construction by providing inputs which may cause the constituent neurons within these networks to change their signal strength, change their action potential threshold, and/or modify their connections with particular neurons (among other possible changes).
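As a toy illustration of this kind of input-driven modification, here is a deliberately simplified Hebbian-style update rule in Python.  This is a pedagogical sketch, not a biologically faithful model: when pre- and post-synaptic activity coincide, the synaptic weight grows, and a small decay term keeps it bounded.

```python
import random

def hebbian_update(weight: float, pre: float, post: float,
                   learning_rate: float = 0.01, decay: float = 0.001) -> float:
    """Strengthen the synapse when pre- and post-synaptic activity coincide;
    the decay term prevents unbounded growth."""
    return weight + learning_rate * pre * post - decay * weight

weight = 0.1
for _ in range(1000):
    pre = random.random()                     # stand-in for presynaptic firing rate
    post = 0.8 * pre + 0.2 * random.random()  # correlated postsynaptic activity
    weight = hebbian_update(weight, pre, post)

print(f"Synaptic weight after correlated stimulation: {weight:.3f}")
```

Real synaptic plasticity involves far richer dynamics (timing-dependent rules, thresholds, neuromodulation), but the sketch captures the basic idea that experience physically re-weights connections.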

Causal Constraints

This combination of genetic instructions and environmental interaction and input produces a conscious, thinking, and behaving being through a large number of ongoing and highly complex hardware changes.  It isn’t difficult to imagine why these insights from neuroscience might modify our conventional views of concepts such as thinking, learning, personality, and decision making.  Prior to these developments over the last few decades, the brain was largely seen as a sort of “black box”, with its internal milieu and functional properties remaining mysterious and inaccessible.  For millennia before that, many people assumed that our thoughts and behaviors were self-caused, or causa sui.  That is, people believed that they themselves (i.e. some causally free “consciousness”, or “soul”, etc.) caused their own thoughts and behavior, as opposed to those thoughts and behaviors being ultimately caused by physical processes (e.g. neuronal activity, chemical reactions, etc.).

Neuroscience (as well as biochemistry and its underlying physics) has shed a lot of light on this long-held assumption and, as it stands, the evidence has shown it to be false.  The brain is ultimately governed by the laws of physics, since no chemical reaction or neural event that physically produces our thoughts, choices, and behaviors has ever been shown to be causally free of these physical constraints.  I will mention that quantum uncertainty or quantum “randomness” (if ontologically random) does provide some possible freedom from strict physical determinism.  However, these findings from quantum physics do not provide any support for self-caused thoughts or behaviors.  Rather, they merely show that those physically constrained thoughts and behaviors may never be completely predictable by physical laws, no matter how much data is obtained.  In other words, our thoughts and behaviors are produced by highly predictable (although not necessarily completely predictable) physical laws and constraints, along with some possible random causal factors.

As a result of these physical causal constraints, the conventional perspective of an individual having classical free will has been shattered.  Our traditional views of human attributes including morality, choice, ideology, and even individualism are continuing to change markedly.  Not surprisingly, many people are uncomfortable with these scientific discoveries, including members of various religious and ideological groups whose views are largely based upon, and thus depend on, presuppositions such as classical free will and moral responsibility.  The evidence accumulating from the neurosciences in fact shows that while people are causally responsible for their thoughts, choices, and behavior (i.e. an individual’s thoughts and subsequent behavior are constituents of a causal chain of events), they are not morally responsible in the sense that they could have chosen to think or behave any differently than they did, for their thoughts and behavior are ultimately governed by physically constrained neural processes.

New World View

Now I’d like to return to what I mentioned earlier and consider how these insights from neuroscience may be drastically modifying how we look at concepts such as thinking, learning, personality, and decision making.  If our brain is operating via these neural network dynamics, then conscious thought appears to be produced by a particular subset of these neural network configurations and processes.  So as we continue to learn how to more directly control or alter these neural network arrangements and processes (above and beyond simply applying electrical potentials to certain neural regions in order to bring memories or other forms of imagery into consciousness, as we’ve done in the past), we should be able to control thought generation from a more “bottom-up” approach.  Neuroscience is definitely heading in this direction, although there is a lot of work to be done before we have any considerable knowledge of and control over such processes.

Likewise, learning seems to consist of a certain type of neural network modification (involving memory), leading to changes in causal pattern recognition (among other things) which result in our ability to more easily achieve our goals over time.  We’ve typically thought of learning as the successful input, retention, and recall of new information, a process we accomplish through the input of environmental stimuli via our sensory organs and systems.  In the future, it may be possible, as with the aforementioned bottom-up thought generation, to physically modify our neural networks to directly implant memories and causal pattern recognition information in order to “learn” without any actual sensory input.  We may even be able to “upload” information in a way that bypasses the typical sensory pathways, potentially allowing us to catalyze the learning process in unprecedented ways.

If we are one day able to more directly control the neural configurations and processes that lead to specific thoughts as well as learned information, then there is no reason that we won’t be able to modify our personalities, our decision-making abilities and “algorithms”, etc.  In a nutshell, we may be able to modify any aspect of “who” we are in extraordinary ways (whether this is a “good” or “bad” thing is another issue entirely).  As we come to learn more about the genetic components of these neural processes, we may also be able to use various genetic engineering techniques to assist with the neural modifications required to achieve these goals.  The bottom line here is that people are products of their genes and environment, and by manipulating both of those causal constraints in more direct ways (e.g. through the use of neuroscientific techniques), we may be able to achieve previously unattainable abilities, perhaps in a relatively minuscule amount of time.  It goes without saying that these methods will also significantly affect our evolutionary course as a species, allowing us to enter new landscapes through our substantially enhanced ability to adapt.  This may be realized by finding ways to improve our intelligence, memory, or other cognitive faculties, effectively giving us the ability to engineer or re-engineer our brains as desired.

Neuroscience Arms Race

We can see that increasing our knowledge and capabilities within the neurosciences has the potential for drastic societal changes, some of which are already starting to be realized.  The impact that these fields will have on how we approach the problem of criminal, violent, or otherwise undesirable behavior cannot be overstated.  Trying to correct these issues by focusing our efforts on the neural or cognitive substrates that underlie them, as opposed to using the less direct and more external means (e.g. social engineering methods) that we’ve relied on thus far, may lead to solutions that are much less expensive and realized far more quickly.

As with any scientific discovery or the technology produced from it, neuroscience has the power to bestow on us both benefits and disadvantages.  I’m reminded of the ground-breaking efforts made within nuclear physics several decades ago, whereby physicists not only gained precious information about subatomic particles (and their binding energies) but also learned how to release enormous amounts of energy through nuclear fusion and fission reactions.  It wasn’t long after these breakthrough discoveries were made before they were used by others to create the first atomic bombs.  Likewise, while our increasing knowledge within neuroscience has the power to help society improve by optimizing our brain function and behavior, it can also be used by various entities to manipulate the populace for unethical reasons.

For example, despite the large number of free-market proponents who claim that the economy need not be regulated by anything other than rational consumers and their choices of goods and services, corporations have clearly increased their use of marketing strategies that take advantage of many humans’ irrational tendencies (whether through “buy one get one free” offers, “sales” on items with artificially inflated prices, etc.).  Politicians and other leaders have been using similar tactics, taking advantage of voters’ emotional vulnerabilities on certain controversial issues that serve as nothing more than an ideological distraction, in order to reduce or eliminate any awareness or rational analysis of the more pressing issues.

There are already research and development efforts being made by these various entities to take advantage of findings within neuroscience so that they can have greater influence over people’s decisions (whether those relate to consumers’ purchases, votes, etc.).  To give an example of some of these R&D efforts, it is believed that MRI (magnetic resonance imaging) or fMRI (functional magnetic resonance imaging) brain scans may eventually be able to reveal useful details about a person’s personality or their innate or conditioned tendencies (including compulsive or addictive tendencies, preferences for certain foods or behaviors, etc.).  This kind of capability (if realized) would allow marketers to maximize how many dollars they can squeeze out of each consumer by optimizing their choices of goods and services and how they are advertised.  We have already seen how purchases made on the internet, if tracked, begin to personalize the advertisements we see during our online experience (e.g. if you buy fishing gear online, you may subsequently notice more advertisements and pop-ups for fishing-related goods and services).  If possible, the information found using these types of “brain probing” methods could be applied to other areas, including that of political decision making.
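The mechanics behind the fishing-gear example are already straightforward to sketch.  Below is a minimal, hypothetical illustration in Python (the item names and purchase data are invented) of item-to-item co-occurrence scoring, one of the simplest techniques underlying purchase-tracking ad personalization:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories (invented data for illustration)
purchase_histories = [
    ["fishing rod", "tackle box", "lures"],
    ["fishing rod", "lures", "waders"],
    ["tackle box", "lures"],
    ["hiking boots", "waders"],
]

# Count how often each pair of items appears in the same customer's history
co_occurrence = Counter()
for history in purchase_histories:
    for a, b in combinations(sorted(set(history)), 2):
        co_occurrence[(a, b)] += 1

def recommend_ads(purchased_item: str, top_n: int = 3) -> list:
    """Rank other items by how often they co-occur with the purchased item."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a == purchased_item:
            scores[b] += count
        elif b == purchased_item:
            scores[a] += count
    return [item for item, _ in scores.most_common(top_n)]

print(recommend_ads("fishing rod"))  # e.g. ['lures', 'tackle box', 'waders']
```

Production systems are vastly more sophisticated, but the principle is the same: tracked behavior feeds a model that ranks what you are most likely to respond to next.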

While these methods derived from the neurosciences may be beneficial in some cases, for instance, by allowing the consumer more automated access to products that they may need or want (which will likely be a selling point used by these corporations to obtain consumer approval of such methods), they will also exacerbate unsustainable consumption and other personally or societally destructive tendencies, and they are likely to further reduce (or eliminate) whatever rational decision-making capabilities we still have left.

Final Thoughts

As we can see, neuroscience has the potential to completely change the way we look at the world, and is already starting to do so.  Further advancements in these fields will likely redefine many of our goals as well as how to achieve them.  They may also allow us to solve many problems that we face as a species, far beyond simply curing mental illnesses or ailments.  The main question that comes to mind is:  Who will win the neuroscience arms race?  Will it be the humanitarians, scientists, and medical professionals who are striving to accumulate knowledge in order to help solve the problems of individuals and societies and to increase their quality of life?  Or will it be the entities trying to accumulate similar knowledge in order to take advantage of human weaknesses for the purposes of gaining wealth and power, thus exacerbating the problems we currently face?