The Open Mind

Cogito Ergo Sum


Is Death Bad For You? A Response to Shelly Kagan


I’ve enjoyed reading and listening to the philosopher Shelly Kagan in debates, lectures, and various articles.  One topic he’s well known for is that of death, specifically the fear of death, and trying to understand the details behind, and justification for, the general attitude people have toward the concept of death.  I wrote a little about the fear of death long ago, but coming across an article of Kagan’s reignited my interest in the topic.  He wrote an article a few years ago in The Chronicle, where he expounds on some of the ontological puzzles related to the concept of death.  I thought I’d briefly summarize the article’s main points and give a response to it here.

Can Death Be Bad For Us?

Kagan begins with the assumption that the death of a person’s body results in the end of that person’s existence.  This is certainly a reasonable assumption, as there’s no evidence to the contrary, that is, no evidence that persons can exist without a living body.  Simple enough.  Then he asks the question: if death is the end of our existence, how can being dead be bad for us?  Some would say that death is particularly bad for the survivors of the deceased, since they miss the person who’s died and the relationship they once had with that person.  But it seems more complicated than that, because we could likewise have an experience where a cherished friend or family member leaves us and goes somewhere so far away that we can be confident we’ll never see that person ever again.

Both the death of that person and the alternative of their leaving forever, never to be contacted again, result in the same loss of relationship.  Yet most people would say that if we knew they had died rather than simply left forever, there’s more to be sad about, in terms of death being bad for them, not simply bad for us.  And this sadness results from more than simply knowing how they died — the process of death itself — which could have been unpleasant, but could also have been entirely benign (such as dying peacefully in one’s sleep).  Similarly, Kagan tells us, the prospect of dying can be unpleasant as well, but, he asserts, this only seems to make sense if death itself is bad for us.

Kagan suggests:

Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. That explanation of death’s badness is known as the deprivation account.

While the deprivation account seems plausible, Kagan thinks that accepting it results in a couple of potential problems.  He argues, if something is true, it seems as if there must be some time when it’s true.  So when would it be true that death is bad for us?  Not now, he says.  Because we’re not dead now.  Not after we’re dead either, because then we no longer exist so nothing can be bad for a being that no longer exists.  This seems to lead to the conclusion that either death isn’t bad for anyone after all, or alternatively, that not all facts are datable.  He gives us another possible example of an undatable fact.  If Kagan shoots “John” today such that John slowly bleeds to death after two days, but Kagan dies tomorrow (before John dies) then after John dies, can we say that Kagan killed John?  If Kagan did kill John, when did he kill him?  Kagan no longer existed when John died so how can we say that Kagan killed John?

I think we could agree with this and say that while it’s true that Kagan didn’t technically kill John, a trivial response to this supposed conundrum is to say that Kagan’s actions led to John’s death.  This seems to solve that conundrum by working within the constraints of language, while highlighting the fact that when we say someone killed X what we really mean is that someone’s actions led to the death of X, thus allowing us to be consistent with our conceptions of existence, causality, killing, blame, etc.

Existence Requirement, Non-Existential Asymmetry, & Its Implications

In any case, if all facts are datable (or at least facts like these), then we should be able to say when exactly death is bad for us.  Can things only be bad for us when we exist?  If so, this is what Kagan refers to as the existence requirement.  If we don’t accept such a requirement — that one must exist in order for things to be bad for us — that produces other problems, like being forced to say, for example, that non-existence could be bad for someone who has never existed but possibly could have.  This seems to be a pretty strange claim to hold to.  So if we refuse to accept that it’s a tragedy for possibly existent people to never come into existence, then we’d have to accept the existence requirement, which I would contend is the more plausible assumption to accept.  But if we do so, then it seems that we have to accept that death isn’t in fact bad for us.

Kagan suggests that we may be able to reinterpret the existence requirement, and he does this by distinguishing between two versions: a modest version, which asserts that something can be bad for you only if you exist at some time or another, and a bold version, which asserts that something can be bad for you only if you exist at the same time as that thing.  Accepting the modest version seems to allow us a way out of the problems posed here, but it too has some counter-intuitive implications.

He illustrates this with another example:

Suppose that somebody’s got a nice long life. He lives 90 years. Now, imagine that, instead, he lives only 50 years. That’s clearly worse for him. And if we accept the modest existence requirement, we can indeed say that, because, after all, whether you live 50 years or 90 years, you did exist at some time or another. So the fact that you lost the 40 years you otherwise would have had is bad for you. But now imagine that instead of living 50 years, the person lives only 10 years. That’s worse still. Imagine he dies after one year. That’s worse still. An hour? Worse still. Finally, imagine I bring it about that he never exists at all. Oh, that’s fine.

He thinks this must be accepted if we accept the modest version of the existence requirement, but how can this be?  If one’s life is shortened relative to what they would have had, this is bad, and gets progressively worse as the life is hypothetically shortened, until a life span of zero is reached, in which case they no longer meet the modest existence requirement and thus can’t have anything be bad for them.  So it’s as if it gets infinitely worse as the potential life span approaches the limit of zero, and then when zero is reached, becomes benign and is no longer an issue.

I think a reasonable response to this scenario is to reject the claim that hypothetically shrinking the life span to zero is suddenly no longer an issue.  What seems to be glossed over in this example is the fact that this is a set of comparisons of one hypothetical life to another hypothetical life (two lives with different non-zero life spans), resulting in a final comparison between one hypothetical life and no life at all (a life span of zero).  This example illustrates whether or not something is better or worse in comparison, not whether something is good or bad intrinsically speaking.  The fact that somebody lived for as long as 90 years or only for 10 years isn’t necessarily good or bad but only better or worse in comparison to somebody who’s lived for a different length of time.

The Intrinsic Good of Existence & Intuitions On Death

However, I would go further and say that there is an intrinsic good to existing or being alive, and that most people would agree with such a claim (and that the strong will to live that most of us possess is evidence of our acknowledging such a good).  That’s not to say that never having lived is bad, but only to say that living is good.  If not living is neither good nor bad but considered a neutral or inconsequential state, then we can hold the position that living is better than not living, even if not living isn’t bad at all (after all it’s neutral, neither good nor bad).  Thus we can still maintain our modest existence requirement while consistently holding these views.  We can say that not living is neither good nor bad, that living 10 years is good (and better than not living), that living 50 years is even better, and that living 90 years is even better yet (assuming, for the sake of argument, that the quality of life is equivalently good in every year of one’s life).  What’s important to note here is that not having lived in the first place doesn’t involve the loss of a good, because there was never any good to begin with.  On the other hand, extending the life span involves increasing the quantity of the good, by increasing its duration.

Kagan seems to agree overall with the deprivation account of why we believe death is bad for us, but notes that some puzzles like those he presented still remain.  I think one of the important things to take away from this article is the illustration that we have obvious limitations in the language that we use to describe our ontological conceptions.  These scenarios and our intuitions about them also seem to show that we all generally accept that living or existence is intrinsically good.  It may also highlight the fact that many people intuit that some part of us (such as a soul) continues to exist after death, such that death can be bad for us after all (since our post-death “self” would still exist).  While the belief in souls is irrational, it may help to explain some common intuitions about death.

Dying vs. Death, & The Loss of An Intrinsic Value

Remember that Kagan began his article by distinguishing between how one dies, the prospect of dying, and death itself.  He asked us: how can the prospect of dying be bad if death itself (which only obtains once we no longer exist) isn’t bad for us?  Well, perhaps we should consider that when people say that death is bad for us, they tend to mean that dying itself is bad for us.  That is to say, the prospect of dying isn’t unpleasant because death is bad for us, but rather because dying itself is bad for us.  If dying occurs while we’re still alive, resulting in one’s eventual loss of life, then dying can be bad for us even if we accepted the bold existence requirement — that something can only be bad for us if we exist at the same time as that thing.  So if the “thing” we’re referring to is our dying rather than our death, this would be consistent with the deprivation account of death, would allow us to put a date (or time interval) on such an event, and would seem to resolve the aforementioned problems.

As for Kagan’s opening question, when is death bad for us?  If we accept my previous response that dying is what’s bad for us, rather than death, then it would stand to reason that death itself isn’t ever bad for us (or doesn’t have to be), but rather what is bad for us is the loss of life that occurs as we die.  If I had to identify exactly when the “badness” that we’re actually referring to occurs, I suppose I would choose an increment of time before one’s death occurs (with an exclusive upper bound set to the time of death).  If time is quantized (as some approaches to quantum gravity suggest), then the smallest meaningful interval of time is the Planck time (roughly 5.4 x 10^-44 seconds).  So I would argue that, at the very least, the last Planck time of our life (if not a longer interval) marks the event or time interval of our dying.

It is this last interval of time ticking away that is bad for us because it leads to our loss of life, which is a loss of an intrinsic good.  So while I would argue that never having received an intrinsic good in the first place isn’t bad (such as never having lived), the loss of (or the process of losing) an intrinsic good is bad.  So I agree with Kagan that the deprivation account is on the right track, but I also think the problems he’s posed are resolvable by thinking more carefully about the terminology we use when describing these concepts.

On Parfit’s Repugnant Conclusion


Back in 1984, Derek Parfit’s work on population ethics led him to formulate a possible consequence of total utilitarianism, namely, what he deemed the Repugnant Conclusion (RC).  For those unfamiliar with the term, Parfit described it as follows:

For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.

To better explain this conclusion, let’s consider a few different populations, A, A+, and B, where the width of the bar represents the relative population and the height represents the average “well-being rating” of the people in that sub-population:

[Figure: bar diagram comparing populations A, A+, and B (bar width = population size, bar height = average well-being); image from http://plato.stanford.edu/entries/repugnant-conclusion/fig2.png]

In population A, let’s say we have 10 billion people with a “well-being rating” of +10 (on a scale of -10 to +10, with negative values indicating a life not worth living).  Now in population A+, let’s say we have all the people in population A with the mere addition of 10 billion people with a “well-being rating” of +8.  According to the argument as Parfit presents it, it seems reasonable to hold that population A+ is better than population A, or at the very least, not worse than population A.  This is believed to be the case because the only difference between the two populations is the mere addition of more people with lives worth living (even if their well-being isn’t as good as that of the people in population A), since it is believed that adding additional lives worth living cannot make an outcome worse when all else is equal.

Next, consider population B, which has the same number of people as population A+, but where every person has a “well-being rating” that is slightly higher than the average “well-being rating” in population A+, and slightly lower than that of population A.  Now if one accepts that population A+ is better than A (or at least not worse), and if one accepts that population B is better than population A+ (since it has a higher average well-being), then one has to accept the conclusion that population B is better than population A (by transitivity: A ≤ A+ < B, therefore A < B).  If this is true then we can take this further and show that a population that is sufficiently large would still be better than population A, even if the “well-being rating” of each person was only +1.  This is the RC as presented by Parfit, and he, along with most philosophers, found it to be unacceptable.  So he worked diligently on trying to solve it, but never succeeded in the way he hoped.  This has since become one of the biggest problems in ethics, particularly in the branch of population ethics.
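To make the totals concrete, here’s a minimal sketch (my own illustration with made-up but representative numbers, not anything from Parfit or the Stanford Encyclopedia entry) of how a purely total-utilitarian tally generates the RC:

```python
# Total utilitarianism: a world's value is just (number of people) x (well-being rating).
def total_wellbeing(size, rating):
    return size * rating

A      = total_wellbeing(10_000_000_000, 10)     # 10 billion people at +10
A_plus = A + total_wellbeing(10_000_000_000, 8)  # A plus 10 billion more at +8
B      = total_wellbeing(20_000_000_000, 9.5)    # 20 billion people at +9.5
Z      = total_wellbeing(2_000_000_000_000, 1)   # 2 trillion people at +1

print(A, A_plus, B, Z)  # 1.0e11 < 1.8e11 < 1.9e11 < 2.0e12
# On the total view, Z (lives "barely worth living") comes out best of all,
# which is exactly the conclusion Parfit found repugnant.
```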

Some of the strategies that have been put forward to resolve the RC include adopting an average principle, a variable value principle, or some kind of critical level principle.  However, all of these supposed resolutions are either fraught with their own problems (if accepted) or they are highly unsatisfactory, unconvincing, or very counter-intuitive.  A brief overview of the argument and the supposed solutions and their associated problems can be found here.

I’d like to respond to the RC argument as well because I think that there are at least a few problems with the premises right off the bat.  The foundation for my rebuttal relies on an egoistic moral realist ethics, based on a goal theory of morality (a subset of desire utilitarianism), which can be summarized as follows:

If one wants X above all else, then one ought to Y above all else.  Since it can be shown that ultimately what one wants above all else is satisfaction and fulfillment with one’s life (or what Aristotle referred to as eudaimonia) then one ought to do above all else all possible actions that will best achieve that goal.  The actions required to best accomplish this satisfaction can be determined empirically (based on psychology, neuroscience, sociology, etc.), and therefore we theoretically have epistemic access to a number of moral facts.  These moral facts are what we ought to do above all else in any given situation given all the facts available to us and via a rational assessment of said facts.

So if one is trying to choose whether one population is better or worse than another, I think that assessment should be based on the same egoistic moral framework, which accounts for all known consequences resulting from particular actions and which implements “altruistic” behaviors precipitated by cultivating virtues that benefit everyone including ourselves (such as compassion, integrity, and reasonableness).  So in the case of evaluating the comparison between population A and population A+ as presented by Parfit, which is better?  Well, if one applies a veil of ignorance (the device most famously formulated by John Rawls, building on the social contract tradition of philosophers such as Kant, Hobbes, Locke, and Rousseau), whereby we would hypothetically choose between worlds not knowing which subpopulation we would end up in, which world ought we to prefer?  It would stand to reason that population A is certainly better than A+ (and better than population B), because one has the highest probability of having a higher level of well-being in that population/society (for any person chosen at random).  This reasoning would then render the RC false, as it only followed from fallacious reasoning (i.e. it is fallacious to assume that adding more people with lives worth living is all that matters in the assessment).
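Below is a minimal sketch (my own illustration, reusing the same made-up ratings as above) of this “random person” criterion: rank worlds by the expected well-being of someone chosen at random from each population.

```python
# Expected well-being of a randomly chosen person in a world described as a
# list of (group_size, well-being rating) pairs.
def expected_wellbeing(groups):
    total_people = sum(n for n, _ in groups)
    return sum(n * r for n, r in groups) / total_people

A      = [(10_000_000_000, 10)]
A_plus = [(10_000_000_000, 10), (10_000_000_000, 8)]
B      = [(20_000_000_000, 9.5)]

for name, world in [("A", A), ("A+", A_plus), ("B", B)]:
    print(name, expected_wellbeing(world))
# A: 10.0, A+: 9.0, B: 9.5 -- a randomly chosen person fares best in A,
# so on this criterion A is preferred and the chain A <= A+ < B never gets started.
```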

Another fundamental flaw that I see in the premises is the assumption that population A+ contains the same population of high well-being individuals as in A with the mere addition of people with a somewhat lower level of well-being.  If the higher well-being subpopulation of A+ has knowledge of the existence of other people in that society with a lower well-being, wouldn’t that likely lead to a decrease in their well-being (compared to those in the A population that had no such concern)?  It would seem that the only way around this is if the higher well-being people were ignorant of those other members of society or if there were other factors that were not equal between the two high-well-being subpopulations in A and A+ to somehow compensate for that concern, in which case the mere addition assumption is false since the hypothetical scenario would involve a number of differences between the two higher well-being populations.  If the higher well-being subpopulation in A+ is indeed ignorant of the existence of the lower well-being subpopulation in A+, then they are not able to properly assess the state of the world which would certainly factor into their overall level of well-being.

In order to properly assess this and to behave morally at all, one needs to use as many facts as are practical to obtain and operate according to those facts as rationally as possible.  It would seem plausible that the better-off subpopulation of A+ would have at least some knowledge of the fact that there exist people with less well-being than themselves and this ought to decrease their happiness and overall well-being when all else is truly equal when compared to A.  But even if the subpopulation can’t know this for some reason (i.e. if the subpopulations are completely isolated from one another), we do have this knowledge and thus must take account of it in our assessment of which population is better than the other.  So it seems that the comparison of population A to A+ as stated in the argument is an erroneous one based on fallacious assumptions that don’t account for these factors pertaining to the well-being of informed people.

Now I should say that if we had knowledge pertaining to the future of both societies we could wind up reversing our preference if, for example, it turned out that population A had a future that was likely going to turn out worse than the future of population A+ (where the most probable “well-being rating” decreased comparatively).  If this was the case, then being armed with that probabilistic knowledge of the future (based on a Bayesian analysis of likely future outcomes) could force us to switch preferences.  Ultimately, the way to determine which world we ought to prefer is to obtain the relevant facts about which outcome would make us most satisfied overall (in the eudaimonia sense), even if this requires further scientific investigation regarding human psychology to determine the optimized trade-off between present and future well-being.

As for comparing two populations that have the same probability for high well-being, yet with different populations (say “A” and “double A”), I would argue once again that one should assess those populations based on what the most likely future is for each population based on the facts available to us.  If the larger population is more likely to be unsustainable, for example, then it stands to reason that the smaller population is what one ought to strive for (and thus prefer) over the larger one.  However, if sustainability is not likely to be an issue based on the contingent facts of the societies being evaluated, then I think one ought to choose the society that has the best chances of bettering the planet as a whole through maximized stewardship over time.  That is to say, if more people can more easily accomplish goals of making the world a better place, then the larger population would be what one ought to strive for since it would secure more well-being in the future for any and all conscious creatures (including ourselves).  One would have to evaluate the societies they are comparing to one another for these types of factors and then make the decision accordingly.  In the end, it would maximize the eudaimonia for any individual chosen at random both in that present society and in the future.

But what if we are instead comparing two populations that both have “well-being ratings” that are negative?  For example what if we compare a population S containing only one person that has a well-being rating of -10 (the worst possible suffering imaginable) versus another population T containing one million people that have well-being ratings of -9 (almost the worst possible suffering)?  It sure seems that if we apply the probabilistic principle I applied to the positive well being populations, that would lead to preferring a world with millions of people suffering horribly instead of a world with just one person suffering a bit more.  However, this would only necessarily follow if one applied the probabilistic principle while ignoring the egoistically-based “altruistic” virtues such as compassion and reasonableness, as it pertains to that moral decision.  In order to determine which world one ought to prefer over another, just as in any other moral decision, one must determine what behaviors and choices make us most satisfied as individuals (to best obtain eudaimonia).  If people are generally more satisfied (perhaps universally once fully informed of the facts and reasoning rationally) in preferring to have one person suffer at a -10 level over one million people suffering at a -9 level (even if it was you or I chosen as that single person), then that is the world one ought to prefer over the other.

Once again, our preference could be reversed if we were informed that the most likely futures of these populations had their levels of suffering reversed or changed markedly.  And if the scenario changed to, say, 1 million people at a -10 level versus 2 million people at a -9 level, our preferred outcomes may change as well, even if we don’t yet know what that preference ought to be (i.e. if we’re ignorant of some facts pertaining to our psychology at the present, we may think we know, even though we are incorrect due to irrational thinking or some missing facts).  As always, the decision of which population or world is better depends on how much knowledge we have pertaining to those worlds (to make the most informed decision we can given our present epistemological limitations) and thus our assessment of their present and most likely future states.  So even if we don’t yet know which world we ought to prefer right now (in some subset of the thought experiments we conjure up), science can find these answers (or at least give us the best shot at answering them).

Substance Dualism, Interactionism, & Occam’s Razor


Recently I got into a discussion with Gordon Hawkes on A Philosopher’s Take about the arguments and objections for substance dualism, that is, the position that there are actually two ontological substances that exist in the world: physical substances and mental (or non-physical) substances.  Here’s the link to part 1, and part 2, of that author’s post series (I recommend taking a look at this group blog to see what he and others have written over there on various interesting topics).  Many dualists would identify this supposed second substance with a soul or the like, but I’m not going to delve into that particular detail within this post.  I’d prefer to focus on the basics of the arguments presented rather than those kinds of details.  Before I dive into the topic, I want to mention that Hawkes was by no means making an argument for substance dualism; rather, he was merely pointing out some flaws in common arguments against substance dualism.  Now that that’s been said, I’m going to get to the heart of the topic, but I will also be providing evidence and arguments against substance dualism.  The primary point raised in the first part of that series was that just because neuroscience is continuously finding correlations between particular physical brain states and particular mental states, this doesn’t mean that these empirical findings show that dualism is necessarily false — since some forms of dualism seem to be entirely compatible with these empirical findings (e.g. interactionist dualism).  So the question ultimately boils down to whether or not mental processes are identical with, or metaphysically supervenient upon, the physical processes of the brain (if so, then substance dualism is necessarily false).

Hawkes talks about how the argument from neuroscience (as it is sometimes referred to) is fallacious because it is based on the mistaken belief that correlation (between mental states and brain states) is equivalent to identity or supervenience of mental and physical states.  Since this isn’t the case, one can’t rationally use the neurological correlation to disprove (all forms of) substance dualism.  While I agree with this, that is, that the argument from neuroscience can’t be used to disprove (all forms of) substance dualism, it is nevertheless strong evidence that a physical foundation exists for the mind, and it also provides evidence against all forms of substance dualism that posit that the mind can exist independently of the physical brain.  At the very least, it shows that the prior probability of minds existing without brains is very low.  This would seem to suggest that any supposed mental substance is necessarily dependent on a physical substance (so disembodied minds would be out of the question for the minimal substance dualist position).  Even more damning for substance dualists, though, is the fact that since the argument from neuroscience suggests that minds can’t exist without physical brains, this would mean that prior to brains evolving in any living organisms within our universe, at some point in the past there weren’t any minds at all.  This in turn would suggest that the second substance posited by dualists isn’t conserved the way the physical substances we know about are conserved (as per the law of conservation of mass and energy).  Rather, this second substance would presumably have had to come into existence ex nihilo once some subset of the universe’s physical substances took on a particular configuration (i.e. living organisms that eventually evolved a brain complex enough to afford mental experiences/consciousness/properties).  Once all the brains in the universe disappear in the future (after the heat death of the universe guarantees such a fate), this second substance will once again disappear from our universe.

The only way around this (as far as I can tell) is to posit that the supposed mental substance had always existed and/or will always continue to exist, but in an entirely undetectable way, somehow detached from any physical substance (a position that seems hardly defensible given the correlation argument from neuroscience).  Since our prior probability for any hypothesis is based on all our background knowledge, and since the only substance we can be sure exists (a physical substance) has been shown to consistently abide by conservation laws (within the constraints of general relativity and quantum mechanics), it is more plausible that any other ontological substance would likewise be conserved rather than not conserved.  If we had evidence to the contrary, that would change the overall consequent probability, but without such evidence, we only have data points from one ontological substance, and it appears to follow conservation laws.  For this reason alone, it is less likely that a second substance exists at all if it isn’t itself conserved as the physical substance is.

Beyond that, the argument from neuroscience also provides at least some evidence against interactionism (the idea that the mind and brain can causally interact with each other in both causal directions), and interactionism is something that substance dualists would likely need in order to have any reasonable defense of their position at all.  To see why this is true, one need only recognize that the correlates of consciousness found within neuroscience consist of instances of physical brain activity that are observed prior to the person’s conscious awareness of any experience, intentions, or willed actions produced by said brain activity.  For example, studies have shown that when a person makes a conscious decision to do something (say, to press one of two possible buttons placed in front of them), there are neurological patterns that can be detected prior to their awareness of having made a decision, and so these patterns can be used to correctly predict which choice the person will make even before they are aware of having decided!  I would say that this is definitely evidence against interactionism, because we have yet to find any cases of mental experiences occurring prior to the brain activity that is correlated with them.  We’ve only found evidence of brain activity preceding mental experiences, never the other way around.  If the mind were made from a different substance, existing independently of the physical brain (even if correlated with it), and able to causally interact with the physical brain, then it seems reasonable to expect that we should be able to detect and confirm instances of mental processes/experiences occurring prior to correlated changes in physical brain states.  Since this hasn’t been found in the plethora of brain studies performed thus far, the prior probability of interactionism being true is exceedingly low.  Additionally, the conservation of mass and energy that we observe in our universe (as well as the laws of physics in general) also challenges the possibility of any means of causal energy transfer between a mind and a brain, in either direction.  For the only means of causal interaction we’ve observed thus far in our universe is by way of energy/momentum transfer from one physical particle/system to another.  If a mind is non-physical, then by what means can it interact at all with a brain, or vice versa?

The second part of the post series from Hawkes talked about Occam’s razor and how it’s been applied in arguments against dualism.  Hawkes argues that even though one ontological substance is less complex and otherwise preferred over two substances (when all else is equal), Occam’s razor apparently isn’t applicable in this case because physicalism has been unable to adequately address what we call a mind, mental properties, etc.  My rebuttal to this point is that dualism doesn’t adequately address what we call a mind, mental properties, etc., either.  In fact, it offers no more explanatory power than physicalism does, because nobody has proposed how it could do so.  That is, nobody has yet demonstrated (as far as I know) what any possible mechanisms would be for this new substance to instantiate a mind, how this non-physical substance could possibly interact with the physical brain, etc.  Rather, it seems to have been posited out of an argument from ignorance and incredulity, which is a logical fallacy.  Since physicalism hasn’t yet provided a satisfactory explanation for what some call the mind and mental properties, it is therefore supposed by dualists that a second ontological substance must exist that does explain or account for it adequately.

Unfortunately, because of the lack of any proposed mechanism by which the second substance adequately accounts for the phenomena, one could simply replace the term “second substance” with “magic” and be on the same epistemic footing.  It is therefore an argument from ignorance to presume that a second substance exists, for the sole reason that nobody has yet demonstrated how the first substance can fully explain what we call mind and mental phenomena.  We’ve seen throughout history that phenomena once attributed to magic or the supernatural were later accounted for by physical explanations, which makes the prior probability extraordinarily high that this will also prove to be the case for the phenomena of the mind and mental properties.  As a result, I think that Occam’s razor is applicable in this case, because I believe it is always applicable to an argument from ignorance that’s compared to an (even incomplete) argument from evidence.  Since physicalism accounts for many aspects of mental phenomena (such as the neuroscience correlations, etc.), dualism needs to be supported by at least some proposed mechanism (that is falsifiable) in order to nullify the application of Occam’s razor.

Those are my thoughts on the topic for now.  I did think that Hawkes made some valid points in his post series — such as the fact that correlation doesn’t equal identity or supervenience, and also the fact that Occam’s razor is only applicable under particular circumstances (such as when both hypotheses explain the phenomena equally well, or equally poorly, with one containing a superfluous ontology).  However, I think that overall the arguments and evidence against substance dualism are strong enough to eliminate any reasonable justification for supposing that dualism is true (not that Hawkes was defending dualism, as he made clear at the beginning of his post), and I also believe that both physicalism and dualism explain the phenomena equally well, such that Occam’s razor is applicable (since dualism doesn’t seem to add any explanatory power to that already provided by physicalism).  So even though correlation doesn’t equal identity or supervenience, the arguments and evidence from neuroscience and physics challenge the possibility of any interactionism between the physical and the supposed non-physical substance, and they challenge the existence of the second substance in general (due to its apparent lack of conservation over time, among other reasons).

Neurological Configuration & the Prospects of an Innate Ontology


After a brief discussion on another blog pertaining to whether or not humans possess some kind of an innate ontology or other forms of what I would call innate knowledge, I decided to expand on my reply to that blog post.

While I agree that at least most of our knowledge is acquired through learning, specifically through the acquisition and use of memorized patterns of perception (as this is generally how I would define knowledge), I also believe that there are at least some innate forms of knowledge, including some that would likely result from certain aspects of our brain’s innate neurological configuration and implementation strategy.  This proposed form of innate knowledge would seem to bestow a foundation for later acquiring the bulk of our knowledge that is accomplished through learning.  This foundation would perhaps be best described as a fundamental scaffold of our ontology and thus an innate aspect that our continually developing ontology is based on.

My basic contention is that the hierarchical configuration of neuronal connections in our brains is highly analogous to the hierarchical relationships utilized to produce our conceptualization of reality.  In order for us to make sense of the world, our brains seem to fracture reality into many discrete elements, properties, concepts, propositions, etc., which are all connected to each other through various causal relationships or what some might call semantic hierarchies.  So it seems plausible, if not likely, that the brain accomplishes a fundamental aspect of our ontology by utilizing an innate hardware schema that involves neurological branching.

As the evidence in the neurosciences suggests, it certainly appears that our acquisition of knowledge through learning what those discrete elements, properties, concepts, propositions, etc., are, involves synaptogenesis followed by pruning, modifying, and reshaping a hierarchical neurological configuration, in order to end up with a more specific hierarchical neurological arrangement, one that more accurately correlates with the reality we are interacting with and learning about through our sensory organs.  Since the specific arrangement that eventually forms couldn’t have been entirely coded for in our DNA (due to its extremely high level of complexity and information density), it ultimately had to be fine-tuned to this level of complexity after its initial pre-sensory configuration developed.  Nevertheless, the DNA sequences that were naturally selected for to produce the highly capable brains of human beings (as opposed to the DNA that guides the formation of the brain of a much less intelligent animal) clearly encode more effective hardware implementation strategies than those of our evolutionary ancestors.  These naturally selected neurological strategies seem to control what particular types of causal patterns the brain is theoretically capable of recognizing (including some upper limit of complexity), and they also seem to control how the brain stores and organizes these patterns for later use.  So overall, my contention is that these naturally selected strategies are themselves a type of knowledge, because they seem to provide the very foundation for our initial ontology.

Based on my understanding, after many of the initial activity-independent mechanisms for neural development have occurred in some region of the developing brain such as cellular differentiation, cellular migration, axon guidance, and some amount of synapse formation, then the activity-dependent mechanisms for neuronal development (such as neural activity caused by the sensory organs in the process of learning), finally begin to modify those synapses and axons into a new hierarchical arrangement.  It is especially worth noting that even though much of the synapse formation during neural development is mediated by activity-dependent mechanisms, such as the aforementioned neural activity produced by the sensory organs during perceptual development and learning, there is also spontaneous neural activity forming many of these synapses even before any sensory input is present, thus contributing to the innate neurological configuration (i.e. that which is formed before any sensation or learning has occurred).

Thus, the subsequent hierarchy formed through neural/sensory stimulation via learning appears to begin from a parent hierarchical starting point based on neural developmental processes that are coded for in our DNA as well as synaptogenic mechanisms involving spontaneous pre-sensory neural activity.  So our brain’s innate (i.e. pre-sensory) configuration likely contributes to our making sense of the world by providing a starting point that reflects the fundamental hierarchical nature of reality that all subsequent knowledge is built off of.  In other words, it seems that if our mature conceptualization of reality involves a very specific type of hierarchy, then an innate/pre-sensory hierarchical schema of neurons would be a plausible if not expected physical foundation for it (see Edelman’s Theory of Neuronal Group Selection within this link for more empirical support of these points).

Additionally, if the brain’s wiring has evolved in order to see dimensions of difference in the world (unique sensory/perceptual patterns that is, such as quantity, colors, sounds, tastes, smells, etc.), then it would make sense that the brain can give any particular pattern an identity by having a unique schema of hardware or unique use of said hardware to perceive such a pattern and distinguish it from other patterns.  After the brain does this, the patterns are then arguably organized by the logical absolutes.  For example, if the hardware scheme or process used to detect a particular pattern “A” exists and all other patterns we perceive have or are given their own unique hardware-based identity (i.e. “not-A” a.k.a. B, C, D, etc.), then the brain would effectively be wired such that pattern “A” = pattern “A” (law of identity), any other pattern which we can call “not-A” does not equal pattern “A” (law of non-contradiction), and any pattern must either be “A” or some other pattern even if brand new, which we can also call “not-A” (law of the excluded middle).  So by the brain giving a pattern a physical identity (i.e. a specific type of hardware configuration in our brain that when activated, represents a detection of one specific pattern), our brains effectively produce the logical absolutes by nature of the brain’s innate wiring strategy which it uses to distinguish one pattern from another.  So although it may be true that there can’t be any patterns stored in the brain until after learning begins (through sensory experience), the fact that the DNA-mediated brain wiring strategy inherently involves eventually giving a particular learned pattern a unique neurological hardware identity to distinguish it from other stored patterns, suggests that the logical absolutes themselves are an innate and implicit property of how the brain stores recognized patterns.
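As a toy illustration of that last point (my own sketch, not anything from the neuroscience literature), one can think of each learned pattern being assigned a unique, never-reused “hardware identity”; once identities work that way, the three logical absolutes fall out automatically:

```python
# Toy model: patterns (here, strings) get a unique identity on first "detection".
pattern_ids = {}

def identity_of(pattern):
    if pattern not in pattern_ids:
        pattern_ids[pattern] = len(pattern_ids)  # a new, never-reused identity
    return pattern_ids[pattern]

a = identity_of("A")
b = identity_of("B")

assert identity_of("A") == a                     # law of identity: A = A
assert b != a                                    # non-contradiction: not-A is not A
assert all(identity_of(p) == a or identity_of(p) != a
           for p in ["A", "B", "C"])             # excluded middle: everything is A or not-A
```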

In short, if it is true that any and all forms of reasoning as well as the ability to accumulate knowledge simply requires logic and the recognition of causal patterns, and if the brain’s innate neurological configuration schema provides the starting foundation for both, then it would seem reasonable to conclude that the brain has at least some types of innate knowledge.

The Kalam Cosmological Argument for God’s Existence


Previously, I’ve written briefly about some of the cosmological arguments for God.  I’d like to expand on this topic, and I’ll begin doing so in this post by analyzing the Kalam Cosmological Argument (KCA), since it is arguably the most well known version of the argument, which can be described with the following syllogism:

(1) Everything that begins to exist has a cause;

(2) The universe began to exist;

Therefore,

(3) The universe has a cause.

The conclusion of this argument is often expanded by theists to suggest that the cause must be supernaturally transcendent, immaterial, timeless, spaceless, and perhaps most importantly, this cause must itself be uncaused, in order to avoid the causal infinite regress implied by the KCA’s first premise.

Unfortunately this argument fails for a number of reasons.  The first thing that needs to be clarified is the definition of the terms used in these premises.  What is meant by “everything”, or “begins to exist”?  “Everything” in this context does imply that there is more than one of these things, which means that we are referring to a set of things, indeed the set of all things in this case.  The set of all things implied here apparently refers to all matter and energy in the universe, specifically the configuration of any subset of all matter and/or energy.  Then we have the second element in the first premise, “begins to exist”, which would thus refer to when the configuration of some set of matter and/or energy changes to a new configuration.  So we could rewrite the first premise as “any configuration of matter and/or energy that exists at time T and which didn’t exist at the time immediately prior to time T (which we could call T’) was a result of some cause”.  If we want to specify how “immediately prior” T’ is to T, we could use the smallest unit of time that carries any meaning per the laws of physics, which would be the Planck time (roughly 10^-43 seconds), the time it takes the fastest entity in the universe (light) to traverse the shortest distance in the universe (the Planck length).
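For reference, that definition of the Planck time can be written out explicitly (a standard physics relation, not part of the KCA itself):

\[
  t_P \;=\; \frac{\ell_P}{c} \;=\; \sqrt{\frac{\hbar G}{c^{5}}} \;\approx\; 5.4 \times 10^{-44}\ \mathrm{s},
  \qquad
  \ell_P \;=\; \sqrt{\frac{\hbar G}{c^{3}}} \;\approx\; 1.6 \times 10^{-35}\ \mathrm{m}.
\]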

Does Everything Have a Cause?

Now that we’ve more clearly defined what is meant by the first premise, we can address whether or not that premise is sound.  Based on the nature of causality as we currently understand it, it seems perfectly reasonable that there is indeed some cause that drives the changes in the configurations of sets of matter and energy that we observe in the universe, most especially in the everyday world that we observe.  On a most fundamental physical level, we would typically say that the cause of these configuration changes is described by the laws of physics.  Particles and waves all behave as they do, very predictably changing from one form into another based on these physical laws or consistent patterns that we’ve discovered.  However, depending on the interpretation of quantum mechanics used, there may be acausal quantum processes happening, for example, as virtual particle/anti-particle pairs pop into existence without any apparent deterministic path.  That is, unless there are non-local hidden variables that we are unaware of which guide/cause these events, there don’t appear to be any deterministic or causal driving forces behind certain quantum phenomena.  At best, the science is inconclusive as to whether all phenomena have causes, and thus one can’t claim certainty for the first premise of the KCA.  Unless we find a way to determine that quantum mechanics is entirely deterministic, we simply don’t know that matter and energy are fundamentally causally connected in the way that objects we observe at much larger scales are.

The bottom line here is that quantum indeterminism carries with it the possibility of acausality until proven otherwise, thus undermining premise one of the KCA with the empirical evidence found within the field of quantum physics.  As such, it is entirely plausible that if the apparent quantum acausal processes are fundamental to our physical world, the universe itself may have arisen from said acausal processes, thus undermining premise two as well as the conclusion of the KCA.  We can’t conclude that this is the case, but it is entirely possible and is in fact plausible given the peculiar quantum phenomena we’ve observed thus far.

As for the second premise, if we apply our clarified definition of “began to exist” introduced in the first premise to the second, then “the universe began to exist” would mean more specifically that “there was once a time (T’) when the universe didn’t exist and then at time T, the universe did exist.”  This is the most obviously problematic premise, at least according to the evidence we’ve found within cosmology.  The Big Bang theory, which most people are familiar with and which is the prevailing cosmological model for the earliest known moments of the universe, implies that spacetime itself had its earliest moment roughly 13.8 billion years ago and has continued to expand and transform ever since, until reaching the state that we see it in today.  Many theists try to use this as evidence for the universe being created by God.  However, since time itself was non-existent prior to the Big Bang, it is not sensible to speak of any creation event happening prior to this moment, since there was no time for such an event to happen within.  This presents a big problem for the second premise of the KCA, because in order for the universe to “begin to exist”, it is implied that there was a prior time in which it didn’t exist, and this goes against the Big Bang model in which no time existed prior to that point.

Is Simultaneous Causation Tenable?

One way that theologians and some philosophers have attempted to circumvent this problem is to invoke the concept of simultaneous causation, that is, that (at least some) causes and effects can happen simultaneously.  Thus, if the cause of the universe happened at the same time as the effect (the Big Bang), then the cause of the universe (possibly “creation”) did happen in time, and thus the problem is said to be circumvented.

The concept of simultaneous causation has been proposed for some time by philosophers, most notably Immanuel Kant, and by others since.  However, there are a few problems with simultaneous causation that I’ll point out briefly.  For one, there don’t appear to be any actual examples in our universe of simultaneous causation occurring.  Kant did propose what he believed to be a couple of examples of simultaneous causation to support the idea.  One example he gave was a scenario where the effect of a heated room supposedly occurs simultaneously with the fire in the fireplace that caused it.  Unfortunately, this example fails, because it actually takes time for thermal energy to make its way from the fire in the fireplace to any air molecules in the room (even those that are closest to the fire).  As combustion is occurring and oxygen is combining with hydrocarbon fuels in the wood to produce carbon dioxide and a lot of heat, that heat takes time to propagate.  As the carbon dioxide is being formed, and the molecule is assuming an energetically favorable state, there is still a lag between this event and any heat given off to nearby molecules in the room.  In fact, no physical process can propagate faster than the speed of light according to the principles of relativity, so this refutes any other example analogous to this one.  The fastest way a fire can propagate heat is through radiation (as opposed to conduction or convection), and we know that the propagation of radiation is limited by the speed of light.  Even pulling a solid object causes it to stretch (at least temporarily), so the end of the object farthest away from where it is being pulled will actually remain at rest for a short time while the other end of the object is first pulled in a particular direction.  It isn’t until after a short time lag that the rest of the object “catches up” with the end being pulled, so even with mechanical processes involving solid materials, we never see instantaneous speeds of causal interaction.
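To put a rough number on that delay (my own back-of-the-envelope illustration, not Kant’s example or anything from the original argument): even for the fastest channel, radiation, the warming of air a few meters from the fire lags the combustion by at least the light-travel time,

\[
  t \;=\; \frac{d}{c} \;\approx\; \frac{3\ \mathrm{m}}{3 \times 10^{8}\ \mathrm{m/s}} \;=\; 10^{-8}\ \mathrm{s},
\]

and for conduction and convection the lag is vastly longer, so the cause strictly precedes the effect.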

Another example Kant gave was one in which a lead ball lies on a cushion and simultaneously causes the effect of an indentation or “hollow” in the cushion.  Again, in order for the ball to cause a dent in the cushion in the first place, it had to be moved into the cushion, which took some finite amount of time.  As with the previous example, relativity prevents any simultaneous causation of this sort.  We can see this by noting that at the molecular level, as the electron orbitals of the lead ball approach those of the cushion, the change in the strength of the electric field between the electron orbitals of the two objects can’t travel faster than the speed of light, and thus as the ball moves toward the cushion and eventually “touches” it, the increased strength of the repulsion takes some amount of time to be realized.

One last example I’ve seen given by defenders of simultaneous causation is that of a man sitting down, thus forming a lap.  That is, as the man sits down and his knees bend, a lap is created in the process, and we’re told that the man sitting down is the cause and the formation of the lap is the simultaneous effect.  Unfortunately, this example also fails, because the man sitting down and the lap being formed are really nothing more than two different descriptions of the same event.  One could say that the man formed a lap, or one could say that the man sat down.  Clearly the man’s intention was most likely to sit down rather than to form a lap, but nevertheless forming a lap was incidental to the process of sitting down.  Both are describing different aspects of the same event, and thus there aren’t two distinct causal relata in this example.  In the previous examples mentioned (the fire and heated room, or the ball denting a cushion), if there are states described that occur simultaneously even after taking relativity into account, they can likewise be shown to be merely two different aspects or descriptions of the same event.  Even if we could grant that simultaneous causation were possible (and so far we haven’t seen any defensible examples in the real world), how could we assign causal priority to determine which was the cause and which was the effect?  In terms of the KCA, one could ask: if the cause (C) of the universe occurred at the same time as the effect (E), the existence of the universe, how could one determine that C caused E rather than the other way around?  One has to employ circular argumentation in order to do so, by invoking other metaphysical assumptions in the terms that are being defined, which simply begs the question.

Set Theory & Causal Relations

Another problem with the second premise of the KCA is that even if we ignore the cosmological models that refute it, and even ignore the problematic concept of simultaneous causation altogether, there is an implicit assumption that the causal properties of the “things” in the universe also apply to the universe as a whole.  This is fallacious because one can’t assume that the properties of members of a set or system necessarily apply to the system or entire set as a whole.  Much work has been done within set theory to show that this is the case, and thus while some properties of the members or subsets of a system can apply to the whole system, not all properties necessarily do (in fact some properties applying to both members of a set and to the set as a whole can lead to logical contradictions or paradoxes).  One of the properties that is being misapplied here involves the concept of “things” in general.  If we try to consider the universe as a “thing” we can see how this is problematic by noting that we seem to define and conceptualize “things” with causal properties as entities or objects that are located in time and space (that’s an ontology that I think is pretty basic and universal).  However, the universe as a whole is the entirety of space and time (i.e. spacetime), and thus the universe as a whole contains all space and time, and thus can’t itself (as a whole) be located in space or time.

Since the universe appears to be composed of all the things we know about, one might say that the universe is located within “nothing” at all, if that’s at all intelligible to think of.  Either way, the universe as a whole doesn’t appear to be located in time or space, and thus it isn’t located anywhere at all.  Thus, it technically isn’t a “thing” at all, or at the very least, it is not a thing that has any causal properties of its own, since it isn’t located in time or space so as to have causal relations with other things.  Even if one insists on calling it a thing, despite the problems listed here, we are still left with the problem that we can’t assume that causal principles found within the universe apply to the universe as a whole.  So for a number of reasons, premise two of the KCA fails.  And since both premises fail, the conclusion is no longer supported.  So even if the universe does in fact have a cause, in some way unknown to us, the KCA doesn’t successfully support such a claim with its premises.

Is the Kalam Circular?

Yet another problem that Dan Barker and others have pointed out involves the language used in the first premise of the KCA.  The clause, “everything that begins to exist”, implies that reality can be divided into two sets: items that begin to exist (BE) and items that do not begin to exist (NBE).  In order for the KCA to work in arguing for God’s existence, the NBE set can’t be empty.  Even more importantly, it must accommodate more than one item to avoid simply being a synonym for God, for if God is the only possible item within NBE, then the premise “everything that begins to exist has a cause” is equivalent to “everything except God has a cause”.  This builds God into the very premise of the argument that is supposed to prove God’s existence, and thus begs the question.  It should be noted that just because the NBE set must accommodate more than one possible item, this doesn’t entail that it must actually contain more than one item.  This specific problem with the KCA could be resolved if one could first show that there are multiple possible NBE candidates, then show that of those candidates only one is valid, and finally show that this candidate is in fact some personal creator, i.e., God.  If it can’t be shown that NBE can accommodate more than one item, then the argument is circular.  Moreover, if the only candidate for NBE is God, then the second premise “The universe began to exist” simply reduces to “The universe is not God”, which assumes what the argument is trying to prove.  Thus if the NBE set is synonymous with God, then the Kalam can be reduced to:

(1) Everything except God has a cause;

(2) The universe is not God;

Therefore,

(3) The universe has a cause.

As we can see, this syllogism is logically valid (though it is only sound if its premises are true, which is open to debate), but it is entirely useless as an argument for God’s existence.  Furthermore, regarding the NBE set, one must ask: where do theists obtain the idea that this NBE set exists?  That is, by what observations and/or arguments is the possibility of beginningless objects justified?  We don’t find any such observations in science.  It is certainly possible that the universe itself never began, and the concept of a “beginningless universe” is entirely consistent with many of the eternal cosmological models that have been proposed, but we don’t have observations to support this, at least not at this time (and if it were true, the KCA would be invalidated by refuting premise two in yet another way).  Other than the universe itself potentially being an NBE (which is plausible, and would be consistent with the law of conservation of mass and energy and/or the Quantum Eternity Theorem, though it hasn’t been empirically demonstrated as of yet), there don’t appear to be any other possible NBEs, nor any observations and/or arguments to justify proposing that any NBEs exist at all.

The KCA Fails

As we can see, the Kalam Cosmological Argument fails for a number of reasons, and is therefore unsuccessful in arguing for the existence of God.  So even though it may very well be the case that some god exists and did in fact create the universe, the KCA fails to support such a claim.

Here’s an excellent debate between the cosmologist Sean Carroll and the Christian apologist William Lane Craig which illustrates some of the problems with the KCA, specifically in terms of evidence found within cosmology (or lack thereof).  It goes without saying that Carroll won the debate by far, though he could certainly have raised more points in his rebuttals than he did.  Nevertheless, it was entertaining and a nice civil debate with good points presented on both sides.  Here’s another link to Carroll’s post debate reflections on his blog.

Christianity: A Theological & Moral Critique


Previously I’ve written several posts concerning religion, including many of the contributing factors that led to the development and perpetuation of religion (among them, our cognitive biases and other psychological driving forces) and the various religious ideas contained within.  I’ve also written a little about some of the most common theological arguments for God, as well as the origin and apparent evolution of human morality.  As a former Christian, I’ve also been particularly interested in the religion of Christianity (or to be more accurate, the various Christianities that have existed throughout the last two millennia), and as a result I’ve previously written a few posts relevant to that topic including some pertaining to historical critical analyses.  In this post, I’d like to elaborate on some of the underlying principles and characteristics of Christianity, although some of these characteristics will indeed also apply to the other Abrahamic religions, and indeed to other non-Abrahamic religions as well.  Specifically, I’d like to examine some of the characteristics of a few of the most primary religious tenets and elements of Christian theology, with regard to some of their resulting philosophical (including logical and ethical) implications and problems.

What Do Christians Believe?

There are a number of basic beliefs that all Christians seem to have in common.  They believe in a God that is all-loving, omnibenevolent, omniscient, omnipotent, and omnipresent (yet somehow transcendent of any actual physical/materialistic omnipresence).  This is a God that they must love, a God that they must worship, and a God that they must also fear.  They believe that their God has bestowed upon them a set of morals and rules that they must follow, although many of the rules originating from their religious predecessor, Judaism (as found in the Old Testament of the Bible), are often ignored and/or seen as superseded by a “New Covenant” created through their messiah, believed to be the son of God, namely Jesus Christ.  Furthermore, the sacrifice of this purported messiah is believed to have saved mankind from original sin (which will be discussed further later in this post), and belief in Jesus Christ as the spiritual savior of mankind is believed by Christians to confer on them an eternal life in heaven after they die.  Whichever list of rules a particular sect or denomination of Christianity accepts, along with its own unique interpretation of those rules (and of the rest of their scripture for that matter), those rules and scriptural interpretations are to be followed without question as a total solution for how to conduct oneself and live one’s life.  If they do so, they will be granted a reward of eternal paradise in heaven.  If they do not, they will suffer the wrath of God and be punished severely by an eternity of torture in hell.  Let’s examine some of these specific attributes of the Christian God.

Omnibenevolence
.
This God is supposedly all-loving and omnibenevolent.  Yet if we simply look at the Old Testament of the Bible we can see numerous instances of this God implementing, ordaining, or condoning: theft, rape, slavery (and the beating of slaves), sexism, discrimination based on sexual orientation (including the murder of homosexuals), child abuse, filicide, murder, genocide, cannibalism, and, most noteworthy of all, vicarious redemption, though this last example may not be as obviously immoral as the rest, which is why I discuss it in more detail later in this post.  Granted, some of these acts were punishments for disobedience, but this is hardly an excuse worth defending at all, let alone on any moral grounds.  Furthermore, many of the people harmed in this way were innocent, some of them children, who had no responsibility for what their parents did, nor for the society they were brought up in and the values bestowed upon them therein.
.
Most Christians who are aware of these morally reprehensible actions make excuses for them, including: the need to examine those actions within the cultural or historical “context” in which they occurred (thus implying that their God’s morals aren’t objective or culturally independent), the claim that whatever their God does is considered moral and good by definition (which either fails to address the Euthyphro Dilemma or fails to meet a non-arbitrary or rational standard of goodness), and/or the claim that some or all of the violent and horrible things commanded in the Old Testament were eventually superseded or nullified by a “New Covenant” with a new set of morals.  In any case, we mustn’t forget about the eternal punishment for those that do not follow God’s wishes.  Does a God that threatens his most prized creation with eternal torture — the worst fate imaginable — with no chance of defense or forgiveness after death, really possess omnibenevolence and an all-loving nature?  Some people may have a relatively easy life in which circumstances have encouraged living in line with Christianity, but many are not afforded those circumstances, and thus there is no absolute fairness or equality for all humans in “making this choice”.
.
Another point worth considering is why the Christian God didn’t just skip creating the physical world altogether in the first place.  Didn’t God have the power to simply have all humans (or all conscious creatures for that matter) exist in heaven without having to live through any possible suffering on Earth first?  Though life is worth living for many, there has been a great deal of suffering for many conscious creatures, and regardless of how good one’s life is on Earth, it could never compare to existence in heaven (according to Christians).  There’s no feasible or coherent reason to explain why God didn’t do this if he is truly omnipotent and omnibenevolent.  It appears that this is either an example of something God didn’t have the power to do, which is an argument against his omnipotence (see the next section for more on this attribute), or something God was able to do but didn’t, which is an argument against his omnibenevolence.  Christians can’t have it both ways.
.
Christians must employ a number of mental gymnastics in order to reconcile all of these circumstances with the cherished idea that their God is nevertheless all-loving and omnibenevolent.  I used to share this view as well, though now I see things quite differently.  What I see in these texts is exactly what I would expect to find in a religion invented by human beings (a “higher” evolved primate) living in a primitive, patriarchal, and relatively uneducated culture in the Middle East.  Most noteworthy, however, is the fact that we see their morals evolve along with other aspects of their culture over time, just as we see all cultures and their morals evolve and change over time in response to various cultural driving forces, whether those are the interests of those in power or those seeking power, and/or the ratcheting effect of accumulating knowledge of the consequences of our actions, accompanied by centuries of ethical and other philosophical discourse and deep contemplation.  Man was not made in God’s image — clearly, God was made in man’s image.  This easily explains why this God often acts sexist, petty, violent, callous, narcissistic, selfish, jealous, and overwhelmingly egotistical, and it also explains why this God changes over time into one that begins to promote at least some of the fruits gained in philosophy as well as some level of altruism and love.  After all, these are all characteristics of human beings, not only as we venture from a morally immature childhood to a more morally mature adulthood, but also as we’ve morally evolved over time, both biologically and culturally.
.
Omniscience and Omnipotence
.
Next, I mentioned that Christians believe their God to be omniscient and omnipotent.  First of all, these are mutually exclusive properties.  If their God is omniscient, this is generally taken to mean that he knows everything there is to know, including the future.  Not only does he know the future of all time within our universe, he presumably also knows the future of all his own actions and intentions.  If this is the case, then we have several problems posed for Christian theology.  If God knows the future of his own actions, then he is unable to change them, and therefore fails to be omnipotent.  Christians may argue that God only desires to do what he does, and therefore has no need to change his mind.  Nevertheless, he is still entirely unable to do so, even if he never desires to do so.  I’ll also point out that if an omniscient God knows what it feels like to sin, including what it feels like to think blasphemous thoughts about himself, then he ceases to remain morally pure.  Obviously a Christian can use the same types of arguments that are used to condone God’s heinous actions in the Bible (specifically the Old Testament), namely that whatever God does is good and morally perfect, although we can see the double standard here quite clearly.
.
The second and perhaps more major problem for Christian theology regarding the attribute of omniscience is the problem of free will and the entire biblical narrative.  If God knows exactly what is going to happen before it does, then the biblical narrative is basically just a story made up by God, and all of history has been like a sort of cosmic or divinely created “movie” that is merely being played out and couldn’t have happened any other way.  If God knows everything that is going to happen, then he alone is the only one that could change such a fate.  However, once again, he is unable to do so if he knows what he is going to do before he does it.  God, in this case, must have knowingly created the Devil and all of the evil in the world.  God knows who on Earth is going to heaven and who is going to hell, and thus our belief or disbelief in God or our level of obedience to God is pre-determined before we are even born.  Overall, we can have no free will if God is omniscient and knows what we will do before we do it.
.
My worldview, which is consistent with science, already negates free will, since free will can’t exist with the laws of physics governing everything as they do (regardless of any quantum randomness).  So my view regarding free will is actually compatible with the idea of a God that knows the future, and thus this isn’t a problem for me.  It is, however, a problem for Christians, because they want to have their proverbial cake and eat it too.  To add to this dilemma, even if God were not omniscient, that still wouldn’t change the fact that the only two logical possibilities regarding the future are that it is either predetermined or random (even if God doesn’t know that future).  In either case, humans still couldn’t have free will, and thus the entire biblical narrative, and the entire religion for that matter, is invalid regardless of the problem of omniscience.  The only way for humans to have free will is if two requirements are met.  First, God couldn’t have omniscience, for the logically necessary reasons already mentioned, and second, humans would have to possess the physically and logically impossible property of self-caused actions and behaviors — our intentional actions and behaviors would have to be free of any prior causes contributing to said intentions (i.e. our intentions couldn’t be caused by our brain chemistry, our genes, our upbringing and environment, the laws of physics which govern all of these processes, etc.).  Thus, unless we concede that God isn’t omniscient and that humans possess the impossible ability of causa sui intentions, all of history, beginning with the supposed “Fall of Man” in the Garden of Eden, would have either been predetermined or would have resulted from random causes.  This entails that we would all be receiving a punishment for an original sin that was either set in motion by God’s own deterministic physical laws, or that resulted from random physical laws that God instantiated (even if they appear random to God as well), and which would have likewise been out of our control.
.
Omnipresence
.
The Christian God is also described as being omnipresent.  What exactly could this mean?  It certainly doesn’t mean that God is physically omnipresent in any natural way that we can detect.  Rather, it seems to mean that God’s omnipresence is almost always invisible to us (though not always, e.g., the burning bush), thus transcending the physical realm, with the assumption of a supernatural or metaphysical realm outside of the physical universe that is somehow able to intervene or act within it.  This seems to me like a contradiction in terms as well, since the omnipresence implied by Christians doesn’t seem to include the natural realm (at least not all the time), but only a transcendent kind (all the time).  Christians may argue that God is omnipresent in the natural world; however, this defense could only work by changing the definition of “natural” to mean something other than the universe that we can universally and unmistakably detect, and therefore the argument falls short.  That said, I see this as only a minor problem for Christian theology, and since it isn’t as central or as important a precept as the others, I won’t discuss it further.
.
Love, Worship, and Fear
.
Though Christians may say that they don’t need to be forced to love or worship God because they do so willingly, let’s examine the situation here.  The Bible instructs Christians to fear God in dozens of different verses throughout.  In fact, Deuteronomy 10:12 reads: “And now, Israel, what does the Lord your God require of you, but to fear the Lord your God, to walk in all his ways, to love him, to serve the Lord your God with all your heart and with all your soul.”
.
It’s difficult for me to avoid noticing how this required love, servitude, worship, and fear of God resembles the effective requirements of some kind of celestial dictator, and one that implements a totalitarian ideology with explicit moral codes and a total solution for how one is to conduct themselves throughout their entire lives.  This dictator is similar to that which I’ve heard of in Orwell’s Nineteen Eighty-Four, even sharing common principles such as “thought crimes” which a person can never hide given an omniscient God (in the Christian view) that gives them no such privacy.  Likewise with the Orwellian dystopia, the totalitarian attempt to mold us as it does largely forces itself against our human nature (in many ways at least), and we can see the consequences of this clash throughout history, whether relating to our innate predisposition for maintaining certain human rights and freedoms (including various forms of individuality and free expression), maintaining our human sexuality, and other powerful aspects of who we are as a species given our evolutionary history.
.
If you were to put many of these attributes of God into a person on Earth, we would no doubt see that person as a dictator, and we would no doubt see the total solution implemented as nothing short of totalitarian.  Is there any escape from this totalitarian implementation?  In the Christian view, you can’t ever escape (though they’d never use the word “escape”) from the authority of this God, even after you die.  In fact, it is after a person dies that the fun really begins, with a fate of either eternal torture or eternal paradise (the latter attainable only if you’ve met the arbitrary obligations of this God).  While Christians’ views of God may differ markedly from the perspective I’ve described here, so would the perspective of a slave who has been brainwashed by their master through the use of fear among other psychological motivations (whether the master is conscious of their efficacy or not).  Such a person would likely not see themselves as a slave at all, even though an outsider looking at them would make no mistake in asserting as much.
.
Vicarious Redemption
.
Another controversial concept within Christianity is that of vicarious redemption or “substitutionary atonement”.  There are a number of Christian models that have been formulated over the years to interpret the ultimate meaning of this doctrine, but they all involve some aspect of Jesus Christ undergoing a passion, crucifixion, and ultimately death in order to save mankind from their sins.  This was necessary in the Christian view because after the “Fall of Man”, beginning with Adam and Eve in the Garden of Eden (after they were deceived and tempted to disobey God by a talking snake), all of humanity was supposedly doomed to both physical and spiritual death.  Thankfully, say the Christians, in his grace and mercy, God provided a way out of this dilemma, specifically, the shedding of blood by his perfect son.  So through a human sacrifice, mankind is saved.  This concept seems to have started with ancient Judaic law, specifically within the Law of Moses, where God’s chosen people (the Jews) could pay for their sins or become “right in God’s eyes” through atonement accomplished by animal sacrifice.
.
Looking in the Old Testament of the Bible at the book of Leviticus (16:1-34), we see where this vicarious redemption or substitutionary atonement began, which I will now paraphrase.  Starting with Moses (another likely mythological figure according to many scholars), we read that God told him that his brother Aaron (who had recently had two of his own sons die when they “drew too close to the presence of the Lord”) could only enter the shrine if he bathed, wore certain garments, and then brought with him a bull for a sin offering and a ram for a burnt offering.  He was also to take two goats from the Israelite community to make expiation for himself and for his household.  Then Aaron was to take the two goats and let them stand before the Lord at the entrance of the “Tent of Meeting” and place lots upon the goats (i.e. to randomly determine each goat’s fate in the ritual), one marked for the Lord and the other marked for “Azazel” (i.e. marked as a “scapegoat”).  He was to bring forward the goat designated by lot for the Lord, which he was to offer as a sin offering, while the goat designated by lot for “Azazel” was to be left standing alive before the Lord, to make expiation with it and to send it off to the wilderness to die.  Then he was to offer his “bull of sin offering” by slaughtering it, followed by taking some of the blood of the bull and sprinkling it with his finger several times.  Then he was to slaughter the “people’s goat of sin offering” and do the same thing with the goat’s blood as was done with the bull’s.  A bit later he was to take some more of the blood of the bull and of the goat, apply it to each of the horns of the altar, and then sprinkle the rest of the blood with his finger seven times (this was meant to “purge the shrine of uncleanness”).  Afterward, the live goat was to be brought forward, and Aaron was to lay his hands upon the head of the goat and confess over it all the iniquities and transgressions of the Israelites, whatever their sins, which was meant to put those sins on the head of the goat.  Then the goat was to be sent off into the desert to die.  Then Aaron was to offer his burnt offering and the burnt offering of the people, making expiation for himself as well as for the people.  The fat of the sin offering was to be “turned into smoke” on the altar.  Then the “bull of sin offering” and “goat of sin offering” (whose blood was brought in to purge the shrine) were to be removed from the camp, and their hides, flesh, and dung were to be consumed in a fire.  And this became a law for the Israelites: to atone for their sins by performing this ritual once a year.
.
Animal sacrifice has been a long-practiced ritual in most religions (at one time or another), where it has primarily been used as a form of blood magic to appease the gods of a particular culture and religion, and it has been found in the history of almost all cultures throughout the world.  So it’s not surprising, in the sense that this barbaric practice had precedent and was nearly ubiquitous across cultures, though it was most prevalent in the primitive cultures of the past.  Not surprisingly, these primitive cultures had far less knowledge about the world available to comprise their worldview, and as a result they invoked the supernatural along with a number of incredibly odd rituals and behaviors.  We can also see some obviously questionable ethics involved in this type of practice: an innocent animal suffers and/or dies in order to compensate for the guilty party’s transgressions.  Does this sound like the actions of a morally “good” God?  Does this sound like a moral philosophy that involves personal responsibility, altruism, and love?  I’ll leave that for the reader to decide, but I think the answer is quite clear.
.
It didn’t stop there though, unfortunately.  This requirement of God was apparently only temporary and eventually a human sacrifice was needed, and this came into fruition in the New Testament of the Christian Bible with the stories and myths of a Jewish man (a mix between an apocalyptic itinerant rabbi and a demigod) named Jesus Christ (Joshua/Yeshua) — a man who Christians claim was their messiah, the son of God (and “born of a virgin” as most mythic heroes were), and he was a man who most Christians claim was also God (despite the obvious logical contradiction of this all-human/all-divine duality, which is amplified further in the Trinitarian doctrine).  Along with this new vicarious redemption sacrifice was the creation of a “New Covenant” — a new relationship between humans and God mediated by Jesus Christ.  It should be noted that the earliest manuscripts in the New Testament actually suggest that Jesus was likely originally believed to be a sort of cosmic archangel solely communicating to apostles through divine revelation, dreams, visions, and through hidden messages in the scripture.  Then it appears that Jesus was later euhemerized (i.e. placed into stories in history) later on, most notably in the allegorical and other likely fictions found within the Gospels — although most Christians are unaware of this, as the specific brands of Christianity that have survived to this day mutually assume a historical Jesus.  For more information regarding recent scholarship pertaining to this, please read this previous post.
.
Generally, Christians claim that this “New Covenant” was instituted at the “Last Supper” as part of the “Eucharist”.  To digress briefly, the Eucharist is a ritual considered by most Christians to be a sacrament.  During this ritual, Jesus is claimed to have given his disciples bread and wine, asking them to “do this in memory of me,” while referring to the bread as his “body” and the wine as his “blood” (many Christians think that this was exclusively symbolic).  Some Christians (Catholics, Orthodox, and members of the Church of the East), however, believe in transubstantiation, where the bread and wine that they are about to eat literally become the body and blood of Jesus Christ.  So to add to the aforementioned controversial religious precepts, we have some form of pseudo- or quasi-cannibalism of the purported savior of mankind, the Christian god (along with the allegorical content or intentions, e.g., eating Jesus on the 15th day of the month as Jews would normally have done with the Passover lamb).  The practice of the Eucharist also had precedent in Hellenistic mystery religions, whose members would hold feasts/ceremonies in which they would symbolically eat the flesh and drink the blood of their god(s) as well, to confer on them eternal life from their (often) “dying-and-rising” savior gods.  So just like the animal sacrifice mentioned earlier, practices that were the same as or strikingly similar to the Eucharist (including the reward of eternal life) arose prior to Christianity and were likely influential during Christianity’s origin and development.  Overall, Christianity is basically a syncretism between Judaism and Hellenism, which explains the large number of similarities and the overlap between the two belief systems, as these cultures were able to share, mix, and modify these religious ideas over time.
.
Returning to the vicarious redemption: this passion — this incredible agony, suffering, and eventual death of Jesus Christ, along with the blood he spilled — was, we are told, to reverse humanity’s spiritual death (and forgive us for our sins) once and for all, with no more yearly atonement needed.  This was, after all, the “perfect” sacrifice, and according to Christians, this fate of their supposed messiah was prophesied in their scriptural texts and thus was inevitable.  So what exactly are we to make of this vicarious redemption through human sacrifice that God required to happen?  Or of the pseudo-cannibalism in the Eucharist, for that matter?  There are certainly some good symbolic intentions and implications in the idea of a person selflessly performing an altruistic act to save others, and I completely recognize that.  However, it is something else entirely to assume that one’s transgressions can be borne by another person so that those transgressions are “made clean” or nullified or “canceled out” in some way, let alone through the act of torturing and executing a human being.  In the modern world, if we saw such a thing, we would be morally obligated to stop it.  Not only to prevent that human being from suffering needlessly, but also (if they were willingly submitting themselves to the torture and execution) to give that person the proper mental health resources to protect themselves and hopefully to repair whatever psychological ailment caused such a lapse in their sanity in the first place.
.
As for the Eucharist, there’s definitely something to be said about the idea of literally or symbolically eating another human being.  While a symbolic version is, generally speaking, significantly less controversial, the idea altogether seems to be yet another form of barbaric blood magic (just like the crucifixion, and the non-human animal sacrifice that preceded it).  The act of remembering Jesus via the Eucharist is an admirable intention and an allegory for becoming one with Jesus, and if the bread and wine were seen to represent his teachings and message only (and explicitly so), then it wouldn’t be nearly as controversial.  However, given that there is a very obvious intention (in the Gospel according to Mark, for example, 14:16-24) to eat Jesus in place of the Passover lamb, the Eucharist is at the very least a controversial allegory, and if it isn’t supposed to be allegorical (or entirely allegorical), that would explain why the belief in transubstantiation (held by many Christians) is as important as it is.
.
There are two final things I’d like to mention regarding Jesus Christ’s role in this vicarious redemption.  Let’s assume for the moment that this act isn’t immoral, and view it from a Christian perspective (however irrational that may be).  For one, Jesus knew perfectly well that he was going to be resurrected afterward, and so ultimately he didn’t die the way you or I would die if we had made the same sacrifice, for we would die forever.  He knew that his death would only be temporary and physical (though many Christians think he was physically as opposed to spiritually resurrected).  If I knew that I would not be resurrected, mine would be a much more noble sacrifice, for I would be giving up my life indefinitely.  Furthermore, with knowledge of the afterlife, and if I accept that heaven and hell exist, an even greater sacrifice would have been for Jesus to die and go to hell for all eternity, as that would be the greatest possible self-sacrifice imaginable.  However, this wasn’t what happened.  Instead, Jesus suffered (horribly, no less), and then died, but only for three days, after which he basically became invincible and impervious to any further pain or suffering.  Regardless of how Christians would respond to this point, the fact remains that a more noble sacrifice was possible but didn’t occur.  The second and last point I’ll mention is the promise offered from this action — that true believers are now granted an eternity in heaven.  One little issue here for Christians is that they can’t know whether or not God will keep his end of the bargain.  God can do whatever he wants, and whatever he does is morally good in the Christian view — even if that means he changes his mind and reneges on his promise.  If Christians argue that God would never do this, they are assuming that they know with 100% certainty what God will (or will not) do, and this is outside of their available knowledge (again, according to the Christian view of not knowing what God is thinking or what he will do).  As with the rest of the religion, the truth of the promise of eternal life is itself based on faith, and believers may end up in hell anyway (it would be up to God either way).  Furthermore, even if all believers did go to heaven, could they not rebel just as Lucifer did when he was an angel in heaven?  Once again, Christians may deny this, but there’s nothing in their scriptures to suggest that it is impossible, especially given the precedent of God’s highest angel having done so in the past.
.
Final Thoughts
.
I will say that there are a lot of good principles found within the Christian religion, including those of forgiveness and altruism, but there are morally reprehensible things as well, and we should expect that a religion designed by men living long ago carries both barbaric and enlightened human ideas, with the more enlightened ideas coming later as the religion co-developed with the culture around it.  Many of the elements we admire (such as altruism and forgiveness) exist in large part because they are simply aspects of human nature that evolution favored (since cooperation and social relationships helped us survive many environmental pressures), and this fact also explains why they are seen cross-culturally and do not depend on any particular religion.  Having said that, I will add that on top of our human nature, we also learn new and advantageous ideas over time, including those pertaining to morals and ethics, and it is to philosophical contemplation and discourse that we owe our thanks, not to any particular religion, even if religions endorse those independent ideas.  One of the main problems with religion, especially one such as Christianity, is that it carries with it so many absurd assumptions and beliefs about reality and our existence that the good philosophical fruits accompanying it are often tainted with dangerous dogma and irrational faith in the unfalsifiable, and this is a serious problem that humanity has been battling for millennia.
I found a quote describing Christianity from a particular perspective that, while offensive to Christians, does shed some light on the overall ludicrous nature of their (and my former) belief system.
.
Christianity was described as:
.
“The belief that a cosmic Jewish zombie who was his own father can make you live forever if you symbolically eat his flesh and drink his blood and telepathically tell him you accept him as your master, so he can remove an evil force from your soul that is present in humanity because a rib-woman was convinced by a talking snake to eat from a magical tree”
.
Honestly, that quote says it all.  Regarding the Adam and Eve myth, most Christians (let alone most people in general) don’t realize that Eve received more of the blame than Adam did, based on fallacious reasoning from God: she received a punishment that Adam did not have to endure (supposedly painful childbirth for all women as a result), and not vice versa, since both men and women would have to endure the punishment meant “for Adam”, that is, difficulty farming and producing food.  It seems that Eve was punished more, presumably because she ate the fruit of the tree of knowledge first (or simply because she was a woman rather than a man).  However, what is quite clear to me from reading this story is that Eve was deceived by a highly skilled deceiver (the serpent) that was hell-bent on getting her to eat from the tree.  Adam, however, was duped by his wife, a woman made from an insignificant part of his body (his rib), and a woman who was not a highly skilled deceiver as the serpent was.  It seems to me that this gives the expression “Fall of Man” a whole new meaning, as in this case it seems that women would have become the “head of the household” instead.  Yet what we see is a double standard, and it appears that the sexist, patriarchal authors showed their true colors quite well when they devised this myth.  Their motivations are obvious considering the way they had God create Eve (from an insignificant part of Adam, and for the purpose of keeping him company as a subservient wife), and the way they portray her is clearly meant to promote patriarchal dominance.  This is further illustrated by the implication that God is male, referred to as “He”, etc., despite the fact that all living animals on Earth are born from a “life producing” or “life creating” female.  It’s nothing but icing on the cake, I guess.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 3 of 3)


This is part 3 of 3 of this post.  Click here to read part 2.

Reactive Devaluation

Studies have shown that if a claim is believed to have come from a friend or ally, the person receiving it will be more likely to think that it has merit and is truthful.  Likewise, if it is believed to have come from an enemy or some member of the opposition, it is significantly devalued.  This bias is known as reactive devaluation.  A few psychologists determined that the likeability of the source of the message is one of the key factors involved in this effect: people will value information coming from a likeable source more than from a source they don’t like, to roughly the same degree that they value information coming from an expert over a non-expert.  This is quite troubling.  We’ve often heard about how political campaigns and certain debates are seen more as popularity contests than anything else, and this bias is likely a primary reason why this is often the case.

Unfortunately, a large part of society has painted a nasty picture of atheism, skepticism, and the like.  As it turns out, people who do not believe in a god (mainly the Christian god) are the least trusted minority in America, according to a sociological study performed a few years ago (Johanna Olexy and Lee Herring, “Atheists Are Distrusted: Atheists Identified as America’s Most Distrusted Minority, According to Sociological Study,” American Sociological Association News, May 3, 2006).  Regarding the fight against dogmatism, the rational thinker needs to carefully consider how to improve their likeability in the eyes of the recipients of their message, if nothing else by going above and beyond to remain kind and respectful and to maintain a positive and uplifting attitude while presenting their arguments.  In any case, the person arguing will always be up against whatever societal expectations and preferences may reduce their likeability, so there’s also a need to remind others, as well as ourselves, about what is most important when discussing an issue — the message rather than the messenger.

The Backfire Effect & Psychological Reactance

There have already been several cognitive biases mentioned that interfere with people’s abilities to accept new evidence that contradicts their beliefs.  To make matters worse, psychologists have discovered that people often react to disconfirming evidence by actually strengthening their beliefs.  This is referred to as the backfire effect.  This can make refutations futile most of the time, but there are some strategies that help to combat this bias.  The bias is largely exacerbated by a person’s emotional involvement and the fact that people don’t want to appear to be unintelligent or incorrect in front of others.  If the debating parties let tempers cool down before resuming the debate, this can be quite helpful regarding the emotional element.  Additionally, if a person can show their opponent how accepting the disconfirming evidence will benefit them, they’ll be more likely to accept it.  There are times when some of the disconfirming evidence mentioned is at least partially absorbed by the person, and they just need some more time in a less tense environment to think things over a bit.  If the situation gets too heated, or if a person risks losing face with peers around them, the ability to persuade that person only decreases.

Another similar cognitive bias is referred to as reactance, which is basically a motivational reaction to instances of perceived infringement on behavioral freedoms.  That is, if a person feels that someone is taking away their choices or limiting the number of them, such as in cases where they are being heavily pressured into accepting a different point of view, they will often respond defensively.  They may feel that freedoms are being taken away from them, and it is only natural for a person to defend whatever freedoms they see themselves having or potentially losing.  Whenever one uses reverse psychology, one is in fact playing on at least an implicit knowledge of this reactance effect.  Does this mean that rational thinkers should use reverse psychology to persuade dogmatists to accept reason and evidence?  I personally don’t think that this is the right approach, because it could also backfire, and I would prefer to be honest and straightforward even if this results in less efficacy.  At the very least, however, one needs to be aware of this bias, to be careful about how one phrases arguments and presents the evidence, and to maintain a calm and respectful attitude during these discussions, so that the other person doesn’t feel the need to defend themselves at the cost of ignoring evidence.

Pareidolia & Other Perceptual Illusions

Have you ever seen the faces of animals or people while looking at clouds, shadows, or other similar stimuli?  The brain has many pattern recognition modules and will often respond erroneously to vague or even random stimuli, such that we perceive something familiar even when it isn’t actually there.  This is called pareidolia, and it is a type of apophenia (i.e. seeing patterns in random data).  Not surprisingly, many people have reported instances of seeing various religious imagery, most notably the faces of prominent religious figures, in ordinary phenomena.  People have also erroneously heard “hidden messages” in records played in reverse, and experienced similar illusory perceptions.  This results from the fact that the brain often fills in gaps, and if one expects to see a pattern (even if unconsciously driven by some emotional significance), then the brain’s pattern recognition modules can “over-detect” and produce false positives, through excessive sensitivity and by conflating unconscious imagery with one’s actual sensory experiences, leading to a lot of distorted perceptions.  It goes without saying that this cognitive bias is perfect for reinforcing superstitious or supernatural beliefs, because what people tend to see or experience from this effect are those things that are most emotionally significant to them.  After it occurs, people believe that what they saw or heard must have been real and therefore significant.

Most people are aware that hallucinations occur in some people from time to time, under certain neurological conditions (generally caused by an imbalance of neurotransmitters).  A bias like pareidolia can effectively produce similar experiences, but without requiring the more specific neurological conditions that a textbook hallucination requires.  That is, the brain doesn’t need to be in any drug-induced state, nor does it need to be physically abnormal in any way for this to occur, making it much more common than hallucination.  Since pareidolia is also compounded with one’s confirmation bias, and since the brain is constantly implementing cognitive dissonance reduction mechanisms in the background (as mentioned earlier), this can also result in groups of people having the same illusory experience, specifically if the group shares the same unconscious emotional motivations for “seeing” or experiencing something in particular.  These circumstances, along with the power of suggestion from even just one member of the group, can lead to what are known as collective hallucinations.  Since collective hallucinations like these inevitably lead to a feeling of corroboration and confirmation among the various people experiencing them (say, seeing a “spirit” or “ghost” of someone significant), they can reinforce one another’s beliefs, despite the fact that the experience was based on a cognitive illusion largely induced by powerful human emotions.  It’s likely no coincidence that this type of group phenomenon seems to occur only with members of a religion or cult, specifically those that are in a highly emotional and synchronized psychological state.  There have been many cases of large groups of people from various religions around the world claiming to see some prominent religious figure or other supernatural being, illustrating just how universal this phenomenon is within a highly emotional group context.

Hyperactive Agency Detection

There is a cognitive bias that is somewhat related to the aforementioned perceptual illusion biases which is known as hyperactive agency detection.  This bias refers to our tendency to erroneously assign agency to events that happen without an agent causing them.  Human beings all have a theory of mind that is based in part on detecting agency, where we assume that other people are causal agents in the sense that they have intentions and goals just like we do.  From an evolutionary perspective, we can see that our ability to recognize that others are intentional agents allows us to better predict another person’s behavior which is extremely beneficial to our survival (notably in cases where we suspect others are intending to harm us).

Unfortunately our brain’s ability to detect agency is hyperactive in that it errs on the side of caution by over-detecting possible agency, as opposed to under-detecting (which could negatively affect our survival prospects).  As a result, people will often ascribe agency to things and events in nature that have no underlying intentional agent.  For example, people often anthropomorphize (or implicitly assume agency to) machines and get angry when they don’t work properly (such as an automobile that breaks down while driving to work), often with the feeling that the machine is “out to get us”, is “evil”, etc.  Most of the time, if we have feelings like this, they are immediately checked and balanced by our rational minds that remind us that “it’s only a car”, or “it’s not conscious”, etc.  Similarly, when natural disasters or other similar events occur, our hyperactive agency detection snaps into gear, trying to find “someone to blame”.  This cognitive bias explains quite well, at least as a contributing factor as to why so many ancient cultures assumed that there were gods that caused earthquakes, lightning, thunder, volcanoes, hurricanes, etc.  Quite simply, they needed to ascribe the events as having been caused by someone.

Likewise, if we are walking in a forest and hear some strange sounds in the bushes, we may often assume that some intentional agent is present (whether another person or some other animal), even though it may simply be the wind that is causing the noise.  Once again, we can see in this case the evolutionary benefit of this agency detection, even with the occasional “false positive”: the noise in the bushes could very well be a predator or other source of danger, so it is usually better for our brains to assume that the noise is coming from an intentional agent.  It’s also entirely possible that even in cases where the source of the sound was determined to be the wind, early cultures may have taken an intentional stance toward the wind itself (e.g. a “wind” god, etc.).  In all of these cases, we can see how our hyperactive agency detection can lead to irrational, theistic, and other supernatural belief systems, and we need to be aware of such a bias when evaluating our intuitions about how the world works.

Risk Compensation

It is only natural that the more a person feels protected from various harms, the less carefully they tend to behave, and vice versa.  This tendency, sometimes referred to as risk compensation, is often harmless, because generally speaking, if one is well protected from harm, they do have less to worry about than one who is not, and thus they can take greater relative risks as a result of that larger cushion of safety.  Where this tends to cause a problem, however, is in cases where the perceived level of protection is far higher than it actually is.  The worst-case scenario that comes to mind is a person who appeals to the supernatural and thinks that it doesn’t matter what happens to them, or that no harm can come to them, because of some belief in a supernatural source of protection, let alone one that they believe provides the most protection possible.  In this scenario, their risk compensation is biased to a negative extreme, and they will be more likely to behave irresponsibly because they don’t have realistic expectations of the consequences of their actions.  In a post-Enlightenment society, we’ve acquired a higher responsibility to heed the knowledge gained from scientific discoveries so that we can behave more rationally as we learn more about the consequences of our actions.

It’s not at all difficult to see how the effects of various religious dogma on one’s level of risk compensation can inhibit that society from making rational, responsible decisions.  We have several serious issues plaguing modern society including those related to environmental sustainability, climate change, pollution, war, etc.  If believers in the supernatural have an attitude that everything is “in God’s hands” or that “everything will be okay” because of their belief in a protective god, they are far less likely to take these kinds of issues seriously because they fail to acknowledge the obvious harm on everyone in society.  What’s worse is that not only are dogmatists reducing their own safety, but they’re also reducing the safety of their children, and of everyone else in society that is trying to behave rationally with realistic expectations.  If we had two planets, one for rational thinkers, and one for dogmatists, this would no longer be a problem for everyone.  The reality is that we share one planet and thus we all suffer the consequences of any one person’s irresponsible actions.  If we want to make more rational decisions, especially those related to the safety of our planet’s environment, society, and posterity, people need to adjust their risk compensation such that it is based on empirical evidence using a rational, proven scientific methodology.  This point clearly illustrates the need to phase out supernatural beliefs since false beliefs that don’t correspond with reality can be quite dangerous to everybody regardless of who carries the beliefs.  In any case, when one is faced with the choice between knowledge versus ignorance, history has demonstrated which choice is best.

Final thoughts

The fact that we are living in an information age with increasing global connectivity, and in a society that is becoming increasingly reliant on technology, will likely help to combat the ill effects of these cognitive biases.  These factors are making it increasingly difficult to hide oneself from the evidence and from the proven efficacy of using the scientific method to form accurate beliefs about the world around us.  Societal expectations are also changing as a result, and it is becoming less and less socially acceptable to be irrational and dogmatic, and to not take scientific methodology seriously.  In all of these cases, having knowledge about these cognitive biases will help people to combat them.  Even if we can’t eliminate the biases, we can safeguard ourselves by preparing for any potential ill effects that those biases may produce.  This is where reason and science come into play, providing us with the means to counter our cognitive biases by applying rational skepticism to all of our beliefs, and by using a logical and reliable methodology based on empirical evidence to arrive at beliefs that we can be justifiably confident (though never certain) in.  As Darwin once said, “Ignorance more frequently begets confidence than does knowledge; it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.”  Science has constantly been ratcheting our way toward a better understanding of the universe we live in, despite the fact that the results we obtain from science are often at odds with our intuitions about how we think the world is.  Only relatively recently has science started to provide us with information regarding why our intuitions are so often at odds with the scientific evidence.

We’ve made huge leaps within neuroscience, cognitive science, and psychology, and these discoveries are giving us the antidote that we need in order to reduce or eliminate irrational and dogmatic thinking.  Furthermore, those that study logic have an additional advantage in that they are more likely to spot fallacious arguments that are used to support this or that belief, and thus they are less likely to be tricked or persuaded by arguments that often appear to be sound and strong, but actually aren’t.  People fall prey to fallacious arguments all the time, and it is mostly because they haven’t learned how to spot them (which courses in logic and other reasoning can help to mitigate).  Thus, overall I believe it is imperative that we integrate knowledge concerning our cognitive biases into our children’s educational curriculum, starting at a young age (in a format that is understandable and engaging of course).  My hope is that if we start to integrate cognitive education (and complementary courses in logic and reasoning) as a part of our foundational education curriculum, we will see a cascade effect of positive benefits that will guide our society more safely into the future.