
Is Death Bad For You? A Response to Shelly Kagan


I’ve enjoyed reading and listening to the philosopher Shelly Kagan in debates, lectures, and various articles.  One topic he’s well known for is death, specifically the fear of death, and trying to understand the details behind, and justification for, the general attitude people have toward the concept of death.  I wrote a little about the fear of death long ago, but coming across an article of Kagan’s reignited my interest in the topic.  He wrote an article a few years ago in The Chronicle, where he expounds on some of the ontological puzzles related to the concept of death.  I thought I’d briefly summarize the article’s main points and give a response to it here.

Can Death Be Bad For Us?

Kagan begins with the assumption that the death of a person’s body results in the end of that person’s existence.  This is certainly a reasonable assumption, as there’s no evidence to the contrary, that is, no evidence that persons can exist without a living body.  Simple enough.  Then he asks: if death is the end of our existence, how can being dead be bad for us?  Some would say that death is particularly bad for the survivors of the deceased, since they miss the person who’s died and the relationship they once had with that person.  But it seems more complicated than that, because we could likewise have an experience where a cherished friend or family member leaves us and goes somewhere so far away that we can be confident we’ll never see that person again.

Both the death of that person and the alternative of their leaving forever result in the same loss of relationship.  Yet most people would say that if we knew they had died, rather than simply left forever, there would be more to be sad about, with death being bad for them, not simply bad for us.  And this sadness results from more than simply knowing how they died, since the process of death itself could have been unpleasant but could also have been entirely benign (such as dying peacefully in one’s sleep).  Similarly, Kagan tells us, the prospect of dying can be unpleasant as well, but, he asserts, this only seems to make sense if death itself is bad for us.

Kagan suggests:

Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. That explanation of death’s badness is known as the deprivation account.

While the deprivation account seems plausible, Kagan thinks that accepting it leads to a couple of potential problems.  He argues that if something is true, it seems there must be some time when it’s true.  So when would it be true that death is bad for us?  Not now, he says, because we’re not dead now.  Not after we’re dead either, because then we no longer exist, and nothing can be bad for a being that no longer exists.  This seems to lead to the conclusion that either death isn’t bad for anyone after all or, alternatively, that not all facts are datable.  He gives us another possible example of an undatable fact.  If Kagan shoots “John” today such that John slowly bleeds to death over two days, but Kagan himself dies tomorrow (before John does), then after John dies, can we say that Kagan killed John?  If Kagan did kill John, when did he kill him?  Kagan no longer existed when John died, so how can we say that Kagan killed John?

I think we could agree with this and say that while it’s true that Kagan didn’t technically kill John, a trivial response to this supposed conundrum is to say that Kagan’s actions led to John’s death.  This solves the conundrum by working within the constraints of language, while highlighting the fact that when we say someone killed X, what we really mean is that someone’s actions led to the death of X, thus allowing us to be consistent with our conceptions of existence, causality, killing, blame, and so on.

Existence Requirement, Non-Existential Asymmetry, & Its Implications

In any case, if all facts are datable (or at least facts like these), then we should be able to say when exactly death is bad for us.  Can things only be bad for us while we exist?  If so, this is what Kagan refers to as the existence requirement.  If we don’t accept such a requirement, that is, that one must exist in order for things to be bad for one, other problems arise, like being able to say that non-existence could be bad for someone who has never existed but could possibly have existed.  That seems a pretty strange claim to hold to.  So if we refuse to accept that it’s a tragedy for possibly existent people to never come into existence, then we have to accept the existence requirement, which I would contend is the more plausible assumption.  But if we do so, then it seems we have to accept that death isn’t in fact bad for us.

Kagan suggests that we may be able to reinterpret the existence requirement, and he does this by distinguishing between two versions: a modest version, which asserts that something can be bad for you only if you exist at some time or another, and a bold version, which asserts that something can be bad for you only if you exist at the same time as that thing.  Accepting the modest version seems to offer a way out of the problems posed here, but it too has some counter-intuitive implications.
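To make the difference explicit, the two versions can be stated schematically (this formalization is mine, not Kagan’s; read Bad(e, x) as “event e is bad for person x”):

```latex
% Modest: e can be bad for x only if x exists at some time or another
\mathrm{Bad}(e, x) \;\Rightarrow\; \exists t\, \mathrm{Exists}(x, t)

% Bold: e can be bad for x at time t only if x exists at that very time
\mathrm{BadAt}(e, x, t) \;\Rightarrow\; \mathrm{Exists}(x, t)
```

The modest version quantifies over the whole of a life, which is why it lets death (which occurs when we no longer exist) still count as bad for someone who once existed.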

He illustrates this with another example:

Suppose that somebody’s got a nice long life. He lives 90 years. Now, imagine that, instead, he lives only 50 years. That’s clearly worse for him. And if we accept the modest existence requirement, we can indeed say that, because, after all, whether you live 50 years or 90 years, you did exist at some time or another. So the fact that you lost the 40 years you otherwise would have had is bad for you. But now imagine that instead of living 50 years, the person lives only 10 years. That’s worse still. Imagine he dies after one year. That’s worse still. An hour? Worse still. Finally, imagine I bring it about that he never exists at all. Oh, that’s fine.

He thinks this must be accepted if we accept the modest version of the existence requirement, but how can this be?  If one’s life is shortened relative to what one would have had, this is bad, and it gets progressively worse as the life is hypothetically shortened, until a life span of zero is reached, at which point the person no longer meets the modest existence requirement and thus can’t have anything be bad for them.  So it’s as if things get worse and worse as the potential life span approaches zero, and then, the moment zero is reached, the situation suddenly becomes benign and is no longer an issue.

I think a reasonable response to this scenario is to reject the claim that hypothetically shrinking the life span to zero suddenly ceases to be an issue.  What’s glossed over in this example is that it’s a series of comparisons between one hypothetical life and another (two lives with different non-zero life spans), culminating in a comparison between a hypothetical life and no life at all (a life span of zero).  The example illustrates whether something is better or worse in comparison, not whether something is good or bad intrinsically speaking.  The fact that somebody lived for 90 years, or for only 10, isn’t in itself good or bad, but only better or worse in comparison to somebody who lived for a different length of time.

The Intrinsic Good of Existence & Intuitions On Death

However, I would go further and say that there is an intrinsic good to existing or being alive, and that most people would agree with such a claim (the strong will to live that most of us possess is evidence of our acknowledging such a good).  That’s not to say that never having lived is bad, but only that living is good.  If not living is neither good nor bad but a neutral or inconsequential state, then we can hold the position that living is better than not living, even if not living isn’t bad at all (after all, it’s neutral, neither good nor bad).  Thus we can still maintain our modest existence requirement while consistently holding these views.  We can say that not living is neither good nor bad, that living 10 years is good (and better than not living), that living 50 years is even better, and that living 90 years is better yet (assuming, for the sake of argument, that the quality of life is equivalently good in every year of one’s life).  What’s important to note here is that not having lived in the first place doesn’t involve the loss of a good, because there was never any good to begin with.  Extending the life span, on the other hand, increases the quantity of the good by increasing its duration.
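To see that nothing strange happens at zero on this view, here is a minimal toy model (my own illustration, not Kagan’s; the constant per-year good is an assumption made purely for the example):

```python
# A toy model of the view sketched above: each year lived contributes a
# fixed intrinsic good g > 0, and never existing contributes nothing at
# all (the neutral baseline), so value never dips below zero.

GOOD_PER_YEAR = 1.0  # hypothetical constant quality of life, g > 0

def life_value(years_lived: float) -> float:
    """Total intrinsic good accrued over a life of the given length.
    A life span of zero accrues no good: neutral, not negatively valued."""
    return GOOD_PER_YEAR * years_lived

def deprivation(actual_years: float, counterfactual_years: float) -> float:
    """How much worse the actual life is *in comparison* to the
    counterfactual one (the deprivation account's opportunity cost)."""
    return life_value(counterfactual_years) - life_value(actual_years)

# Shorter lives compare worse and worse against the 90-year life, but
# a life span of zero never falls below the neutral baseline:
for years in (90, 50, 10, 1, 0):
    print(years, life_value(years), deprivation(years, 90))
```

The apparent paradox of a life getting worse and worse as it shrinks, then suddenly becoming fine at zero, dissolves once intrinsic good-or-bad is kept separate from comparative better-or-worse.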

Kagan seems to agree overall with the deprivation account of why we believe death is bad for us, though he holds that some puzzles like those he presented still remain.  I think one of the important things to take away from his article is the illustration that we have obvious limitations in the language we use to describe our ontological conceptions.  These scenarios and our intuitions about them also seem to show that we all generally accept that living or existence is intrinsically good.  They may also highlight the fact that many people intuit that some part of us (such as a soul) continues to exist after death, such that death can be bad for us after all (since our post-death “self” would still exist).  While the belief in souls is irrational, it may help to explain some common intuitions about death.

Dying vs. Death, & The Loss of An Intrinsic Value

Remember that Kagan began his article by distinguishing between how one dies, the prospect of dying, and death itself.  He asked us: how can the prospect of dying be bad if death itself (which obtains only once we no longer exist) isn’t bad for us?  Well, perhaps we should consider that when people say death is bad for us, they tend to mean that dying itself is bad for us.  That is to say, the prospect of dying isn’t unpleasant because death is bad for us, but rather because dying itself is bad for us.  Since dying occurs while we’re still alive, culminating in one’s eventual loss of life, dying can be bad for us even if we accept the bold existence requirement, namely that something can only be bad for us if we exist at the same time as that thing.  So if the “thing” we’re referring to is our dying rather than our death, this would be consistent with the deprivation account of death, would allow us to put a date (or time interval) on such an event, and would seem to resolve the aforementioned problems.

As for Kagan’s opening question, when is death bad for us?  If we accept my previous response that dying, rather than death, is what’s bad for us, then it would stand to reason that death itself isn’t ever bad for us (or doesn’t have to be); rather, what is bad for us is the loss of life that occurs as we die.  If I had to identify exactly when the “badness” we’re actually referring to occurs, I suppose I would choose an increment of time just before one’s death occurs (with an exclusive upper bound set at the time of death).  If time is quantized, as some approaches to quantum gravity suggest, then the smallest physically meaningful interval of time is the Planck time.  So I would argue that, at the very least, the final Planck-time interval of our life (if not a longer interval) marks the event or time interval of our dying.
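Put in symbols (my notation, not Kagan’s): if t_d is the time of death and Δ is the duration of dying, the interval I have in mind is half-open, with the Planck time t_P as a lower bound on its length:

```latex
I_{\mathrm{dying}} = [\, t_d - \Delta,\; t_d \,), \qquad \Delta \,\ge\, t_P \approx 5.39 \times 10^{-44}\ \mathrm{s}
```

The exclusive upper bound captures the idea that the badness belongs to dying, which happens entirely while we still exist, and never to death itself.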

It is this last interval of time ticking away that is bad for us, because it leads to our loss of life, which is the loss of an intrinsic good.  So while I would argue that never having received an intrinsic good in the first place isn’t bad (such as never having lived), the loss of (or the process of losing) an intrinsic good is bad.  I agree with Kagan that the deprivation account is on the right track, but I also think the problems he’s posed are resolvable by thinking more carefully about the terminology we use when describing these concepts.

Neurological Configuration & the Prospects of an Innate Ontology


After a brief discussion on another blog pertaining to whether or not humans possess some kind of innate ontology or other forms of what I would call innate knowledge, I decided to expand on my reply to that blog post.

While I agree that at least most of our knowledge is acquired through learning, specifically through the acquisition and use of memorized patterns of perception (as this is generally how I would define knowledge), I also believe that there are at least some innate forms of knowledge, including some that likely result from certain aspects of our brain’s innate neurological configuration and implementation strategy.  This proposed form of innate knowledge would seem to provide the foundation for later acquiring, through learning, the bulk of our knowledge.  That foundation would perhaps be best described as a fundamental scaffold of our ontology, an innate core on which our continually developing ontology is built.

My basic contention is that the hierarchical configuration of neuronal connections in our brains is highly analogous to the hierarchical relationships used to produce our conceptualization of reality.  In order to make sense of the world, our brains seem to fracture reality into many discrete elements, properties, concepts, propositions, etc., which are all connected to each other through various causal relationships or what some might call semantic hierarchies.  So it seems plausible, if not likely, that the brain accomplishes a fundamental aspect of our ontology by means of an innate hardware schema that involves neurological branching.
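As a loose illustration of the kind of branching schema I have in mind (a toy sketch, not a claim about actual neural wiring; the concepts and tree structure are invented for the example), a simple tree already encodes the “is-a” relationships a semantic hierarchy relies on:

```python
# A toy semantic hierarchy: each concept branches into more specific
# sub-concepts, mirroring the kind of hierarchical branching described
# above (purely illustrative; real neural hierarchies are not literal trees).

hierarchy = {
    "entity": {
        "animal": {"dog": {}, "bird": {}},
        "object": {"tool": {}},
    },
}

def _contains(tree: dict, concept: str) -> bool:
    """True if `concept` appears anywhere within `tree`."""
    return any(name == concept or _contains(children, concept)
               for name, children in tree.items())

def is_a(concept: str, ancestor: str, tree: dict = hierarchy) -> bool:
    """True if `concept` sits somewhere beneath `ancestor` in the tree."""
    for name, children in tree.items():
        if name == ancestor:
            return _contains(children, concept)
        if is_a(concept, ancestor, children):
            return True
    return False

print(is_a("dog", "animal"))  # True:  a dog is an animal
print(is_a("dog", "object"))  # False: a dog is not an object
```

Learning, on this picture, amounts to growing and re-shaping such a structure rather than building its branching logic from scratch.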

As evidence in the neurosciences suggests, our acquisition of knowledge through learning what those discrete elements, properties, concepts, propositions, etc., are appears to involve synaptogenesis followed by pruning, modifying, and reshaping a hierarchical neurological configuration, so as to end up with a more specific hierarchical arrangement, one that more accurately correlates with the reality we interact with and learn about through our sensory organs.  Since the specific arrangement that eventually forms couldn’t have been entirely coded for in our DNA (due to its extremely high level of complexity and information density), it ultimately had to be fine-tuned to this level of complexity after its initial pre-sensory configuration developed.  Nevertheless, the DNA sequences that were naturally selected to produce the highly capable brains of human beings (as opposed to the DNA that guides the formation of the brain of a much less intelligent animal) clearly encode more effective hardware implementation strategies than those of our evolutionary ancestors.  These naturally selected neurological strategies seem to control what particular types of causal patterns the brain is theoretically capable of recognizing (including some upper limit of complexity), and they also seem to control how the brain stores and organizes these patterns for later use.  So overall, my contention is that these naturally selected strategies are themselves a type of knowledge, because they seem to provide the very foundation for our initial ontology.
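To make the overproduce-then-prune idea concrete, here is a deliberately crude sketch (my illustration only; the Hebbian-style rule, numbers, and threshold are invented for the example, and real synaptic plasticity is vastly more intricate):

```python
import random

# Toy overproduce-and-prune sketch: a dense set of random connection
# weights is tuned by repeated correlated activity, and the weakest
# connections are then pruned, leaving a more specific arrangement than
# the initial (pre-sensory) configuration specified.

random.seed(0)
N = 8
weights = [[random.random() for _ in range(N)] for _ in range(N)]  # dense start

def hebbian_step(weights, active_units, rate=0.1):
    """Strengthen connections between co-active units (a Hebb-like rule)."""
    for i in active_units:
        for j in active_units:
            if i != j:
                weights[i][j] += rate

def prune(weights, threshold):
    """Zero out any connection weaker than the threshold."""
    for i in range(N):
        for j in range(N):
            if weights[i][j] < threshold:
                weights[i][j] = 0.0

# Repeated "sensory" input: units 0-3 tend to fire together.
for _ in range(50):
    hebbian_step(weights, active_units=[0, 1, 2, 3])

prune(weights, threshold=1.0)
surviving = sum(1 for row in weights for w in row if w > 0)
print(f"{surviving} of {N * N} connections survive pruning")
```

The surviving connections are exactly those that carried the correlated activity, which is the sense in which the final arrangement correlates with the input rather than being fully specified in advance.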

Based on my understanding, after many of the initial activity-independent mechanisms of neural development have occurred in some region of the developing brain, such as cellular differentiation, cellular migration, axon guidance, and some amount of synapse formation, the activity-dependent mechanisms of neuronal development (such as neural activity caused by the sensory organs in the process of learning) finally begin to modify those synapses and axons into a new hierarchical arrangement.  It is especially worth noting that even though much of the synapse formation during neural development is mediated by activity-dependent mechanisms, such as the aforementioned neural activity produced by the sensory organs during perceptual development and learning, there is also spontaneous neural activity forming many of these synapses before any sensory input is present, thus contributing to the innate neurological configuration (i.e., that which is formed before any sensation or learning has occurred).

Thus, the hierarchy subsequently formed through neural/sensory stimulation via learning appears to begin from a parent hierarchical starting point based on neural developmental processes coded for in our DNA, as well as on synaptogenic mechanisms involving spontaneous pre-sensory neural activity.  So our brain’s innate (i.e., pre-sensory) configuration likely contributes to our making sense of the world by providing a starting point that reflects the fundamental hierarchical nature of reality, on which all subsequent knowledge is built.  In other words, if our mature conceptualization of reality involves a very specific type of hierarchy, then an innate/pre-sensory hierarchical schema of neurons would be a plausible, if not expected, physical foundation for it (see Edelman’s Theory of Neuronal Group Selection for more empirical support of these points).

Additionally, if the brain’s wiring has evolved to detect dimensions of difference in the world (unique sensory/perceptual patterns, that is, such as quantity, colors, sounds, tastes, smells, etc.), then it would make sense that the brain can give any particular pattern an identity by having a unique schema of hardware, or a unique use of said hardware, to perceive such a pattern and distinguish it from other patterns.  After the brain does this, the patterns are then arguably organized by the logical absolutes.  For example, if the hardware scheme or process used to detect a particular pattern “A” exists, and all other patterns we perceive have or are given their own unique hardware-based identity (i.e., “not-A”, a.k.a. B, C, D, etc.), then the brain would effectively be wired such that pattern “A” = pattern “A” (law of identity), any other pattern, which we can call “not-A”, does not equal pattern “A” (law of non-contradiction), and any pattern must either be “A” or some other pattern, even if brand new, which we can also call “not-A” (law of the excluded middle).  So by giving a pattern a physical identity (i.e., a specific type of hardware configuration in our brain that, when activated, represents the detection of one specific pattern), our brains effectively produce the logical absolutes through the innate wiring strategy used to distinguish one pattern from another.  So although it may be true that no patterns can be stored in the brain until after learning begins (through sensory experience), the fact that the DNA-mediated brain-wiring strategy inherently involves eventually giving a particular learned pattern a unique neurological hardware identity, in order to distinguish it from other stored patterns, suggests that the logical absolutes themselves are an innate and implicit property of how the brain stores recognized patterns.
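The role the unique identities play can be sketched very simply (a toy illustration under my own assumptions; the brain obviously doesn’t assign integer IDs, but any scheme that gives each pattern its own unique detector behaves analogously):

```python
# Toy sketch: each learned pattern gets a unique identity (here just a
# unique integer).  The three logical absolutes then fall out of the
# uniqueness of the assignment.

pattern_ids: dict[str, int] = {}

def identify(pattern: str) -> int:
    """Return the pattern's unique identity, assigning one if it's new."""
    if pattern not in pattern_ids:
        pattern_ids[pattern] = len(pattern_ids)  # fresh, never-reused ID
    return pattern_ids[pattern]

a, b = identify("A"), identify("B")

assert identify("A") == a  # law of identity: A = A
assert b != a              # law of non-contradiction: not-A is not A
# law of the excluded middle: any pattern, even a brand new one, either
# has A's identity or it doesn't; the encoding guarantees this trivially
assert all(identify(p) == a or identify(p) != a for p in ("A", "B", "C"))
```

Nothing here depends on the patterns being stored before learning; what’s innate is the wiring strategy that guarantees each stored pattern a unique identity.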

In short, if it is true that any and all forms of reasoning, as well as the ability to accumulate knowledge, require only logic and the recognition of causal patterns, and if the brain’s innate neurological configuration schema provides the starting foundation for both, then it would seem reasonable to conclude that the brain has at least some types of innate knowledge.