After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.
Personal Identity Concepts & Criteria
I think when most people talk about personal identity, they are referring to how they see themselves and how they see others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits. Basically, they see it as what makes a person unique and in some way distinguishable from another person. And even this rudimentary concept can be broken down into at least two parts, namely, how we see ourselves (self-ascribed identity) and how others see us (which we could call the inferred identity of someone else), since they are likely going to differ. While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person. Roughly speaking, this amounts to whatever conditions or properties are both necessary and sufficient for a person at one point in time and a person at another point in time to be considered the same person — with a temporal continuity between those points in time.
Usually the criterion put forward for this personal identity is supposed to be some form of spatiotemporal and/or psychological continuity. I certainly wouldn’t be the first person to point out that the question of which criterion is correct has already framed the debate with the assumption that a personal (numerical) identity exists in the first place, and that even if it did exist, the criterion for it would be determinable in some way. While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common with all persons we’ve interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity and a determinable criterion for it. At best, I think if one finds any kind of numerical identity for persons that persists over time, it is not going to be compatible with our intuitions nor is it going to be applicable in any pragmatic way.
As I mention pragmatism, I am sympathetic to Parfit’s views in the sense that regardless of what one finds the criteria for numerical personal identity to be (if it exists), the only thing that really matters to us is psychological continuity anyway. So despite the fact that Locke’s view — that psychological continuity (via memory) was the criterion for personal identity — was in fact shown to be based on circular and illogical arguments (per Butler, Reid and others), I nevertheless applaud his basic idea. Locke seemed to be on the right track, in that psychological continuity (in some sense involving memory and consciousness) is really the essence of what we care about when defining persons, even if it can’t be used as a valid criterion in the way he proposed.
(Non) Persistence & Pragmatic Use of a Personal Identity Concept
I think that the search for, and the long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity. The pragmatic considerations behind the common and intuitive conceptions of personal identity have no doubt steered the debate pertaining to any possible criteria for helping to define it, and so we can still value those considerations even if a numerical personal identity doesn’t really exist (that is, even if it is nothing more than a pseudo-identity) and even if a diachronic numerical personal identity does exist but isn’t useful in any way.
If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment of time both physically and psychologically, then I tend to think that the self (that many people ascribe as the “agent” of our personal identity) is an illusion of some sort. I tend to side more with Hume on this point (or at least James Giles’ fair interpretation of Hume) in that my views seem to be some version of a no-self or eliminativist theory of personal identity. As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination. This illusion results from our perceptions flowing from one to the next, with a barrage of changes taking place with this “self” over time that we simply don’t notice taking place — at least not without critical reflection on our past experiences of this ever-changing “self”. The psychological continuity that Locke described seems to be the main driving force behind this illusory self since there is an overlap in the memories of the succession of persons.
I think one could say that if there is any numerical identity that is associated with the term “I” or “me”, it only exists for a short moment of time in one specific spatio-temporal slice, and then as the next perceivable moment elapses, what used to be “I” will become someone else, even if the new person that comes into being is still referred to as “I” or “me” by a person that possesses roughly the same configuration of matter in its body and brain as the previous person. Since the neighboring identities have an overlap in accessible memory including autobiographical memories, memories of past experiences generally, and the memories pertaining to the evolving desires that motivate behavior, we shouldn’t expect this succession of persons to be noticed or perceived by the illusory self because each identity has access to a set of memories that is sufficiently similar to the set of memories accessible to the previous or successive identity. And this sufficient degree of similarity in those identities’ memories allows for a seemingly persistent autobiographical “self” with goals.
As for the pragmatic reasons for considering all of these “I”s and “me”s to be the same person and some singular identity over time, we can see that there is a causal dependency between each member of this “chain of spatio-temporal identities” that I think exists, and so treating that chain of interconnected identities as one being is extremely intuitive and also incredibly useful for accomplishing goals (which is likely the reason why evolution would favor brains that can intuit this concept of a persistent “self” and the near uni-directional behavior that results from it). There is a continuity of memory and behaviors (even though both change over time, both in terms of the number of memories and their accuracy) and this continuity allows for a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences. We behave as if we are a single person moving through time and space (and as if we are surrounded by other temporally extended single persons behaving in similar ways) and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent. Quite simply, having those different identities referenced under one label and physically attached to or instantiated by something localized allows that pragmatic pseudo-identity to persist over time so that various goals (whether personal or interpersonal/societal) can be accomplished.
“The Persons Problem” and a “Speciation” Analogy
I came up with an analogy that I think fits this concept well. One could analogize this succession of identities that get clumped into one bulk pragmatic-pseudo-identity with the evolutionary concept of speciation. For example, a sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species due to the high degree of similarity between them all. The apparent difficulty lies in the fact that, at some point after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations transpiring eventually leads to what we call a new species. It’s difficult to define exactly when that speciation event happens (hence the species problem), and we have a similar problem with personal identity I think. Where does it begin and end? If personal identity changes over the course of a lifetime, when does one person become another? I could think of “me” as the same “me” that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that “I” am a completely different person now when compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.). There seems to have been an identity “speciation” event of some sort even though it is hard to define exactly when that was.
Biologists have tried to solve their species problem by coming up with various criteria to help for taxonomical purposes at the very least, but what they’ve wound up with at this point is several different criteria for defining a species that are each effective for different purposes (e.g. biological-species concept, morpho-species concept, phylogenetic-species concept, etc.), and without any single “correct” answer since they are all situationally more or less useful. Similarly, some philosophers have had a persons problem that they’ve been trying to solve and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).
The Role of Information in a Personal Identity Concept
Anyway, rather than attempt to solve the numerical personal identity problem, I think that philosophers need to focus more on the importance of the concept of information and how it can be used to try and arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically). I think this is especially true once we take some of the concerns that James DiGiovanna brought up concerning the integration of future AI into our society.
If all of the beliefs, behaviors, and causal driving forces in a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework since they will be based more on the information content of the person’s identity without putting as much importance on numerical identity nor as much importance on our intuitions of persisting people (since they will be challenged by several kinds of foreseeable future AI scenarios).
To illustrate this point, I’ll address one of James DiGiovanna’s conundrums. James asks us:
To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?
In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI — other than to appease humans that may be angry that they couldn’t themselves avoid punishment in this way, due to having a much slower and less effective means of reprogramming themselves. I would say that the reprogrammed AI is guilty of the crime, but only if its reprogrammed memory still included information pertaining to having performed those past criminal behaviors. However, if those “criminal memories” are now gone via the reprogramming then I’d say that the AI is not guilty of the crime because the information constituting its identity doesn’t match that of the criminal AI. It would have no recollection of having committed the crime and so “it” would not have committed the crime since that “it” was lost in the reprogramming process due to the dramatic change in information that took place.
In the latter scenario, if the information constituting the identity of the destroyed AI was re-instantiated elsewhere, then I would say that it is in fact guilty of the crime — though it would not be numerically guilty of the crime but rather qualitatively guilty of the crime (to differentiate between the numerical and qualitative personal identity concepts that are embedded in the concept of guilt). If twelve duplicates of this information were instantiated into new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime. What actions should be taken based on qualitative guilt? I think it means that the AI should be punished or more specifically that the judicial system should perform the reconditioning required to modify their behavior as if it had committed the crime (especially if the AI believes/remembers that it has committed the crime), for the better of society. If this can be accomplished through reprogramming, then that would be the most rational thing to do without any need for traditional forms of punishment.
We can analogize this with another thought experiment involving human beings. If we imagine a human whose memories have been changed so that they believe they are Charles Manson and have all of Charles Manson’s memories and intentions, then that person should be treated as if they are Charles Manson and thus incarcerated/punished accordingly to rehabilitate them or protect the other members of society. This is assuming of course that we had reliable access to that kind of mind-reading knowledge. If we did, the information constituting the identity of that person would be what is most important — not what the actual previous actions of the person were — because the “previous person” was someone else, due to that gross change in information.
Your discussion of the illusion of personal identity is very interesting and thought provoking. Thank you for it.
Being an engineer, I naturally tried to frame the concept mathematically. At any time T, a “self” can be considered to be formed from a weighted integral over time from 0 to T of all previous selves (each with thoughts and physical experiences). The weighting would be inversely proportional to time such that the importance at a split second before time T would be quite large, dominating the result – but not completely dominating it. Parts of the integrand from decades ago still matter at time T. Now the next question in my mind is: what is time t = 0? All thoughts and physical influences of the parents matter in the formation of a self, and those of the grandparents to the parents, and so forth. So t = 0 would really be the beginning of time (big bang) and one’s “self” at time T is in some way influenced by all parts of the evolutionary universe (including all predecessor species to humans).
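As a rough sketch of this idea (the notation here is only illustrative, and the functional form of the weighting is just one possibility), the picture might be written:

```latex
% S(T): the "self" at time T
% s(t): the momentary thoughts and physical experiences at time t
% w(T, t): a weighting that grows large as t approaches T
S(T) = \int_0^T w(T, t)\, s(t)\, dt,
\qquad \text{e.g.}\quad w(T, t) \propto \frac{1}{T - t + \epsilon}
\quad (\epsilon > 0 \text{ small})
```

Any weighting that increases as t nears T captures the intent: the most recent moments dominate the result without completely determining it, while contributions from decades ago still matter.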
And then, an instant after time T, a new self exists that is slightly different than the self that previously existed at time T. In fact, every part of the universe is slightly different than it was a split second earlier. Even at the atomic level things change (large atoms decay, small atoms fuse, radiation changes electron states, etc.). The personal self is continually changing with the passage of time along with all parts of the universe. As you discuss, personal and societal goals are attained by change. This concept of how the self continually changes with time is quite profound to me. The thoughts and physical experiences of a self at time T will have an influence in shaping what kind of self will exist at time T+∆T. As long as a goal is realistic (which may be hard to determine) an individual can shape how the “self” changes over time by choosing the right thoughts, actions and physical experiences. Of course, in any endeavor, knowing what to think and do at any time to help shape a desired change is not easy. But not only is change possible, as attributed to Heraclitus, it is the only constant in life.
Thank you again. I’m looking forward to your next discussion.
Thanks for reading it and commenting!
While I like your thoughts and analysis on the concept of self as it pertains to integrating change over time, I would say that the self starts as soon as the brain forms and begins to process neural information at least to a point where a concept of self exists. Thus, it would start long after conception, with some minimal level of neuronal development required. So I wouldn’t put T=0 back to the Big Bang but rather back to when a person’s brain first started processing information about its “self”. This is to keep the concept of “identity” and the “self” more pragmatically useful based on what we tend to focus on when using those terms.
I do agree though that within your description, the weighting is highest the closer one is to T-final (i.e. farther from T=0).
My personal views on the matter have to take classical free will into account, where there’s no logical possibility of our being able to freely choose to have done anything differently in the past, barring randomness. So we can’t choose our thoughts at any moment, even though environmental inputs and our evolving neurology continue to modify us over time and can program us in ways that allow us to prioritize some action or thought over others. But in these cases, there’s no free choice in the matter because it’s all either determined or random at the most fundamental level. Fascinating nevertheless.
Indeed. As long as time exists, change will exist because it is a necessary component of time, the passage of time, etc.
I absolutely agree with you that the “self” starts when an individual brain reaches some threshold in its ability to process information. Initially, I thought that event should be time = 0 in my integral. But the particular self that emerges within a brain depends on its particular ability to process information. That ability depends on its structural organization, which in turn depends on its particular evolutionary history. So thinking about the evolutionary history of any particular brain I started pushing the lower time limit back. And once I started pushing it back, there was no stopping! Overthinking an idea is dangerous.
I have read some of your thoughts on free will elsewhere in your web site (which is marvelous, by the way). I can’t quite grasp the concept of having no free will, so that will take some pondering and re-reading. Thanks for your comments.
I do think it’s important to recognize the causal chain that stretches all the way back to the Big Bang (for a number of reasons), so it’s definitely useful in terms of the over-arching principles that govern the evolution of the entire universe (even if it doesn’t have as much relevance when it comes to personal identity).
As for free will, it just so happens that the causal chain beginning with the Big Bang is exactly that which negates free will, as per my previous comments. One can deduce from logical syllogism that free will in the libertarian sense (i.e. the classical causa sui sense) is impossible. First one must recognize that the universe must either be ontologically deterministic or indeterministic, as this is a logical dichotomy. Libertarian free will can’t exist if the universe is deterministic, and it can’t exist if the universe is indeterministic/random (ontologically speaking); therefore libertarian free will can’t exist.
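Spelled out formally, the syllogism is a simple constructive dilemma. (The symbols are only illustrative shorthand: let D stand for “the universe is ontologically deterministic” and F for “libertarian free will exists”.)

```latex
\begin{align*}
&\text{P1 (dichotomy):} && D \lor \neg D \\
&\text{P2 (no libertarian freedom under determinism):} && D \to \neg F \\
&\text{P3 (no libertarian freedom under randomness):} && \neg D \to \neg F \\
&\text{C (constructive dilemma):} && \therefore\; \neg F
\end{align*}
```

Either horn of the dichotomy leads to the same conclusion, which is what makes the argument valid regardless of which way the physics turns out.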
Now philosophers like Dennett will say that they are compatibilists, and that free will exists, but they don’t mean “free will” in the libertarian sense. Instead they believe that free will exists in the only sense that is worth talking about in everyday language, that is, that we are autonomous human beings that are capable of making decisions and therefore have moral responsibility and so forth. I agree with Dennett, which is why I tend to have a more elaborate definition of free will that I use when I use the term in a compatibilist sense. In that case, I define it as “decision-making processes that have a minimum threshold of information-processing complexity and the behavioral degrees of freedom that follow from it”. This kind of definition then allows us to quantify the degrees of free will that different people have. So a psychopath who has a brain tumor near their amygdala that inclines them to perform a mass shooting of children at a school can be said to have less free will than you or I (presumably), as their brain’s programming has limited the degrees of freedom they have given some set of inputs. That is to say, the psychopath in this example is less morally responsible for their actions than you or I because they have less of an ability to reprogram themselves to behave differently and their decision-making complexity is relatively lower, more easily predictable, etc. The same can be said of children and of most non-human animals, such as dogs, which are less able to control emotional responses and have less complex decision-making processes underlying their behavior. Therefore we hold them less morally responsible for their actions because we know the limitations of their cognitive programming and so forth. So free will exists in the sense that we have decision-making algorithms with some (as yet undefined) minimal amount of complexity, and in the sense that we can be reprogrammed with incentives such as punishment and reward.
Therefore we should implement whatever behavioral modifications are necessary to accomplish reprogramming those with any degree of free will. It seems to me that this sort of folk psychological term of free will that we use in the everyday sense or in a court of law becomes useful as we implement correctional strategies for criminals (as ineffective as they can be), tax penalties and incentives for citizens, and other forms of punishment/reward systems that are effective at reprogramming people to behave in ways that conform to societal expectations. If their decision making complexity is high enough and if they can be reprogrammed with punishment and reward, for example, then we tend to describe them as having free will.
But to return back to the beginning, when religious proponents and others use the term “free will”, they tend to mean it in the libertarian sense, believing that we “could have chosen to behave differently in the past, given the exact same initial conditions” and therefore we should be punished or rewarded for eternity based on those actions (even though reprogramming isn’t a goal in that case). It is this kind of free will that is logically impossible. It’s hard for some people to accept this, but if they break down the concept logically, the conclusion is inevitable. When they’re reminded that nihilism isn’t going to result from this since they can still accept that a type of “free will” exists as per Dennett and my previous definition, then it’s easier to accept. But it is counter-intuitive to say the least, because we feel that we make every decision in a way that is free of a causal chain and authored by us in some magic causa sui way. That’s not the first time our basic human intuitions have led many of us down a path of delusion and irrational beliefs, and it certainly won’t be the last. Sometimes our intuitions are wrong and we must strive to combat our cognitive biases and adjust our beliefs accordingly in light of logical arguments and evidence. To attempt to believe as many true things and as few false things as possible. That should be everyone’s goal right? I wish it was. Anyway, always a fascinating topic and I appreciate the discussion and look forward to continuing it or having new ones in the future Jim.
I agree that “libertarian” free will makes no sense. The decisions we make are in large part dominated by the way our brains are programmed to process information. The phrasing of my original comment about shaping the “self” by “choosing the right thoughts, actions and physical experiences” does sound like libertarian free will. So your comment is well taken. What I was thinking about was the ability of the brain to think prospectively: to imagine a future reward and then make the necessary decisions to attain it. This is nothing more than setting a goal and working to meet that goal; and the goals that we set and the decisions we make are largely determined by how our brains are programmed. I think that fits with your characterization of the every-day decision-making type of free will, not the libertarian type. It also fits with your discussion of using rewards/punishment to help reprogram the brain of a criminal. We want that individual to think more prospectively.
What I consider profound in this discussion is the “FACT OF” constant, incremental change. Like it or not, in T+10 years I WILL have a new personal identity. It might be very similar to my current personal identity, but it will be different. Many of the differences of my “self” at T+10Y may be due to processes outside my control, such as aging of cells, random accidents, sudden onset of illness, etc. But some of the differences will be due to every-day decision making (what foodstuffs do I eat, what activities should I do, etc.). Thinking prospectively and setting goals has an influence on what decisions we make on a day-to-day basis. The accumulated effects of all the little decisions we make over the course of time could lead to a significantly changed personal identity. The rub in all this is that quantifying any change in personal identity is subjective and susceptible to the imperfections of memory. A perceived change in personal identity may be as illusory as a perceived constancy in personal identity.
Anyway, I agree that this is a very interesting subject: part philosophy, part brain science, part psychology; all subjects that I know little about. But that never stops me from having an opinion!
I agree with all these points, for sure. And yes it is interesting to say the least! I enjoy dabbling in all these subjects. It’s always rewarding to find new epistemological connections between one field and another to see how they fit together in the big picture.
Oops, I misspoke in my earlier comment. I meant that the weighting function would be proportional to time, not inversely proportional. As time increases closer and closer to T, the weighting on the kernel also increases. Sorry about that.
No worries. I’m glad you are meticulous in clarifying your point of view in any case. Cheers!