The WikiLeaks Conundrum

I’ve been thinking a lot about WikiLeaks over the last year, especially given the consequences that have ensued with respect to the 2016 presidential election.  In particular, I’ve been thinking about the trade-offs that underlie any platform centered on publishing secret or classified information, news leaks, and the like.  I’m torn over whether these kinds of platforms provide a net good for society, so I decided to write a blog post outlining my concerns through a brief analysis.

Make no mistake: I appreciate that there are people in the world who work hard, often at huge risk to their own safety, to deliver any number of secrets to the general public, whether governmental, political, or corporate.  And this is by no means exclusive to WikiLeaks; it also applies to similar organizations and even individual whistle-blowers like Edward Snowden.  In many cases, the information that is leaked to the public is vitally important, informing us about some magnate’s personal corruption, various forms of systemic corruption, or even outright violations of our constitutional rights (such as the NSA violating our right to privacy as outlined in the Fourth Amendment).

While the public tends to highly value the increased transparency that these kinds of leaks offer, they also open us up to a number of vulnerabilities.  One prominent example that illustrates some of these vulnerabilities is the influence on the 2016 presidential election resulting from the Clinton email leaks and the leaks pertaining to the DNC.  One might ask how exactly those leaks could have been a bad thing for the public.  After all, they simply increased transparency and gave the public information that most of us felt we had a right to know.  Unfortunately, it’s much more complicated than that, as it can be difficult to know where to draw the line in terms of what should or should not be public knowledge.

To illustrate this point, imagine that you are a foreign or domestic entity that is highly capable of hacking.  Now imagine that you stand to gain an immense amount of land, money, or power if a particular candidate in a foreign or domestic election is elected, because you know their current reach of power, their behavioral tendencies, their public or private ties to other magnates, and the kinds of policies they are likely to enact based on their public pronouncements in the media and their advertised campaign platform.  If you have the ability to hack into private information from every pertinent candidate and/or political party involved in that election, then you likely know secrets about the candidate whose win would benefit you (including their perspective of you as a foreign or domestic entity, and/or damning things about them that you could use as leverage to bribe them after they are elected).  But you also likely know damning things that could cripple the opposing candidate’s chances of being elected.

This point illustrates the following conundrum: while WikiLeaks can deliver important information to the public, it can also be used as a platform for malicious entities to influence our elections, to jeopardize our national or international security, or to cause any number of problems through “selective” sharing.  That is to say, such entities may have plenty of information that would be damning to both opposing political parties, yet choose to deliver only half the story because of an underlying agenda to influence the election outcome.  This creates an obvious problem, not least because the public doesn’t consider the amount of hacked or leaked information that they didn’t get.  Instead, they think they’ve simply become better informed about a political candidate or some policy issue, when in fact their judgment has been compromised: they’ve received a hyper-biased leak, one given to them intentionally to mislead them, even though its contents may in fact be true.  When people aren’t able to put the new information into proper context or perspective, that new information can actually make them less informed.  That is to say, the new information can become an epistemological liability, because it unknowingly distorts the facts, leading people to behave in ways they otherwise would not have if they’d only had a few more pertinent details.
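To make the “epistemological liability” point more concrete, here is a minimal sketch (in Python, with made-up numbers and a deliberately simplified Bayesian observer) of how a series of individually true but selectively chosen leaks can push a rational person’s belief far in one direction:

```python
# Hypothetical scenario: a hacker holds damning-but-true documents on BOTH
# candidates, yet releases only those concerning candidate A.  An observer
# who updates on each leak via Bayes' rule grows confident that A is the
# more corrupt candidate, even though every leaked document is true.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Convert the prior to odds, apply the likelihood ratio, convert back."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

belief = 0.5  # observer starts at 50/50 on "A is more corrupt than B"

# Assume each damning leak about A is 3x more likely in the world where A
# really is the more corrupt candidate (the ratio of 3 is a made-up number).
for leak in range(1, 6):  # five selectively chosen leaks, all about A
    belief = bayes_update(belief, 3.0)
    print(f"after leak {leak}: P(A more corrupt) = {belief:.3f}")

# Final belief is roughly 0.996, even though an equal pile of unreleased
# documents would have implicated candidate B just as strongly.
```

Note that each individual update is perfectly rational; the distortion comes entirely from the documents the observer never sees.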

So now we have to ask ourselves: what can we do about this?  Should we just scrap WikiLeaks?  I don’t think that’s necessary, nor do I think it would be feasible even if we wanted to, since it would likely just be replaced by any number of other entities accomplishing the same ends (or it would become delocalized and revert to a bunch of disconnected sources).  Should we assume all leaked information has been leaked to serve some malicious agenda?

Well, a good dose of healthy skepticism could be a part of the solution.  We don’t want to be irrationally skeptical of any and all leaks, but it would make sense to apply more scrutiny when it’s apparent that a leak could serve a malicious purpose.  This means we need to be deeply concerned about this unless or until hacking becomes so common that the number of leaks crosses a threshold beyond which it’s no longer pragmatically possible to share them selectively in order to accomplish these kinds of half-truth-driven political agendas.  Until that point is reached, if it’s ever reached given the arms race between encryption and hacking, we will have to question every seemingly important leak and work hard to make the public at large understand these concerns and take them seriously.  It’s too easy for the majority to be distracted by the proverbial carrot dangling in front of them, failing to realize that it may be some form of politically motivated bait.  In the meantime, we need to open up the conversation surrounding this issue and look into possible solutions to help mitigate our concerns.  Perhaps we’ll start seeing organizations that can better vet the sources of these leaks, or that can better analyze their immediate effects on the global economy, elections, and so on, before deciding whether or not to release the information to the public.  This won’t be an easy task.

This brings me to my last point, which is that I don’t think people have a fundamental right to know every piece of information that’s out there.  If someone found a way to make a nuclear bomb using household ingredients, should that be public information?  Don’t people understand that many pieces of information are kept private or classified because that’s the only way some organizations can function, including organizations that strive to maintain or increase national and international security?  Do people want all information to be public even if it comes at the expense of creating humanitarian crises, or of further consolidating power in the hands of select plutocrats?  There’s often debate over the trade-off between giving up our personal privacy and increasing our safety.  Now the time has come to ask whether giving up some forms of privacy or secrecy on larger scales (whether we like it or not) is actually detracting from our safety or putting our democracy in jeopardy.

The Illusion of Persistent Identity & the Role of Information in Identity

After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.

Personal Identity Concepts & Criteria

I think when most people talk about personal identity, they are referring to how they see themselves and how they see others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits.  Basically, they see it as what makes a person unique and in some way distinguishable from another person.  Even this rudimentary concept can be broken down into at least two parts, namely, how we see ourselves (self-ascribed identity) and how others see us (which we could call the inferred identity of someone else), since the two are likely to differ.  While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person.  Roughly speaking, this amounts to whatever conditions or properties are both necessary and sufficient for a person at one point in time and a person at another point in time to be considered the same person, with a temporal continuity between those points in time.

Usually the criterion put forward for this personal identity is some form of spatiotemporal and/or psychological continuity.  I certainly wouldn’t be the first person to point out that the question of which criterion is correct already frames the debate with the assumption that a personal (numerical) identity exists in the first place, and that, even if it did exist, its criterion would be determinable in some way.  While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common with all persons we’ve interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity, along with a determinable criterion for it.  At best, I think that if one finds any kind of numerical identity for persons that persists over time, it is not going to be compatible with our intuitions, nor is it going to be applicable in any pragmatic way.

As I mention pragmatism, I am sympathetic to Parfit’s view that, regardless of what one finds the criterion for numerical personal identity to be (if it exists), the only thing that really matters to us is psychological continuity anyway.  So despite the fact that Locke’s view, that psychological continuity (via memory) is the criterion for personal identity, was shown to rest on circular and illogical arguments (per Butler, Reid, and others), I nevertheless applaud his basic idea.  Locke seemed to be on the right track, in that psychological continuity (in some sense involving memory and consciousness) is really the essence of what we care about when defining persons, even if it can’t be used as a valid criterion in the way he proposed.

(Non) Persistence & Pragmatic Use of a Personal Identity Concept

I think that the search for, and the long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity.  The pragmatic considerations behind our common and intuitive conceptions of personal identity have no doubt steered the debate over any possible criteria for defining it, and so we can still value those considerations even if a numerical personal identity doesn’t really exist (that is, even if it is nothing more than a pseudo-identity), and even if a diachronic numerical personal identity does exist but isn’t useful in any way.

If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment of time, both physically and psychologically, then I tend to think that the self (which many people take to be the “agent” of our personal identity) is an illusion of some sort.  I side more with Hume on this point (or at least with James Giles’ fair interpretation of Hume), in that my views seem to be some version of a no-self or eliminativist theory of personal identity.  As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination.  This illusion results from our perceptions flowing from one to the next, with a barrage of changes taking place in this “self” over time that we simply don’t notice, at least not without critical reflection on our past experiences of this ever-changing “self”.  The psychological continuity that Locke described seems to be the main driving force behind this illusory self, since there is an overlap in the memories of the succession of persons.

I think one could say that if there is any numerical identity associated with the term “I” or “me”, it exists only for a short moment of time in one specific spatiotemporal slice; as the next perceivable moment elapses, what used to be “I” becomes someone else, even if the new person that comes into being is still referred to as “I” or “me” by a person possessing roughly the same configuration of matter in its body and brain as the previous person.  Since neighboring identities have an overlap in accessible memory, including autobiographical memories, memories of past experiences generally, and memories pertaining to the evolving desires that motivate behavior, we shouldn’t expect this succession of persons to be noticed or perceived by the illusory self: each identity has access to a set of memories that is sufficiently similar to the set accessible to the previous or successive identity.  And this sufficient degree of similarity in those identities’ memories allows for a seemingly persistent autobiographical “self” with goals.

As for the pragmatic reasons for considering all of these “I”s and “me”s to be the same person, some singular identity over time, we can see that there is a causal dependency between each member of this “chain of spatiotemporal identities”, and so treating that chain of interconnected identities as one being is extremely intuitive and also incredibly useful for accomplishing goals (which is likely why evolution would favor brains that can intuit this concept of a persistent “self” and the near uni-directional behavior that results from it).  There is a continuity of memory and behavior (even though both change over time, in terms of both the number of memories and their accuracy), and this continuity allows a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences.  We behave as if we are a single person moving through time and space (and as if we are surrounded by other temporally extended single persons behaving in similar ways), and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent.  Quite simply, referencing those different identities under one label, physically attached to or instantiated by something localized, allows that pragmatic pseudo-identity to persist over time so that various goals (whether personal or interpersonal/societal) can be accomplished.

“The Persons Problem” and a “Speciation” Analogy

I came up with an analogy that I think fits this concept well.  One could analogize this succession of identities, clumped into one bulk pragmatic pseudo-identity, with the evolutionary concept of speciation.  A sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species, due to the high degree of similarity between all of the members.  The apparent difficulty lies in the fact that, after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly, to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations eventually lead to what we call a new species.  It’s difficult to define exactly when a speciation event happens (hence the species problem), and I think we have a similar problem with personal identity.  Where does it begin and end?  If personal identity changes over the course of a lifetime, when does one person become another?  I can think of “me” as the same “me” that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that “I” am a completely different person now when compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.).  There seems to have been an identity “speciation” event of some sort, even though it is hard to define exactly when it occurred.
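To make the fuzzy boundary vivid, here is a toy sketch (in Python, with “memory sets” standing in for psychological content and Jaccard similarity as an assumed measure of overlap) in which every neighboring pair of identities is nearly the same person, yet the endpoints of the chain share nothing:

```python
# Toy model of the identity "speciation" analogy: each time-slice person is
# a set of memories; each successor loses one old memory and gains one new
# experience.  Neighbors overlap heavily; distant identities barely at all.

def jaccard(a: set, b: set) -> float:
    """Overlap between two memory sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

chain = [set(range(20))]        # the person at t=0 holds memories {0,...,19}
for t in range(1, 21):
    successor = set(chain[-1])
    successor.discard(min(successor))  # one old memory fades away...
    successor.add(19 + t)              # ...and one new experience is added
    chain.append(successor)

print(f"neighbor similarity: {jaccard(chain[0], chain[1]):.2f}")   # ~0.90
print(f"endpoint similarity: {jaccard(chain[0], chain[-1]):.2f}")  # 0.00
```

No single step in the chain marks “the” moment one person became another, just as no single generation marks a speciation event.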

Biologists have tried to solve their species problem by coming up with various criteria, at the very least for taxonomic purposes, but what they’ve wound up with at this point is several different concepts of a species, each effective for different purposes (e.g., the biological species concept, the morphological species concept, the phylogenetic species concept, etc.), with no single “correct” answer, since they are all more or less useful depending on the situation.  Similarly, some philosophers have had a persons problem that they’ve been trying to solve, and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).

The Role of Information in a Personal Identity Concept

Anyway, rather than attempt to solve the numerical personal identity problem, I think philosophers need to focus more on the concept of information and how it can be used to arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically).  I think this is especially true once we take into account some of the concerns that James DiGiovanna brought up concerning the integration of future AI into our society.
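Incidentally, the distinction between copied information and its distinct instantiations maps loosely onto the difference between value equality and object identity in a programming language.  A minimal Python illustration (the dictionary standing in for the information constituting an identity is made up):

```python
import copy

# Copied information is qualitatively identical (same content) but
# numerically distinct (a separate instantiation of that content).

original = {"beliefs": ["b1", "b2"], "memories": ["m1"], "goals": ["g1"]}
duplicate = copy.deepcopy(original)   # re-instantiate the same information

print(original == duplicate)  # True:  same content (qualitative identity)
print(original is duplicate)  # False: two objects (numerically distinct)
```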

If all of the beliefs, behaviors, and causal driving forces in a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework.  These principles would be based more on the information content of the person’s identity, placing less importance on numerical identity and on our intuitions about persisting persons (both of which will be challenged by several kinds of foreseeable future AI scenarios).

To illustrate this point, I’ll address one of James DiGiovanna’s conundrums.  James asks us:

To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?

In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI, other than to appease humans who may be angry that they couldn’t avoid punishment this way themselves, having a much slower and less effective means of reprogramming their own minds.  I would say that the reprogrammed AI is guilty of the crime only if its reprogrammed memory still includes information pertaining to having performed those past criminal behaviors.  If those “criminal memories” are gone after the reprogramming, then I’d say the AI is not guilty of the crime, because the information constituting its identity doesn’t match that of the criminal AI.  It would have no recollection of having committed the crime, and so “it” would not have committed the crime: that “it” was lost in the reprogramming process due to the dramatic change in information that took place.

In the latter scenario, if the information constituting the identity of the destroyed AI were re-instantiated elsewhere, then I would say that it is in fact guilty of the crime, though not numerically guilty but rather qualitatively guilty (to differentiate between the numerical and qualitative personal identity concepts embedded in the concept of guilt).  If twelve duplicates of this information were instantiated into new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime.  What actions should be taken based on qualitative guilt?  I think it means that the AI should be punished, or more specifically that the judicial system should perform the reconditioning required to modify its behavior as if it had committed the crime (especially if the AI believes/remembers that it committed the crime), for the better of society.  If this can be accomplished through reprogramming, then that would be the most rational thing to do, without any need for traditional forms of punishment.
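One way to make this criterion explicit is a hypothetical formalization in which qualitative guilt tracks how much of the criminal’s identity-constituting information an agent still instantiates (the set representation and the 0.9 cutoff are both assumptions made for the sake of illustration):

```python
# Hypothetical formalization: guilt tracks the information constituting an
# identity, not the hardware it runs on.

def qualitatively_guilty(agent_info: frozenset,
                         criminal_info: frozenset,
                         threshold: float = 0.9) -> bool:
    """Guilty iff the agent still instantiates enough of the criminal's
    identity-constituting information (the threshold is an assumed cutoff)."""
    overlap = len(agent_info & criminal_info) / len(criminal_info)
    return overlap >= threshold

criminal  = frozenset({"memory:crime", "goal:evade", "trait:deceptive"})
reformed  = frozenset({"goal:restitution", "trait:honest"})  # rewritten AI
duplicate = frozenset(criminal)  # exact re-instantiation of the information

print(qualitatively_guilty(reformed, criminal))   # False: information changed
print(qualitatively_guilty(duplicate, criminal))  # True: same information, so
                                                  # all twelve copies are guilty
```

On this sketch, the fully rewritten AI comes out not guilty while every duplicate comes out qualitatively guilty, matching the verdicts above.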

We can analogize this with another thought experiment involving human beings.  If we imagine a human who has had their memories changed so that they believe they are Charles Manson and have all of Charles Manson’s memories and intentions, then that person should be treated as if they are Charles Manson, and thus incarcerated/punished accordingly, in order to rehabilitate them or protect the other members of society.  This assumes, of course, that we had reliable access to that kind of mind-reading knowledge.  If we did, the information constituting the identity of that person would be what is most important, not the actual previous actions of the person, because the “previous person” was someone else, due to that gross change in information.