The Open Mind

Cogito Ergo Sum


The WikiLeaks Conundrum


I’ve been thinking a lot about WikiLeaks over the last year, especially given the consequences that have ensued with respect to the 2016 presidential election.  In particular, I’ve been thinking about the trade-offs that underlie any platform centered on publishing secret or classified information, news leaks, and the like.  I’m torn over whether these kinds of platforms provide a net good for society, so I decided to write a blog post outlining my concerns through a brief analysis.

Make no mistake: I appreciate that there are people in the world who work hard, often at huge risk to their own safety, to deliver any number of secrets to the general public, whether governmental, political, or corporate.  And this is by no means exclusive to WikiLeaks; it also applies to similar organizations and even to individual whistle-blowers like Edward Snowden.  In many cases, the information that is leaked to the public is vitally important to inform us about some magnate’s personal corruption, various forms of systemic corruption, or even outright violations of our constitutional rights (such as the NSA violating our right to privacy as outlined in the Fourth Amendment).

While the public tends to highly value the increased transparency that these kinds of leaks offer, they also open us up to a number of vulnerabilities.  One prominent example that illustrates some of these vulnerabilities is the influence on the 2016 presidential election resulting from the Clinton email leaks and the leaks pertaining to the DNC.  One might ask how exactly those leaks could have been a bad thing for the public.  After all, they increased transparency and gave the public information that most of us felt we had a right to know.  Unfortunately, it’s much more complicated than that, beyond the fact that it can be difficult to know where to draw the line between what should and should not be public knowledge.

To illustrate this point, imagine that you are a foreign or domestic entity that is highly capable of hacking.  Now imagine that you stand to gain an immense amount of land, money, or power if a particular political candidate in a foreign or domestic election is elected, because you know about their current reach of power and their behavioral tendencies, their public or private ties to other magnates, and the kinds of policies they are likely to enact based on their public pronouncements in the media and their advertised campaign platform.  If you have the ability to hack into private information from every pertinent candidate and political party involved in that election, then you likely know secrets about the candidate whose victory would benefit you (including their perspective of you as a foreign or domestic entity, and damning things about them that you could use as leverage to bribe them after they’re elected), and you likely also know damning things that could cripple the opposing candidate’s chances of being elected.

This point illustrates the following conundrum: while WikiLeaks can deliver important information to the public, it can also be used as a platform for malicious entities to influence our elections, to jeopardize our national or international security, or to cause any number of problems through “selective” sharing.  That is to say, a malicious actor may have plenty of information that would be damning to both opposing political parties, but may choose to deliver only half the story because of an underlying agenda to influence the election’s outcome.  This creates an obvious problem, not least because the public doesn’t consider the hacked or leaked information that they didn’t get.  Instead they think they’ve simply become better informed about a political candidate or some policy issue, when in fact their judgment has been compromised: they’ve received a hyper-biased leak, one given to them intentionally to mislead them, even though its contents may in fact be true.  When people aren’t able to put new information in the proper context or perspective, that new information can actually make them less informed.  That is to say, it can become an epistemological liability, because it unknowingly distorts the facts, leading people to behave in ways they otherwise wouldn’t have if they’d only had a few more pertinent details.
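To make this selective-sharing problem concrete, here is a minimal simulation sketch in Python (my own toy model with made-up numbers, not data from any real leak).  Both parties are assumed, purely for illustration, to be equally compromised; the leaker releases only one side’s most damaging documents, and the public’s naive comparison is skewed even though every released document is true:

import random

random.seed(42)

# Hypothetical "damage" scores for each party's private documents.
docs_party_a = [random.uniform(0, 1) for _ in range(100)]
docs_party_b = [random.uniform(0, 1) for _ in range(100)]

# A malicious leaker releases only party A's 20 most damaging documents.
released = sorted(docs_party_a, reverse=True)[:20]

# The public's naive inference, based only on what was released:
public_view_a = sum(released) / len(released)
public_view_b = 0.0  # nothing released, so party B looks clean by omission

# Ground truth the public never sees:
truth_a = sum(docs_party_a) / len(docs_party_a)
truth_b = sum(docs_party_b) / len(docs_party_b)

print(f"public's view:  A={public_view_a:.2f}, B={public_view_b:.2f}")
print(f"ground truth:   A={truth_a:.2f}, B={truth_b:.2f}")
# Every released document is true, yet the public's comparison of the
# two parties is wildly distorted by the selection.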

So now we have to ask ourselves: what can we do about this?  Should we just scrap WikiLeaks?  I don’t think that’s necessary, nor do I think it’s feasible even if we wanted to, since it would likely just be replaced by any number of other entities that would accomplish the same ends (or it would become decentralized and go back to a bunch of disconnected sources).  Should we assume all leaked information has been leaked to serve some malicious agenda?

Well, a good dose of healthy skepticism could be part of the solution.  We don’t want to be irrationally skeptical of any and all leaks, but it would make sense to apply more scrutiny when it’s apparent that a leak could serve a malicious purpose.  This means we need to be deeply concerned about this unless or until we reach a point in time where hacking is so common that the number of leaks passes a threshold beyond which it’s no longer pragmatically possible to share them selectively to accomplish these kinds of half-truth-driven political agendas.  Until that point is reached, if it’s ever reached given the arms race between encryption and hacking, we will have to question every seemingly important leak and work hard to make the public at large understand these concerns and take them seriously.  It’s too easy for the majority to be distracted by the proverbial carrot dangling in front of them, failing to realize that it may be some form of politically motivated bait.  In the meantime, we need to open up the conversation surrounding this issue and look into possible solutions to help mitigate our concerns.  Perhaps we’ll start seeing organizations that can better vet the sources of these leaks, or that can better analyze their immediate effects on the global economy, elections, etc., before deciding whether or not to release the information to the public.  This won’t be an easy task.

This brings me to my last point: I don’t think people have a fundamental right to know every piece of information that’s out there.  If someone found a way to make a nuclear bomb using household ingredients, should that be public information?  Don’t people understand that many pieces of information are kept private or classified because that’s the only way some organizations can function, including organizations that strive to maintain or increase national and international security?  Do people want all information to be public even if it comes at the expense of creating humanitarian crises, or of further consolidating power in the hands of select plutocrats?  There’s often debate over the trade-offs involved in giving up our personal privacy to increase our safety.  Now the time has come to ask whether giving up some forms of privacy or secrecy on larger scales (whether we like it or not) is actually detracting from our safety or putting our democracy in jeopardy.

The illusion of Persistent Identity & the Role of Information in Identity


After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.

Personal Identity Concepts & Criteria

I think when most people talk about personal identity, they are referring to how they see themselves and how they see others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits.  Basically, they see it as what makes a person unique and in some way distinguishable from another person.  Even this rudimentary concept can be broken down into at least two parts, namely, how we see ourselves (self-ascribed identity) and how others see us (which we could call the inferred identity of someone else), since the two are likely to differ.  While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person.  Roughly speaking, this amounts to whatever conditions or properties are both necessary and sufficient for a person at one point in time and a person at another point in time to be considered the same person, with a temporal continuity between those points in time.

Usually the criterion put forward for this personal identity is some form of spatiotemporal and/or psychological continuity.  I certainly wouldn’t be the first person to point out that the question of which criterion is correct has already framed the debate with the assumption that a personal (numerical) identity exists in the first place, and that, even if it did exist, its criterion would be determinable in some way.  While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common among all the persons we’ve interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity, along with a determinable criterion for it.  At best, I think that if one finds any kind of numerical identity for persons that persists over time, it is not going to be compatible with our intuitions, nor is it going to be applicable in any pragmatic way.

As I mention pragmatism, I am sympathetic to Parfit’s views in the sense that regardless of what one finds the criterion for numerical personal identity to be (if it exists), the only thing that really matters to us is psychological continuity anyway.  So despite the fact that Locke’s view, that psychological continuity (via memory) is the criterion for personal identity, was shown to be based on circular and illogical arguments (per Butler, Reid, and others), I nevertheless applaud his basic idea.  Locke seemed to be on the right track, in that psychological continuity (in some sense involving memory and consciousness) is really the essence of what we care about when defining persons, even if it can’t be used as a valid criterion in the way he proposed.

(Non) Persistence & Pragmatic Use of a Personal Identity Concept

I think that the search for, and long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity.  The pragmatic considerations behind the common and intuitive conceptions of personal identity have no doubt steered the debate over any possible criteria for helping to define it, and so we can still value those considerations even if a numerical personal identity doesn’t really exist (that is, even if it is nothing more than a pseudo-identity), and even if a diachronic numerical personal identity does exist but isn’t useful in any way.

If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment of time both physically and psychologically, then I tend to think that the self (that many people ascribe as the “agent” of our personal identity) is an illusion of some sort.  I tend to side more with Hume on this point (or at least James Giles’ fair interpretation of Hume) in that my views seem to be some version of a no-self or eliminativist theory of personal identity.  As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination.  This illusion results from our perceptions flowing from one to the next, with a barrage of changes taking place with this “self” over time that we simply don’t notice taking place — at least not without critical reflection on our past experiences of this ever-changing “self”.  The psychological continuity that Locke described seems to be the main driving force behind this illusory self since there is an overlap in the memories of the succession of persons.

I think one could say that if there is any numerical identity associated with the term “I” or “me”, it exists only for a brief moment of time in one specific spatio-temporal slice.  As the next perceivable moment elapses, what used to be “I” becomes someone else, even if the new person that comes into being is still referred to as “I” or “me” by a person possessing roughly the same configuration of matter in its body and brain as the previous person.  Since neighboring identities have an overlap in accessible memory, including autobiographical memories, memories of past experiences generally, and memories pertaining to the evolving desires that motivate behavior, we shouldn’t expect this succession of persons to be noticed or perceived by the illusory self: each identity has access to a set of memories that is sufficiently similar to the set accessible to the previous or successive identity.  And this sufficient degree of similarity in those identities’ memories allows for a seemingly persistent autobiographical “self” with goals.

As for the pragmatic reasons for considering all of these “I”s and “me”s to be the same person, some singular identity over time, we can see that there is a causal dependency between each member of this “chain of spatio-temporal identities”, and so treating that chain of interconnected identities as one being is extremely intuitive and also incredibly useful for accomplishing goals (which is likely the reason why evolution would favor brains that can intuit this concept of a persistent “self” and the nearly uni-directional behavior that results from it).  There is a continuity of memory and behaviors (even though both change over time, in terms of both the number of memories and their accuracy), and this continuity allows for a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences.  We behave as if we are a single person moving through time and space (and as if we are surrounded by other temporally extended single persons behaving in similar ways), and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent.  Quite simply, having those different identities referenced under one label and physically attached to or instantiated by something localized allows that pragmatic pseudo-identity to persist over time so that various goals (whether personal or interpersonal/societal) can be accomplished.

“The Persons Problem” and a “Speciation” Analogy

I came up with an analogy that I thought fit this concept very well.  One could analogize this succession of identities that get clumped into one bulk pragmatic pseudo-identity with the evolutionary concept of speciation.  For example, a sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species, due to the high degree of similarity between all of its members.  The apparent difficulty lies in the fact that, at some point after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly, to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations eventually lead to what we call a new species.  It’s difficult to define exactly when a speciation event happens (hence the species problem), and I think we have a similar problem with personal identity.  Where does it begin and end?  If personal identity changes over the course of a lifetime, when does one person become another?  I can think of “me” as the same “me” that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that “I” am a completely different person now when compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.).  There seems to have been an identity “speciation” event of some sort, even though it is hard to define exactly when it happened.
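As a rough illustration of this analogy, here is a toy model in Python (my own construction, not anything from the speciation literature): a chain of momentary identities, each sharing almost all of its memory set with its immediate neighbor, yet drifting arbitrarily far from its distant predecessors, with no sharp boundary anywhere along the chain.

import random

random.seed(0)

def jaccard(a, b):
    """Overlap between two memory sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b)

# Start with an initial set of "memories" (arbitrary integer labels).
memories = set(range(100))
chain = [set(memories)]
next_label = 100

# Each successive identity loses a few old memories and gains a few new ones.
for _ in range(200):
    memories = set(memories)
    for _ in range(3):
        memories.discard(random.choice(sorted(memories)))
        memories.add(next_label)
        next_label += 1
    chain.append(set(memories))

print(f"neighboring identities: {jaccard(chain[0], chain[1]):.2f}")   # ~0.94
print(f"distant identities:     {jaccard(chain[0], chain[-1]):.2f}")  # ~0.00
# Adjacent identities are nearly the same "person"; the endpoints clearly
# are not, yet there is no precise point where one person became another.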

Biologists have tried to solve their species problem by coming up with various criteria, at the very least for taxonomic purposes, but what they’ve wound up with at this point is several different criteria for defining a species, each effective for different purposes (e.g., the biological species concept, the morpho-species concept, the phylogenetic species concept, etc.), with no single “correct” answer, since they are all more or less useful depending on the situation.  Similarly, some philosophers have had a persons problem that they’ve been trying to solve, and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).

The Role of Information in a Personal Identity Concept

Anyway, rather than attempting to solve the numerical personal identity problem, I think philosophers need to focus more on the concept of information and how it can be used to arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically).  I think this is especially true once we take into account some of the concerns that James DiGiovanna brought up concerning the integration of future AI into our society.

If all of the beliefs, behaviors, and causal driving forces in a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework, since they will be based on the information content of the person’s identity rather than on numerical identity, and without leaning as heavily on our intuitions about persisting people (intuitions that will be challenged by several kinds of foreseeable future AI scenarios).

To illustrate this point, I’ll address one of James DiGiovanna’s conundrums.  James asks us:

To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?

In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI, other than to appease humans who may be angry that they couldn’t avoid punishment in this way themselves, having a much slower and less effective means of reprogramming themselves.  I would say that the reprogrammed AI is guilty of the crime, but only if its reprogrammed memory still includes information pertaining to having performed those past criminal behaviors.  However, if those “criminal memories” are now gone via the reprogramming, then I’d say that the AI is not guilty of the crime, because the information constituting its identity doesn’t match that of the criminal AI.  It would have no recollection of having committed the crime, and so “it” would not have committed the crime, since that “it” was lost in the reprogramming process due to the dramatic change in information that took place.

In the latter scenario, if the information constituting the identity of the destroyed AI were re-instantiated elsewhere, then I would say that it is in fact guilty of the crime, though not numerically guilty but rather qualitatively guilty (to differentiate between the numerical and qualitative personal identity concepts embedded in the concept of guilt).  If twelve duplicates of this information were instantiated into new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime.  What actions should be taken based on qualitative guilt?  I think it means the AI should be punished, or more specifically that the judicial system should perform the reconditioning required to modify its behavior as if it had committed the crime (especially if the AI believes/remembers that it has committed the crime), for the better of society.  If this can be accomplished through reprogramming, then that would be the most rational thing to do, without any need for traditional forms of punishment.
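The numerical/qualitative distinction at work here maps neatly onto a distinction most programming languages already make between value equality and object identity.  A minimal sketch in Python (my own illustration; the Agent class and its fields are hypothetical):

from dataclasses import dataclass, field

@dataclass
class Agent:
    """A cognitive agent individuated by its information content."""
    memories: list = field(default_factory=list)
    beliefs: dict = field(default_factory=dict)

original = Agent(memories=["committed the crime"],
                 beliefs={"the crime was wrong": True})

# Twelve duplicates instantiated from the same information content:
duplicates = [Agent(memories=list(original.memories),
                    beliefs=dict(original.beliefs)) for _ in range(12)]

for dup in duplicates:
    assert dup == original      # qualitatively identical: same information
    assert dup is not original  # numerically distinct: a different instance

# On the view argued above, each duplicate is "qualitatively guilty":
# its information content includes the memory of having committed the
# crime, so each would be a candidate for the same reconditioning.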

We can analogize this with another thought experiment involving human beings.  If we imagine a human whose memories have been changed so that he believes he is Charles Manson and has all of Charles Manson’s memories and intentions, then that person should be treated as if they were Charles Manson and thus incarcerated/punished accordingly, to rehabilitate them or protect the other members of society.  This assumes, of course, that we had reliable access to that kind of mind-reading knowledge.  If we did, the information constituting the identity of that person would be what matters most, not what the person’s actual previous actions were, because the “previous person” was someone else, due to that gross change in information.

Conscious Realism & The Interface Theory of Perception


A few months ago I was reading an interesting article in The Atlantic about Donald Hoffman’s Interface Theory of Perception.  As a person highly interested in consciousness studies, cognitive science, and the mind-body problem, I found the basic concepts of his theory quite fascinating.  What was most interesting to me was the counter-intuitive connection between evolution and perception that Hoffman has proposed.  It is certainly reasonable and intuitive to assume that evolutionary natural selection would favor perceptions that are closer to “the truth”, that is, closer to the objective reality that exists independent of our minds, simply because perceptions that are more accurate seem more likely to lead to survival than perceptions that are not.  For example, if I were to perceive lions as inert objects like trees, I would be more likely to be selected against and eaten by a lion than someone who perceives lions as mobile predators that could kill them.

While this is intuitive and reasonable to some degree, what Hoffman actually shows, using evolutionary game theory, is that among organisms of comparable complexity, those with perceptions closer to reality are never selected for nearly as much as those with perceptions tuned to fitness instead.  More than that, truth in this case will be driven to extinction when it is up against perceptual models tuned to fitness.  That is to say, evolution will select for organisms that perceive the world in a way that is less accurate (in terms of the underlying reality) as long as the perception is tuned for survival benefits.  The bottom line is that, at a given level of complexity, it is more costly to process more information (costing more time and resources), and so if a “heuristic” method of perception can evolve instead, one that “hides” all the complex information underlying reality and instead provides a species-specific guide to adaptive behavior, that will always be the preferred choice.

To see this point more clearly, let’s consider an example.  Imagine there’s an animal that regularly eats some kind of insect, such as a beetle, but it needs to eat beetles of a particular size, or else it runs a relatively high risk of eating the wrong kind of beetle (and we can assume that the “wrong” kind of beetle would be deadly to eat).  Now let’s imagine two possible types of evolved perception.  The animal could have very accurate perceptions of the various sizes of beetles it encounters, so that it can distinguish many different sizes from one another (and then choose the proper size range to eat).  Or it could evolve less accurate perceptions, such that all beetles that are either too small or too large appear indistinguishable from one another (say, all the wrong-sized beetles, whether too large or too small, look like indistinguishable red-colored blobs), while all the beetles in the ideal size range for eating appear as green-colored blobs (again, indistinguishable from one another).  The only discrimination in this latter type of perception is between red and green blobs.

Both types of perception would solve the problem of which beetles to eat or not eat, but the latter type (even if much less accurate) would bestow a fitness advantage over the former, by allowing the animal to process much less information about the environment and avoid focusing on relatively useless information (like specific beetle size).  In this case, with beetle size as the only variable under consideration for survival, evolution would select for the organism that knows less total information about beetle size, as long as it knows what is most important for distinguishing the edible beetles from the poisonous ones.  We can imagine that in some cases the fitness function could align with the true structure of reality, but this is not what we should expect to see generically in the world.  At best we may see some kind of overlap between the two, but since there doesn’t have to be any, truth will tend to go extinct.
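To see how “fitness beats truth” can fall out of a selection dynamic, here is a minimal toy simulation in Python written in the spirit of Hoffman’s evolutionary game results (my own construction, with made-up payoffs and costs; it is not his actual model).  A “truth” strategy perceives exact beetle sizes at a higher processing cost, while an “interface” strategy sees only the cheap red/green categories; both make the same eating decisions here, so the interface wins on cost alone:

import random

random.seed(1)

EDIBLE = (0.4, 0.6)          # beetles in this size range are safe to eat
COSTS = {"truth": 0.15,      # assumed cost of fine-grained size perception
         "interface": 0.05}  # coarse red/green categories are cheaper

def edible(size):
    return EDIBLE[0] <= size <= EDIBLE[1]

def payoff(strategy, beetles):
    """Net energy from foraging: food gained minus perceptual costs."""
    total = 0.0
    for size in beetles:
        # Both strategies end up making the same eating decision here;
        # they differ only in how much information processing they pay for.
        total += (1.0 if edible(size) else 0.0) - COSTS[strategy]
    return total

# Replicator-style dynamics: a strategy's share grows with relative payoff.
share = {"truth": 0.5, "interface": 0.5}
for generation in range(50):
    beetles = [random.random() for _ in range(100)]
    fitness = {s: max(payoff(s, beetles), 0.01) for s in share}
    mean_fitness = sum(share[s] * fitness[s] for s in share)
    share = {s: share[s] * fitness[s] / mean_fitness for s in share}

print({s: round(p, 3) for s, p in share.items()})
# Typical output: {'truth': 0.0, 'interface': 1.0} -- the cheaper,
# less "truthful" perception drives the accurate one to extinction.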

Perception is Analogous to a Desktop Computer Interface

Hoffman analogizes this concept of a “perception interface” with the desktop interface of a personal computer.  When we see icons of folders on the desktop and drag one of those icons to the trash bin, we shouldn’t take that interface literally, because there isn’t literally a folder being moved to a literal trash bin but rather it is simply an interface that hides most if not all of what is really going on in the background — all those various diodes, resistors and transistors that are manipulated in order to modify stored information that is represented in binary code.

The desktop interface ultimately provides us with an easy and intuitive way of accomplishing these various information processing tasks because trying to do so in the most “truthful” way — by literally manually manipulating every diode, resistor, and transistor to accomplish the same task — would be far more cumbersome and less effective than using the interface.  Therefore the interface, by hiding this truth from us, allows us to “navigate” through that computational world with more fitness.  In this case, having more fitness simply means being able to accomplish information processing goals more easily, with less resources, etc.

Hoffman goes on to say that even though we shouldn’t take the desktop interface literally, obviously we should still take it seriously, because moving that folder to the trash bin can have direct implications on our lives, by potentially destroying months worth of valuable work on a manuscript that is contained in that folder.  Likewise we should take our perceptions seriously, even if we don’t take them literally.  We know that stepping in front of a moving train will likely end our conscious experience even if it is for causal reasons that we have no epistemic access to via our perception, given the species-specific “desktop interface” that evolution has endowed us with.

Relevance to the Mind-body Problem

The crucial point of this analogy is that if our knowledge were confined to the desktop interface of the computer, we’d never be able to ascertain the underlying reality of the “computer”, because all the information about that underlying reality that we don’t need to know is hidden from us.  The same applies to our perception: it would be epistemically isolated from the underlying objective reality that exists.  I want to add that even though it appears we have found the underlying guts of our consciousness, i.e., the findings in neuroscience, it would be mistaken to think this approach will conclusively answer the mind-body problem, because the interface we’ve used to discover our brains’ underlying neurobiology is still the “desktop” interface.

So while we may think we’ve found the underlying guts of “the computer”, this is far from certain, given the possibility of and support for this theory.  This may end up being the reason why many philosophers claim there is a “hard problem” of consciousness and one that can’t be solved.  It could be that we simply are stuck in the desktop interface and there’s no way to find out about the underlying reality that gives rise to that interface.  All we can do is maximize our knowledge of the interface itself and that would be our epistemic boundary.

Predictions of the Theory

Now if this were just a fancy idea put forward by Hoffman, it would be interesting in its own right, but the fact that it is supported by evolutionary game theory and genetic algorithm simulations shows that the theory is more than plausible.  Even better, the theory is actually a scientific theory (and not just a hypothesis), because it has made falsifiable predictions as well.  It predicts that “each species has its own interface (with some similarities between phylogenetically related species), almost surely no interface performs reconstructions (read the second link for more details on this), each interface is tailored to guide adaptive behavior in the relevant niche, much of the competition between and within species exploits strengths and limitations of interfaces, and such competition can lead to arms races between interfaces that critically influence their adaptive evolution.”  The theory predicts that interfaces are essential to understanding evolution and the competition between organisms, whereas the reconstruction theory makes such understanding impossible.  Thus, evidence of interfaces should be widespread throughout nature.

In his paper, Hoffman mentions the jewel beetle as a case in point.  This beetle has a perceptual category, desirable females, which works well in its niche: the male uses it to choose larger females, because they are the best mates.  According to the reconstructionist thesis, the male’s perception of desirable females should incorporate a statistical estimate of the true sizes of the most fertile females, but it doesn’t.  Instead, it has a category based on “bigger is better”, and although this produces high-fitness behavior for the male beetle in its evolutionary niche, if it comes into contact with a “stubbie” beer bottle it falls into an infinite loop, drawn to this supernormal stimulus because it is smooth, brown, and extremely large.  We can see that the “bigger is better” perceptual category relies on less information about the true nature of reality and instead takes an “informational shortcut”.  The evidence of supernormal stimuli, which have been found in many species, further supports the theory and counts against the reconstructionist claim that perceptual categories estimate the statistical structure of the world.

More on Conscious Realism (Consciousness is all there is?)

The last link provided here shows the mathematical formalism of Hoffman’s conscious realist theory, as proved by Chetan Prakash.  It contains a thorough explanation of the conscious realist theory (which goes above and beyond the interface theory of perception), and it also provides answers to common objections to the theory put forward by other scientists and philosophers.

DNA & Information: A Response to an Old ID Myth


A common myth that circulates in Intelligent Design (creationist) circles is the idea that DNA can only degrade over time, such that any and all mutations are claimed to be harmful and to only reduce the “information” stored in that DNA.  The claim is specifically meant to suggest that evolution from a common ancestor is impossible by naturalistic processes, because DNA wouldn’t have been able to form in the first place and/or wouldn’t be able to grow or change to allow for speciation.  Thus, the claim implies either that an intelligent designer had to intervene and guide evolution every step of the way (by creating DNA, fixing mutations as they occurred or preventing them from happening, and then ceasing this intervention as soon as scientists began studying genetics), or that all organisms must have been created all at once by an intelligent designer with DNA that was “intelligently” designed to fail and degrade over time (thus calling into question the intelligence of this designer).

These claims have been refuted a number of times over the years by the scientific community, with a consensus drawn from years of research in evolutionary biology among other disciplines, and the claims seem to be mostly the result of fundamental misunderstandings of biology (or intentional misrepresentations of the facts) as well as an improper application of information theory to biological processes.  What’s unfortunate is that these claims are still circulating, largely because their propagators aren’t interested in reason, evidence, or anything that may threaten their beliefs in the supernatural, and so they simply repeat this nonsense to others without fact-checking it and without any consideration of whether the claims are rational or logically sound at all.

After having recently engaged in a discussion with a Christian that made this very claim (among many other unsubstantiated, faith-based assertions), I figured it would be useful to demonstrate why this claim is so easily refutable based on some simple thought experiments as well as some explanations and evidence found in the actual biological sciences.  First, let’s consider a strand of DNA with the following 12 nucleotide sequence (split into triplets for convenience):

ACT-GAC-TGA-CAG

If a random mutation occurs in this strand during replication, say, at the end of the strand, thus turning Guanine (G) to Adenine (A), then we’d have:

ACT-GAC-TGA-CAA

If another random mutation occurs in this strand during replication, say, at the end of the strand once again, thus turning Adenine (A) back to Guanine (G), then we’d have the original nucleotide sequence once again.  This shows how two random mutations can lead back to the original strand of genetic information: the strand can lose its original information and have it re-created again.  It’s also relevant to note that because there are 64 possible codons produced from the four available nucleotides (4^3 = 64), and since only 20 amino acids are needed to make proteins, the genetic code is redundant: several different codons can code for the same amino acid.

In the case given above, the complementary RNA sequence produced for the two sequences (before and after mutation) would be:

UGA-CUG-ACU-GUC (before mutation)
UGA-CUG-ACU-GUU (after mutation)

It turns out that GUC and GUU (the last triplets in these sequences) both code for the same amino acid (Valine), thus showing how a silent mutation can occur as well, where a silent mutation is one in which there is no change to the amino acids or subsequent proteins that the sequence codes for (and thus no functional change in the organism at all).  The very existence of silent mutations shows that mutations don’t necessarily result in a loss or change of information at all.  So in this case, as a result of the two mutations, the end result was no change in the information whatsoever.  Had the two strands been different, such that they actually coded for different proteins after the initial mutation, the second mutation would have reversed this change anyway, re-creating the original information that was lost.  This demonstration by itself already refutes the claim that DNA can only lose information over time, or that mutations necessarily lead to a loss of information.  All one needs are random mutations, and there will always be a chance that some information is lost and then re-created.  Furthermore, if we had started with a strand that didn’t code for any amino acid at all in the last triplet, and a random mutation changed it such that it did code for an amino acid (such as Valine), this would be an increase in information (since a new amino acid is expressed that was previously absent), although this depends on how we define information (more on that in a minute).
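The whole example can be checked mechanically.  Here is a short Python sketch that transcribes the two strands above and confirms the point mutation is silent (the codon table below covers only the codons in this example; it’s a convenience I’ve typed in by hand, not a bioinformatics library):

# Partial RNA codon table covering only the codons in this example.
CODON_TABLE = {
    "UGA": "STOP", "CUG": "Leu", "ACU": "Thr",
    "GUC": "Val", "GUU": "Val",
}

# Transcription: complementary RNA from a DNA template strand.
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(dna):
    return "".join(DNA_TO_RNA[base] for base in dna)

def translate(rna):
    return [CODON_TABLE[rna[i:i+3]] for i in range(0, len(rna), 3)]

original = "ACTGACTGACAG"
mutated  = "ACTGACTGACAA"   # G -> A point mutation at the final position

rna_before = transcribe(original)   # UGACUGACUGUC
rna_after  = transcribe(mutated)    # UGACUGACUGUU

print(rna_before, translate(rna_before))
print(rna_after,  translate(rna_after))
assert translate(rna_before) == translate(rna_after)  # silent mutation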

Now we could ask: is the mutation valuable, that is, conducive to the survival of the organism?  That depends entirely on the internal/external environment of the organism.  If we changed the organism’s diet or the other conditions in which it lived, we could arrive at the opposite conclusion.  This goes to show that among the mutations that aren’t neutral (most mutations are neutral), those that are harmful or beneficial are often so because of the specific internal/external environment under consideration.  If an organism is able to digest lactose exclusively, and it undergoes a mutation that provides a novel ability to digest sucrose at the expense of digesting lactose a little less effectively than before, this would be a harmful mutation if the organism lived in an environment with lactose as the only available sugar.  If, however, the organism were already in an environment with more sucrose than lactose available, then the mutation would obviously be beneficial, for now the organism could exploit the most available food source.  This would likely lead to that mutation being naturally selected for and increasing in frequency in the gene pool of that organism’s local population.

Another thing that is often glossed over in the Intelligent Design (ID) claims about genetic information being lost is the fact that proponents first have to define exactly what information is before presenting the rest of their argument.  Whether information is gained or lost requires knowing how to measure it in the first place.  This is where other problems begin to surface with ID claims like these, because they tend to leave this definition poorly defined, ambiguous, or conveniently malleable to serve the interests of the argument.  What we need is a clear and consistent definition of information; then we need to check that the particular definition given is actually applicable to biological systems; and then we can check whether the claim is true.  I have yet to see this actually demonstrated successfully.  I was able to avoid this problem in my example above, because no matter how information is defined, it was shown that two mutations can lead back to the original nucleotide sequence (whatever amount of genetic “information” that may have been).  If the information had been lost, it was recreated, and if it wasn’t technically lost at all during the mutation, then that shows that not all mutations lead to a loss of information.

I would argue that a fairly useful and consistent way to define information, in terms of its application to the evolving genetics of biological organisms, is as any positive correlation between the functionality that the genetic sequences code for and the attributes of the environment the organism inhabits.  This is useful because it represents the relationship between the genes and the environment, and it fits in line with the most well-established models in evolutionary biology, including the fundamental concept of natural selection leading to favored genotypes.

If an organism has a genetic sequence such that it can digest lactose (as per my previous example), and it is within an environment that has a supply of lactose available, then whatever genes are responsible for that functionality are effectively a form of information that describes or represents some real aspects of the organism’s environment (sources of energy, chemical composition, etc.).  The more genes that do this, that is, the more complex and specific the correlation, the more information there is in the organism’s genome.  So for example, if we consider the aforementioned mutation that caused the organism to develop a novel ability to digest sucrose in addition to lactose, then if it is in an environment that has both lactose and sucrose, this genome has even more environmental information stored within it because of the increased correlation between that genome and the environment.  If the organism can most efficiently digest a certain proportion of lactose versus sucrose, then if this optimized proportion evolves to approach the actual proportion of sugars in the environment around that organism (e.g. 30% lactose, 70% sucrose), then once again we have an increase in the amount of environmental information contained within its genome due to the increase in specificity.
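One crude way to operationalize this definition is sketched below in Python (a toy metric of my own devising, purely to make the idea concrete): score a genome by how closely the proportions of sugars it is tuned to digest match the proportions actually present in its environment.

def environmental_info(genome_digestion, environment_sugars):
    """
    Toy 'environmental information' score: the overlap between the
    proportions of sugars a genome is tuned to digest and the proportions
    actually present in the environment (1.0 = perfect match, 0.0 = none).
    Both arguments map sugar name -> proportion, each summing to 1.
    """
    sugars = set(genome_digestion) | set(environment_sugars)
    return sum(min(genome_digestion.get(s, 0.0),
                   environment_sugars.get(s, 0.0)) for s in sugars)

env = {"lactose": 0.3, "sucrose": 0.7}

lactose_only = {"lactose": 1.0}
both_untuned = {"lactose": 0.5, "sucrose": 0.5}
both_tuned   = {"lactose": 0.3, "sucrose": 0.7}

print(environmental_info(lactose_only, env))  # 0.3
print(environmental_info(both_untuned, env))  # 0.8
print(environmental_info(both_tuned, env))    # 1.0 (maximal correlation)
# Gaining the sucrose pathway raises the score; tuning the proportions
# to the environment raises it further, matching the examples above.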

Defining information in this way allows us to measure how well-adapted a particular organism is (even if only one trait or attribute at a time) to its current environment, as well as to its past environments (based on what the convergent evidence suggests), and it also provides at least one way to measure how genetically complex the organism is.

So not only are the ID claims about genetic information easily refuted with the inherent nature of random mutations and natural selection, but we can also see that the claims are further refuted once we define genetic information such that it encompasses the fundamental relationship between genes and the environment they evolve in.