The Open Mind


Predictive Processing: Unlocking the Mysteries of Mind & Body (Part VI)


This is the last post I’m going to write for this particular post-series on Predictive Processing (PP).  Here are the links to parts 1, 2, 3, 4, and 5.  I’ve already explored how a PP framework can account for folk psychological concepts like beliefs, desires, and emotions, how it accounts for action, language and ontology, and knowledge, and also perception, imagination, and reasoning.  In this final post for this series, I’m going to explore consciousness itself and how some theories of consciousness fit very nicely within a PP framework.

Consciousness as Prediction (Predicting The Self)

Earlier in this post-series I explored how PP treats perception (whether online, or an offline form like imagination) as simply a set of predictions pertaining to incoming sensory information, with varying degrees of precision weighting assigned to the resulting prediction error.  In a sense then, consciousness just is prediction.  At the very least, it is a subset of those predictions, likely those that are higher up in the predictive hierarchy.  This all depends on which aspect or level of consciousness we are trying to explain, and as philosophers and cognitive scientists well know, consciousness is notoriously difficult to pin down and define precisely.

By consciousness, we could mean any kind of awareness at all, or we could limit this term to only apply to a system that is aware of itself.  Either way we have to be careful here if we’re looking to distinguish between consciousness generally speaking (consciousness in any form, which may be unrecognizable to us) and the unique kind of consciousness that we as human beings experience.  If an ant is conscious, it doesn’t likely have any of the richness that we have in our experience nor is it likely to have self-awareness like we do (even though a dolphin, which has a large neocortex and prefrontal cortex, is far more likely to).  So we have to keep these different levels of consciousness in mind in order to properly assess their being explained by any cognitive framework.

Looking through a PP lens, we can see that what we come to know about the world and our own bodily states is a matter of predictive models pertaining to various inferred causal relations.  These inferred causal relations ultimately stem from bottom-up sensory input.  But when this information is abstracted at higher and higher levels, eventually one can (in principle) get to a point where those higher level models begin to predict the existence of a unified prediction engine.  In other words, a subset of the highest-level predictive models may eventually predict itself as a self.  We might describe this process as the emergence of some level of self-awareness, even if higher levels of self-awareness aren’t possible unless particular kinds of higher level models have been generated.

What kinds of predictions might be involved with this kind of emergence?  Well, we might expect that predictions pertaining to our own autobiographical history, which is largely composed of episodic memories of our past experience, would contribute to this process (e.g. “I remember when I went to that amusement park with my friend Mary, and we both vomited!”).  If we begin to infer what is common or continuous between those memories of past experiences (even if only 1 second in the past), we may discover that there is a form of psychological continuity or identity present.  And if this psychological continuity (this type of causal relation) is coincident with an incredibly stable set of predictions pertaining to (especially internal) bodily states, then an embodied subject or self can plausibly emerge from it.

This emergence of an embodied self is also likely fueled by predictions pertaining to other objects that we infer existing as subjects.  For instance, in order to develop a theory of mind about other people, that is, in order to predict how other people will behave, we can’t simply model their external behavior as we can for something like a rock falling down a hill.  This can work up to a point, but eventually it’s just not good enough as behaviors become more complex.  Animal behavior, most especially that of humans, is far more complex than that of inanimate objects and as such it is going to be far more effective to infer some internal hidden causes for that behavior.  Our predictions would work well if they involved some kind of internal intentionality and goal-directedness operating within any animate object, thereby transforming that object into a subject.  This should be no less true for how we model our own behavior.

If this object that we’ve now inferred to be a subject seems to behave in ways that we see ourselves as behaving (especially if it’s another human being), then we can begin to infer some kind of equivalence despite being separate subjects.  We can begin to infer that they too have beliefs, desires, and emotions, and thus that they have an internal perspective that we can’t directly access just as they aren’t able to access ours.  And we can also see ourselves from a different perspective based on how we see those other subjects from an external perspective.  Since I can’t easily see myself from an external perspective, when I look at others and infer that we are similar kinds of beings, I can begin to see myself as having both an internal and an external side.  I can begin to infer that others see me in a way similar to how I see them, thus further adding to my concept of self and the boundaries that define that self.

Multiple meta-cognitive predictions can be inferred based on all of these interactions with others, and from the introspective interactions with our brain’s own models.  Once this happens, a cognitive agent like ourselves may begin to think about thinking and think about being a thinking being, and so on and so forth.  All of these cognitive moves would seem to provide varying degrees of self-hood or self-awareness.  And these can all be thought of as the brain’s best guesses that account for its own behavior.  Either way, it seems that the level of consciousness that is intrinsic to an agent’s experience is going to be dependent on what kinds of higher level models and meta-models are operating within the agent.

Consciousness as Integrated Information

One prominent theory of consciousness is the Integrated Information Theory of Consciousness, otherwise known as IIT.  This theory, initially formulated by Giulio Tononi back in 2004 and developed ever since, posits that consciousness ultimately depends on the degree of information integration inherent in the causal properties of some system.  Another way of saying this is that the specified causal system is unified such that every part of the system is able to affect, and be affected by, the rest of the system.  If you were to causally isolate one part of a system from the rest of it (choosing the cut that makes the least difference to the system, what IIT calls a minimum partition), then the resulting change in the cause-effect structure of the system would quantify the degree of integration.  A large change in the cause-effect structure under even this minimum partition would imply a high degree of information integration, and vice versa.  And again, a high degree of integration implies a high degree of consciousness.
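To make the partition idea a bit more concrete, here is a crude toy sketch in Python.  It is emphatically not the actual Φ calculation from IIT; the three-element boolean network and its update rules are invented for illustration, and the "measure" is just a count of how many state transitions change when one element is cut out of the causal loop.

```python
from itertools import product

# A crude, illustrative proxy for the "minimum partition" idea: how much does
# cutting one element out of the causal loop change the system's cause-effect
# behavior?  This is NOT the phi calculation from IIT, just a toy.

def step_intact(a, b, c):
    # every element listens to the others (an integrated system)
    return (b ^ c, a ^ c, a & b)

def step_partitioned(a, b, c):
    # the third element is isolated: it no longer affects, or is affected by,
    # the rest, so its contributions are replaced by a constant (0)
    return (b ^ 0, a ^ 0, 0)

changed = sum(
    step_intact(a, b, c) != step_partitioned(a, b, c)
    for a, b, c in product((0, 1), repeat=3)
)
print(f"{changed}/8 state transitions change when the partition is introduced")
# A large change suggests high integration; a purely feed-forward chain, by
# contrast, could be cut somewhere without altering downstream behavior at all.
```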

Notice how this information integration axiom in IIT posits that a cognitive system that is entirely feed-forward will not be conscious.  So if our brain processed incoming sensory information purely from the bottom up, with no top-down generative model feeding back down through the system, then IIT would predict that our brain wouldn’t be able to produce consciousness.  PP, on the other hand, posits a feedback system (as opposed to a feed-forward one) where the bottom-up sensory information flowing upward is met with a downward flow of top-down predictions trying to explain away that sensory information.  The brain’s predictions cause a change in the resulting prediction error, and this prediction error serves as feedback to modify the brain’s predictions.  Thus, a cognitive architecture like that suggested by PP is predicted to produce consciousness according to the most fundamental axiom of IIT.
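As a rough illustration of that feedback character, here is a minimal sketch of a single prediction-error loop.  The numbers are invented and there is only one level and one scalar signal, whereas PP posits a deep hierarchy with precision-weighted errors at every level.

```python
# A minimal sketch of the top-down/bottom-up feedback loop described above.
# All numbers are illustrative; a real PP hierarchy would have many levels
# and precision-weighted errors rather than a single scalar signal.

sensory_input = 10.0   # the incoming bottom-up signal
prediction = 2.0       # the brain's initial top-down guess about its cause
learning_rate = 0.3    # how strongly prediction error revises the prediction

for step in range(10):
    prediction_error = sensory_input - prediction   # bottom-up residual
    prediction += learning_rate * prediction_error  # top-down model update
    print(f"step {step}: prediction = {prediction:.2f}, error = {prediction_error:.2f}")

# The error is progressively "explained away" by the descending predictions,
# which is the feedback (rather than feed-forward) character noted above.
```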

Additionally, PP posits cross-modal sensory features and functionality, where the brain integrates (especially lower level) predictions spanning various spatio-temporal scales from different sensory modalities into a unified whole.  For example, if I am looking at and petting a black cat lying on my lap and hearing it purr, PP posits that my perceptual experience, and my contextual understanding of that experience, are based on having integrated the visual, tactile, and auditory expectations that I’ve come to associate with a “black cat”.  A unified experience is contingent on such a conjunction of predictions occurring simultaneously, rather than on a barrage of millions or billions of separate causal relations (let alone ones stemming from different sensory modalities), or millions or billions of separate conscious experiences (which would seem to necessitate separate consciousnesses if they are happening at the same time).

Evolution of Consciousness

Since IIT identifies consciousness with integrated information, it can plausibly account for why it evolved in the first place.  The basic idea here is that a brain that is capable of integrating information is more likely to exploit and understand an environment that has a complex causal structure on multiple time scales than a brain that has informationally isolated modules.  This idea has been tested and confirmed to some degree by artificial life simulations (animats) where adaptation and integration are both simulated.  The organism in these simulations was a Braitenberg-like vehicle that had to move through a maze.  After 60,000 generations of simulated brains evolving through natural selection, it was found that there was a monotonic relationship between their ability to get through the maze and the amount of simulated information integration in their brains.

This increase in adaptation was the result of an effective increase in the number of concepts that the organism could make use of given the limited number of elements and connections possible in its cognitive architecture.  In other words, given a limited number of connections in a causal system (such as a group of neurons), you can pack more functions per element if the level of integration with respect to those connections is high, thus giving an evolutionary advantage to those with higher integration.  Therefore, when all else is equal in terms of neural economy and resources, higher integration gives an organism the ability to take advantage of more regularities in its environment.

From a PP perspective, this makes perfect sense because the complex causal structure of the environment is described as being modeled at many different levels of abstraction and at many different spatio-temporal scales.  All of these modeled causal relations are also described as having a hierarchical structure, with models contained within models, and with many associations existing between various models.  These associations between models can be accounted for by a cognitive architecture that re-uses certain sets of neurons in multiple models, so the association is effectively instantiated by some literal degree of neuronal overlap.  And of course, these associations between multi-level predictions allow the brain to exploit (and create!) as many regularities in the environment as possible.  In short, both PP and IIT make a lot of practical sense from an evolutionary perspective.

That’s All Folks!

And this concludes my post-series on the Predictive Processing (PP) framework and how I see it as being applicable to a far broader account of mentality and brain function than it is generally assumed to cover.  If there are any takeaways from this post-series, I hope you can at least appreciate the parsimony and explanatory scope of predictive processing, and of viewing the brain as a creative and highly capable prediction engine.


Predictive Processing: Unlocking the Mysteries of Mind & Body (Part IV)


In the previous post which was part 3 in this series (click here for parts 1 and 2) on Predictive Processing (PP), I discussed how the PP framework can be used to adequately account for traditional and scientific notions of knowledge, by treating knowledge as a subset of all the predicted causal relations currently at our brain’s disposal.  This subset of predictions that we tend to call knowledge has the special quality of especially high confidence levels (high Bayesian priors).  Within a scientific context, knowledge tends to have an even stricter definition (and even higher confidence levels) and so we end up with a smaller subset of predictions which have been further verified through comparing them with the inferred predictions of others and by testing them with external means of instrumentation and some agreed upon conventions for analysis.

However, no amount of testing or verification is going to give us direct access to any knowledge per se.  Rather, the creation or discovery of knowledge has to involve the application of some kind of reasoning to explain the causal inputs, and only after this reasoning process can the resulting predicted causal relations be validated to varying degrees by testing them (through raw sensory data, external instruments, etc.).  So getting an adequate account of reasoning within any theory or framework of overall brain function is going to be absolutely crucial, and I think that the PP framework is well-suited for the job.  As has already been mentioned throughout this post-series, this framework fundamentally relies on a form of Bayesian inference (or some approximation of it), which is a type of reasoning.  It is this inferential strategy then, combined with a hierarchical neurological structure for it to work upon, that would allow our knowledge to be created in the first place.

Rules of Inference & Reasoning Based on Hierarchical-Bayesian Prediction Structure, Neuronal Selection, Associations, and Abstraction

While PP tends to focus on perception and action in particular, I’ve mentioned that I see the same general framework as being able to account not only for the folk psychological concepts of beliefs, desires, and emotions, but also for language and ontology (via the hierarchical predictive structure it entails), and to help explain the relationship between the two.  It seems reasonable to me that the associations between all of these hierarchically structured beliefs or predicted causal relations, at varying levels of abstraction, can provide a foundation for our reasoning as well, whether in its intuitive or logical forms.

To illustrate some of the importance of associations between beliefs, consider an example like the belief in object permanence (i.e. that objects persist or continue to exist even when I can no longer see them).  This belief of ours has an extremely high prior because our entire life experience has only served to support this prediction in a large number of ways.  This means that it’s become embedded or implicit in a number of other beliefs.  If I didn’t predict that object permanence was a feature of my reality, then an enormous number of everyday tasks would become difficult if not impossible to do because objects would be treated as if they are blinking into and out of existence.

We have a large number of beliefs that require object permanence (and which are thus associated with object permanence), and so it is a more fundamental, lower-level prediction (though not as low level as sensory information entering the visual cortex), and we build upon this lower-level prediction to form any number of higher-level predictions in the overall conceptual/predictive hierarchy.  When I put money in a bank, I expect to be able to spend it even if I can’t see it anymore (such as with a check or debit card).  This is only possible if my money continues to exist even when out of view (regardless of whether the money is in paper/coin or electronic form).  This is just one of countless everyday tasks that depend on this belief.  So it’s no surprise that this belief (this set of predictions) would have an incredibly high Bayesian prior, and therefore I would treat it as a non-negotiable fact about reality.

On the other hand, when I was a newborn infant, I didn’t have this belief in object permanence (or at best, it was a very weak belief).  Most psychologists estimate that our belief in object permanence isn’t acquired until after several months of brain development and experience.  This would translate to our having a relatively low Bayesian prior for this belief early on in our lives; only once a person begins to form predictions based on these kinds of recognized causal relations can that prior increase, perhaps eventually reaching a point where we subjectively experience a high degree of certainty in this particular belief.  From that point on, we are likely to simply take that belief for granted, no longer questioning it.  The most important thing to note here is that the more associations made between beliefs, the higher their effective weighting (their priors), and thus the higher our confidence in those beliefs becomes.
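As a toy illustration of how repeated confirmations can drive a prior toward near-certainty, here is a simple Bayesian update loop.  All of the probabilities are invented and are not meant to model infant cognition in any literal way; the point is just the shape of the curve.

```python
# A toy Bayesian update, with made-up numbers, showing how a weak early prior
# (e.g. an infant's nascent belief in object permanence) can climb toward
# near-certainty as confirming experiences accumulate.

prior = 0.5                 # infant: no strong commitment either way
p_obs_given_h = 0.95        # chance of the confirming observation if objects persist
p_obs_given_not_h = 0.50    # chance of seeing it anyway if they don't

for n in range(1, 21):      # twenty confirming experiences
    evidence = prior * p_obs_given_h + (1 - prior) * p_obs_given_not_h
    prior = (prior * p_obs_given_h) / evidence   # Bayes' rule
    if n % 5 == 0:
        print(f"after {n} confirmations, prior = {prior:.4f}")

# The prior grows so high that ordinary counter-evidence (a magic trick, say)
# barely dents it, and the belief gets treated as a non-negotiable fact.
```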

Neural Implementation, Spontaneous or Random Neural Activity & Generative Model Selection

This all seems pretty reasonable if a neuronal implementation worked to strengthen Bayesian priors as a function of the neuronal/synaptic connectivity (among other factors), where neurons that fire together are more likely to wire together, and where connectivity strength increases the more often this happens.  On the flip side, the less often this happens (or if it isn’t happening at all), the more likely the connectivity is to weaken or remain non-existent.  So if a concept (or a belief composed of many conceptual relations) is represented by some cluster of interconnected neurons and their activity, then its applicability to other concepts increases its chances not only of firing but also of strengthening its wiring with those other clusters of neurons, thus plausibly increasing the Bayesian priors for the overlapping concept or belief.
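Here is a minimal Hebbian-style sketch of that fire-together/wire-together idea.  The firing probabilities and learning rates are invented, and the single scalar weight stands in for what would really be distributed synaptic changes across whole neuronal clusters.

```python
import random

# A minimal Hebbian-style sketch: the weight between two units grows when
# their activity coincides and slowly decays otherwise.  Parameters are
# purely illustrative.

weight = 0.1              # connection strength between two neuron clusters
growth, decay = 0.05, 0.01

for trial in range(100):
    a_fires = random.random() < 0.6                         # cluster A active this trial
    b_fires = random.random() < (0.9 if a_fires else 0.2)   # B tends to follow A
    if a_fires and b_fires:
        weight += growth * (1 - weight)   # co-activation strengthens the link
    else:
        weight -= decay * weight          # disuse weakens it

print(f"final weight = {weight:.2f}")
# On the reading above, a strong weight between frequently co-active clusters
# is one plausible way a high effective prior could be physically realized.
```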

Another likely important factor in the Bayesian inferential process, in terms of the brain forming new generative models or predictive hypotheses to test, is the role of spontaneous or random neural activity and neural cluster generation.  This random neural activity could plausibly provide a means for some randomly generated predictions or random changes in the pool of predictive models that our brain is able to select from.  Similar to the role of random mutation in gene pools which allows for differential reproductive rates and relative fitness of offspring, some amount of randomness in neural activity and the generative models that result would allow for improved models to be naturally selected based on those which best minimize prediction error.  The ability to minimize prediction error could be seen as a direct measure of the fitness of the generative model, within this evolutionary landscape.

This idea is related to the late Gerald Edelman’s Theory of Neuronal Group Selection (NGS), also known as Neural Darwinism, which I briefly explored in a post I wrote long ago.  I’ve long believed that this kind of natural selection process is applicable to a number of different domains (aside from genetics), and I think any viable version of PP is going to depend on it to at least some degree.  This random neural activity (and the naturally selected products derived from it) could be thought of as contributing a steady supply of new generative models to choose from, and thus contributing to our overall human creativity as well, whether for reasoning and problem-solving strategies or simply for artistic expression.
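To make the selectionist picture concrete, here is a toy sketch in which a pool of candidate generative models is randomly perturbed each generation and the variants that best minimize prediction error are retained.  The hidden linear regularity, the pool size, and the mutation parameters are all invented for illustration; nothing here is meant to be Edelman’s actual model.

```python
import random

# A toy "model selection by prediction error" loop: candidate generative
# models (slope/intercept guesses for a hidden linear regularity) are randomly
# mutated, and the variants with the lowest prediction error survive.

def hidden_world(x):                 # the regularity the brain is trying to capture
    return 3.0 * x + 1.0

def prediction_error(model):
    slope, intercept = model
    return sum(abs(hidden_world(x) - (slope * x + intercept)) for x in range(10))

pool = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]

for generation in range(200):
    # random "mutations" supply new candidate models...
    mutated = [(s + random.gauss(0, 0.1), i + random.gauss(0, 0.1)) for s, i in pool]
    # ...and selection keeps whichever variants best minimize prediction error
    pool = sorted(pool + mutated, key=prediction_error)[:20]

best = min(pool, key=prediction_error)
print(f"best surviving model: slope = {best[0]:.2f}, intercept = {best[1]:.2f}")
```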

Increasing Abstraction, Language, & New Rules of Inference

This kind of use-it-or-lose-it property of brain plasticity, combined with dynamic associations between concepts or beliefs and their underlying predictive structure, would allow the brain to accommodate learning by extracting statistical inferences (at increasing levels of abstraction) as they occur, and by modifying or eliminating those inferences (by changing their hierarchical associative structure) as prediction error is encountered.  While some form of Bayesian inference (or an approximation of it) underlies this process, once lower-level inferences about certain causal relations have been made, I believe that new rules of inference can be derived from this basic Bayesian foundation.

To see how this might work, consider how we acquire a skill like learning how to speak and write in some particular language.  The rules of grammar, the syntactic structure and so forth which underlie any particular language are learned through use.  We begin to associate words with certain conceptual structures (see part 2 of this post-series for more details on language and ontology) and then we build up the length and complexity of our linguistic expressions by adding concepts built on higher levels of abstraction.  To maximize the productivity and specificity of our expressions, we also learn more complex rules pertaining to the order in which we speak or write various combinations of words (which varies from language to language).

These grammatical rules can be thought of as just another higher-level abstraction, another higher-level causal relation that we predict will convey more specific information to whomever we are speaking to.  If it doesn’t seem to do so, then we either modify what we have mistakenly inferred those grammatical rules to be, or, depending on the context, we may simply assume that the person we’re talking to hasn’t conformed to the language or grammar that our community seems to be using.

Just like with grammar (which provides a kind of logical structure to our language), we can begin to learn new rules of inference built on the same probabilistic predictive bedrock of Bayesian inference.  We can learn some of these rules explicitly by studying logic, induction, deduction, etc., and consciously applying those rules to infer some new piece of knowledge, or we can learn these kinds of rules implicitly based on successful predictions (pertaining to behaviors of varying complexity) that happen to result from stumbling upon this method of processing causal relations within various contexts.  As mentioned earlier, this would be accomplished in part by the natural selection of randomly-generated neural network changes that best reduce the incoming prediction error.

However, language and grammar are interesting examples of an acquired set of rules because they also happen to be the primary tool that we use to learn other rules (along with anything we learn through verbal or written instruction), including (as far as I can tell) various rules of inference.  The logical structure of language (though it need not have an exclusively logical structure), its ability to be used for a number of cognitive short-cuts, and its influence on the complexity and structure of our thought all mean that we are likely dependent on it during our reasoning processes as well.

When we perform any kind of conscious reasoning process, we are effectively running various mental simulations where we can intentionally manipulate our various generative models to test new predictions (new models) at varying levels of abstraction, and thus we also manipulate the linguistic structure associated with those generative models as well.  Since I have a lot more to say on reasoning as it relates to PP, including more on intuitive reasoning in particular, I’m going to expand on this further in my next post in this series, part 5.  I’ll also be exploring imagination and memory including how they relate to the processes of reasoning.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)


I’ve been away from writing for a while because I’ve had some health problems relating to my neck.  A few weeks ago I had double-cervical-disc replacement surgery, and so I’ve been unable to write and respond to comments and so forth for a little while.  I’m in the second week following my surgery now and have finally been able to get back to writing, which feels very good given that I’m unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I wrote one post on this topic a little over a year ago (which can be found here), as I’ve been extremely interested in this topic for the last several years now.

The Predictive Processing (PP) theory of perception shows a lot of promise in terms of finding an overarching schema that can account for everything that the brain seems to do.  While its technical application is to account for the acts of perception and active inference in particular, I think it can be used more broadly to account for other descriptions of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships as viewed through a PP lens more because I think it is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let’s begin this post-series by first looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can be best described as simply the set of particular predictions that the brain employs which encompass perception, desires, action, emotion, etc., and which are ultimately mediated and updated in order to reduce prediction errors based on incoming sensory evidence (and which approximates a Bayesian form of inference).  Perception then, which is constituted by a subset of all our beliefs (with many of them being implicit or unconscious beliefs), is more or less a form of controlled hallucination in the sense that what we consciously perceive is not the actual sensory evidence itself (not even after processing it), but rather our brain’s “best guess” of what the causes for the incoming sensory evidence are.

Desires can be best described as another subset of one’s beliefs, and a set of beliefs which has the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states and which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically just modeling some kind of causal relationship (or its negation) which is able to manifest itself into a number of highly-weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies but with a lower weight or precision on the predictions that directly corresponds to the degree of belief or disbelief (and so new sensory evidence will more easily sway such a belief).  And just like our perceptions, which are mediated by a number of low and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in either updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, excreting hormones in your body, etc.).

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, and these priors change over time in response to prediction errors arising from new incoming sensory evidence that fails to be “explained away” by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly-weighted predictions) and therefore need much more countervailing sensory evidence to be overcome or modified than weak beliefs, which have relatively low priors (low-weighted predictions).
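A minimal sketch of that asymmetry, assuming a simple precision-weighted blend of prior belief and new evidence; the numbers and the single scalar “belief” are invented for illustration, whereas real PP models would be hierarchical and multi-dimensional.

```python
# A sketch of precision weighting: the same surprising observation moves a
# weakly held belief much more than a strongly held one.  Values are invented.

def precision_weighted_update(belief, observation, prior_precision, sensory_precision):
    # the posterior is a precision-weighted blend of prior belief and evidence
    total = prior_precision + sensory_precision
    return (prior_precision * belief + sensory_precision * observation) / total

observation = 10.0
weak_belief = precision_weighted_update(belief=0.0, observation=observation,
                                        prior_precision=1.0, sensory_precision=4.0)
strong_belief = precision_weighted_update(belief=0.0, observation=observation,
                                          prior_precision=50.0, sensory_precision=4.0)
print(f"weak prior moves to {weak_belief:.2f}; strong prior only to {strong_belief:.2f}")
```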

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of lower level, highly-weighted predictions such as the prediction that eating a piece of what we call an “apple” will most likely result in qualia that accompany the perception of a particular satisfying taste, the lower level prediction that doing so will also cause my perception of hunger to change, and the higher level prediction that it will “give me energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Another prediction or set of predictions is that these expectations will apply to not just one apple but a number of apples (different instances of one type of apple, or different types of apples altogether), and a host of other predictions.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple which, under a PP lens could be described as the highly weighted prediction of what it would feel like to find an apple, to reach for an apple, to grab it, to bite off a piece of it, to chew it, and to swallow it.  If I merely imagine doing such things, then the resulting predictions will necessarily carry such a small weight that they won’t be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally at least) to trick me into thinking that they are actual perceptions (by “actual”, I simply mean perceptions that correspond to incoming sensory data).  This low-weighting attribute of imagined perceptual predictions thus provides a viable way for us to have an imagination and to distinguish it from perceptions corresponding to incoming sensory data, and to distinguish it from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine that is directly influencing and being influenced by the world that its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian evidence).  In this particular example, the highly-weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, which produces some degree of prediction error until the proper actions have taken place in order to reduce said error.  The only two ways of reducing this prediction error are to change the desire (or eliminate it) to one that no longer involves eating an apple, and/or to perform bodily actions that result in actually eating an apple.

Perhaps if I realize that I don’t have any apples in my house, but I realize that I do have bananas, then my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically, thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, it may mean that my predictions will motivate me enough to avoid eating the banana, and instead to predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  Furthermore, it may motivate me to predict actions that lead me to earn the money such that I can purchase the apple (if I don’t already have the money to do so).  To do this, I would be employing a number of predictions having to do with performing actions that lead to me obtaining money, using money to purchase goods, etc.
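Here is a toy decision sketch of those two routes for quenching the prediction error behind a desire, acting to change the world versus revising the prediction itself.  The pantry contents, the weight, and the threshold are all invented for illustration.

```python
# A toy version of the two ways of reducing the prediction error behind a
# desire: act so the world matches the prediction, or revise the prediction.

pantry = {"apple": 0, "banana": 3}
desired = "apple"
desire_weight = 0.6      # how strongly the apple prediction is weighted

if pantry[desired] > 0:
    action = f"eat the {desired}"                      # act to fulfil the prediction
elif desire_weight > 0.8:
    action = f"go to the store and buy an {desired}"   # act, at greater cost
else:
    desired = "banana"                                 # revise the desire instead
    action = f"eat the {desired}"

print(action)  # here: "eat the banana" -- the hunger prediction supersedes the apple one
```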

This is but a taste of what PP has to offer, and of how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework) are a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and to language as well, and I’d like to explore this relationship in my next post.  This link can be found here.

Conscious Realism & The Interface Theory of Perception


A few months ago I was reading an interesting article in The Atlantic about Donald Hoffman’s Interface Theory of Perception.  As a person highly interested in consciousness studies, cognitive science, and the mind-body problem, I found the basic concepts of his theory quite fascinating.  What was most interesting to me was the counter-intuitive connection between evolution and perception that Hoffman has proposed.  Now it is certainly reasonable and intuitive to assume that evolutionary natural selection would favor perceptions that are closer to “the truth”, that is, closer to the objective reality that exists independent of our minds, simply because perceptions that are more accurate would seem more likely to lead to survival than perceptions that are not.  As an example, if I were to perceive lions as inert objects like trees, I would be more likely to be naturally selected against (and eaten by a lion) compared to someone who perceives lions as mobile predators that could kill them.

While this is intuitive and reasonable to some degree, what Hoffman actually shows, using evolutionary game theory, is that with respect to organisms of comparable complexity, those with perceptions that are closer to reality are never going to be selected for nearly as much as those with perceptions that are tuned to fitness instead.  What’s more, truth in this case will be driven to extinction when it is up against perceptual models that are tuned to fitness.  That is to say, evolution will select for organisms that perceive the world in a way that is less accurate (in terms of the underlying reality) as long as the perception is tuned for survival benefits.  The bottom line is that, given some specific level of complexity, it is more costly to process more information (costing more time and resources), and so if a “heuristic” method for perception can evolve instead, one that “hides” all the complex information underlying reality and instead provides us with a species-specific guide to adaptive behavior, that will always be the preferred choice.

To see this point more clearly, let’s consider an example.  Let’s imagine there’s an animal that regularly eats some kind of insect, such as a beetle, but it needs to eat a particular sized beetle or else it has a relatively high probability of eating the wrong kind of beetle (and we can assume that the “wrong” kind of beetle would be deadly to eat).  Now let’s imagine two possible types of evolved perception: it could have really accurate perceptions about the various sizes of beetles that it encounters so it can distinguish many different sizes from one another (and then choose the proper size range to eat), or it could evolve less accurate perceptions such that all beetles that are either too small or too large appear as indistinguishable from one another (maybe all the wrong-sized beetles whether too large or too small look like indistinguishable red-colored blobs) and perhaps all the beetles that are in the ideal size range for eating appear as green-colored blobs (that are again, indistinguishable from one another).  So the only discrimination in this latter case of perception is between red and green colored blobs.

Both types of perception would solve the problem of which beetles to eat or not eat, but the latter type (even if much less accurate) would bestow a fitness advantage over the former type, by allowing the animal to process much less information about the environment and to avoid focusing on relatively useless information (like specific beetle size).  In this case, with beetle size as the only variable under consideration for survival, evolution would select for the organism that knows less total information about beetle size, as long as it knows what is most important for distinguishing the edible beetles from the poisonous ones.  Now we can imagine that in some cases the fitness function could align with the true structure of reality, but this is not what we expect to see generically in the world.  At best we may see some kind of overlap between the two, but since there doesn’t have to be any, truth will tend to go extinct.
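A toy payoff comparison in the spirit of this example shows how a cheaper interface strategy can out-score a costlier veridical one even when both make the same eat/don’t-eat discriminations.  This is not Hoffman’s actual evolutionary-game-theoretic model; the beetle-size distribution, the edible range, and the processing cost are all invented.

```python
import random

# Toy payoff comparison: a "truth" strategy resolves exact beetle sizes and
# pays a processing cost; an "interface" strategy only sees edible/inedible
# categories and pays no such cost.  All payoffs and costs are invented.

def encounter():
    return random.uniform(0, 10)       # a beetle of some size wanders by

def edible(size):
    return 4.0 <= size <= 6.0          # only mid-sized beetles are safe to eat

def payoff(strategy, trials=10_000):
    total = 0.0
    for _ in range(trials):
        size = encounter()
        if edible(size):
            total += 1.0               # both strategies eat the right beetles
        if strategy == "truth":
            total -= 0.3               # extra time/energy spent resolving exact size
    return total / trials

print(f"truth strategy:     {payoff('truth'):.2f}")
print(f"interface strategy: {payoff('interface'):.2f}")
# With equal discrimination of what matters, the cheaper interface strategy
# out-scores the veridical one: the fitness-beats-truth intuition in miniature.
```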

Perception is Analogous to a Desktop Computer Interface

Hoffman analogizes this concept of a “perception interface” with the desktop interface of a personal computer.  When we see icons of folders on the desktop and drag one of those icons to the trash bin, we shouldn’t take that interface literally, because there isn’t literally a folder being moved to a literal trash bin but rather it is simply an interface that hides most if not all of what is really going on in the background — all those various diodes, resistors and transistors that are manipulated in order to modify stored information that is represented in binary code.

The desktop interface ultimately provides us with an easy and intuitive way of accomplishing these various information processing tasks because trying to do so in the most “truthful” way — by literally manually manipulating every diode, resistor, and transistor to accomplish the same task — would be far more cumbersome and less effective than using the interface.  Therefore the interface, by hiding this truth from us, allows us to “navigate” through that computational world with more fitness.  In this case, having more fitness simply means being able to accomplish information processing goals more easily, with less resources, etc.

Hoffman goes on to say that even though we shouldn’t take the desktop interface literally, obviously we should still take it seriously, because moving that folder to the trash bin can have direct implications on our lives, by potentially destroying months worth of valuable work on a manuscript that is contained in that folder.  Likewise we should take our perceptions seriously, even if we don’t take them literally.  We know that stepping in front of a moving train will likely end our conscious experience even if it is for causal reasons that we have no epistemic access to via our perception, given the species-specific “desktop interface” that evolution has endowed us with.

Relevance to the Mind-body Problem

The crucial point with this analogy is the fact that if our knowledge was confined to the desktop interface of the computer, we’d never be able to ascertain the underlying reality of the “computer”, because all that information that we don’t need to know about that underlying reality is hidden from us.  The same would apply to our perception, where it would be epistemically isolated from the underlying objective reality that exists.  I want to add to this point that even though it appears that we have found the underlying guts of our consciousness, i.e., the findings in neuroscience, it would be mistaken to think that this approach will conclusively answer the mind-body problem because the interface that we’ve used to discover our brains’ underlying neurobiology is still the “desktop” interface.

So while we may think we’ve found the underlying guts of “the computer”, this is far from certain, given both the possibility that this theory is true and the support it has received.  This may end up being the reason why many philosophers claim there is a “hard problem” of consciousness, and one that can’t be solved.  It could be that we are simply stuck in the desktop interface and there’s no way to find out about the underlying reality that gives rise to that interface.  All we can do is maximize our knowledge of the interface itself, and that would be our epistemic boundary.

Predictions of the Theory

Now if this were just a fancy idea put forward by Hoffman, that would be interesting in its own right, but the fact that it is supported by evolutionary game theory and genetic algorithm simulations shows that the theory is more than plausible.  Even better, the theory is actually a scientific theory (and not just a hypothesis), because it has made falsifiable predictions as well.  It predicts that “each species has its own interface (with some similarities between phylogenetically related species), almost surely no interface performs reconstructions (read the second link for more details on this), each interface is tailored to guide adaptive behavior in the relevant niche, much of the competition between and within species exploits strengths and limitations of interfaces, and such competition can lead to arms races between interfaces that critically influence their adaptive evolution.”  The theory predicts that interfaces are essential to understanding evolution and the competition between organisms, whereas the reconstruction theory makes such understanding impossible.  Thus, evidence of interfaces should be widespread throughout nature.

In his paper, he mentions the jewel beetle as a case in point.  This beetle has a perceptual category, desirable females, which works well in its niche, and it uses it to choose larger females because they are the best mates.  According to the reconstructionist thesis, the male’s perception of desirable females should incorporate a statistical estimate of the true sizes of the most fertile females, but it doesn’t do this.  Instead, it has a category based on “bigger is better”, and although this produces high-fitness behavior for the male beetle in its evolutionary niche, if it comes into contact with a “stubbie” beer bottle, it falls into an infinite loop, drawn to this supernormal stimulus since it is smooth, brown, and extremely large.  We can see that the “bigger is better” perceptual category relies on less information about the true nature of reality and instead chooses an “informational shortcut”.  The evidence of supernormal stimuli, which has been found in many species, further supports the theory and is evidence against the reconstructionist claim that perceptual categories estimate the statistical structure of the world.

More on Conscious Realism (Consciousness is all there is?)

This last link provided here shows the mathematical formalism of Hoffman’s conscious realist theory as proved by Chetan Prakash.  It contains a thorough explanation of the conscious realist theory (which goes above and beyond the interface theory of perception) and it also provides answers to common objections put forward by other scientists and philosophers on this theory.

Darwin’s Big Idea May Be The Biggest Yet


Back in 1859, Charles Darwin published his famous theory of evolution by natural selection, whereby inherent variations among the individual members of some population of organisms produce a differential in survival and reproductive success, leading to the natural selection of some subset of organisms within that population and, eventually, to speciation events.  As Darwin explained in his On The Origin of Species:

If during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and I think this cannot be disputed; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, I think it would be a most extraordinary fact if no variation ever had occurred useful to each being’s own welfare, in the same way as so many variations have occurred useful to man. But if variations useful to any organic being do occur, assuredly individuals thus characterised will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance they will tend to produce offspring similarly characterised. This principle of preservation, I have called, for the sake of brevity, Natural Selection.

While Darwin’s big idea completely transformed biology in terms of it providing (for the first time in history) an incredibly robust explanation for the origin of the diversity of life on this planet, his idea has since inspired other theories pertaining to perhaps the three largest mysteries that humans have ever explored: the origin of life itself (not just the diversity of life after it had begun, which was the intended scope of Darwin’s theory), the origin of the universe (most notably, why the universe is the way it is and not some other way), and also the origin of consciousness.

Origin of Life

In order to solve the first mystery (the origin of life itself), geologists, biologists, and biochemists are searching for plausible models of abiogenesis, whereby the general scheme of these models would involve geological chemical reactions that began to incorporate certain kinds of energetically favorable organic chemistries such that organic, self-replicating molecules eventually resulted.  Now, where Darwin’s idea of natural selection comes into play with life’s origin is in regard to the origin and evolution of these self-replicating molecules.  First of all, for any molecule at all to build up in concentration, there must be a set of conditions such that the reaction producing the molecule in question is more favorable than the reverse reaction in which the product transforms back into the initial starting materials.  If merely one chemical reaction (out of a countless number of reactions occurring on the early earth) led to a self-replicating product, this would increasingly favor the production of that product, and thus self-replicating molecules themselves would be naturally selected for.  Once one of them was produced, there would have been a cascade effect of exponential growth, at least up to the limit set by the availability of the starting materials and energy sources present.

Now if we assume that at least some subset of these self-replicating molecules (if not all of them) had an imperfect fidelity in the copying process (which is highly likely) and/or underwent even a slight change after replication by reacting with other neighboring molecules (also likely), this would provide them with a means of mutation.  Mutations would inevitably lead to some molecules becoming more effective self-replicators than others, and then evolution through natural selection would take off, eventually leading to modern RNA/DNA.  So not only does Darwin’s big idea account for the evolution of diversity of life on this planet, but the basic underlying principle of natural selection would also account for the origin of self-replicating molecules in the first place, and subsequently the origin of RNA and DNA.
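A toy replicator simulation in the spirit of these two paragraphs: molecules copy themselves with occasional “mutations” to their replication rate, raw materials are limited, and the faster replicators come to dominate the pool.  All of the numbers (initial rate, mutation probability, carrying capacity) are invented for illustration and are not drawn from any actual abiogenesis model.

```python
import random

# Toy replicators: imperfect copying supplies variation, limited resources
# supply competition, and differential replication does the selecting.

population = [0.1] * 10     # each entry is one molecule's replication probability
carrying_capacity = 500     # limit set by available starting materials/energy

for generation in range(200):
    offspring = []
    for rate in population:
        if random.random() < rate:                     # this molecule makes a copy
            child_rate = rate
            if random.random() < 0.05:                 # imperfect copying fidelity
                child_rate = min(1.0, max(0.0, rate + random.gauss(0, 0.05)))
            offspring.append(child_rate)
    population += offspring
    if len(population) > carrying_capacity:            # finite resources cull the pool
        population = random.sample(population, carrying_capacity)

print(f"mean replication rate after selection: {sum(population)/len(population):.2f}")
```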

Origin of the Universe

Another grand idea that is gaining heavy traction in cosmology is that of inflationary cosmology, which posits that the early universe underwent a period of rapid expansion, and that, due to quantum mechanical fluctuations in the microscopically sized inflationary region, seed universes would have resulted, each one having slightly different properties, one of which expanded to become the universe that we live in.  Inflationary cosmology is currently heavily supported because it has led to a number of predictions, many of which have already been confirmed by observation (it explains many large-scale features of our universe such as its homogeneity, isotropy, flatness, and other features).  What I find most interesting with inflationary theory is that it predicts the existence of a multiverse, whereby we are but one of an extremely large number of other universes (predicted to be on the order of 10^500, if not an infinite number), with each one having slightly different constants and so forth.

Once again, Darwin’s big idea, when applied to inflationary cosmology, would lead to the conclusion that our universe is the way it is because it was naturally selected to be that way.  The fact that its constants are within a very narrow range such that matter can even form, would make perfect sense, because even if an infinite number of universes exist with different constants, we would only expect to find ourselves in one that has the constants within the necessary range in order for matter, let alone life to exist.  So any universe that harbors matter, let alone life, would be naturally selected for against all the other universes that didn’t have the right properties to do so, including for example, universes that had too high or too low of a cosmological constant (such as those that would have instantly collapsed into a Big Crunch or expanded into a heat death far too quickly for any matter or life to have formed), or even universes that didn’t have the proper strong nuclear force to hold atomic nuclei together, or any other number of combinations that wouldn’t work.  So any universe that contains intelligent life capable of even asking the question of their origins, must necessarily have its properties within the required range (often referred to as the anthropic principle).

After our universe formed, the same principle would also apply to each galaxy and each solar system within those galaxies: because variations exist in each galaxy and within each constituent solar system (differential properties analogous to different genes in a gene pool), only those that have an acceptable range of conditions are capable of harboring life.  With over 10^22 stars in the observable universe (an unfathomably large number), and billions of years to evolve different conditions within each solar system surrounding those many stars, it isn’t surprising that eventually the temperature and other conditions would become acceptable for liquid water and organic chemistries to occur in many of those solar systems.  Even if there were only one life-permitting planet per galaxy (on average), that would add up to over 100 billion life-permitting planets in the observable universe alone (with many orders of magnitude more life-permitting planets in the non-observable universe).  So given enough time, and given some mechanism of variation (in this case, differences in star composition and dynamics), natural selection in a sense can also account for the evolution of some solar systems that do in fact have life-permitting conditions in a universe such as our own.

Origin of Consciousness

The last significant mystery I’d like to discuss involves the origin of consciousness.  While there are many current theories pertaining to different aspects of consciousness, and while much research has been performed in the neurosciences, cognitive sciences, psychology, etc., pertaining to how the brain works and how it correlates with various aspects of the mind and consciousness, the brain sciences (neuroscience in particular) are in their relative infancy, and so there are still many questions that haven’t been answered yet.  One promising theory that has already been shown to account for many aspects of consciousness is Gerald Edelman’s theory of neuronal group selection (NGS), otherwise known as neural Darwinism (ND), which is a large-scale theory of brain function.  As one might expect from the name, the mechanism of natural selection is integral to this theory.  In ND, the basic idea consists of three parts, as summarized on Wikipedia:

  1. Anatomical connectivity in the brain occurs via selective mechanochemical events that take place epigenetically during development.  This creates a diverse primary neurological repertoire by differential reproduction.
  2. Once structural diversity is established anatomically, a second selective process occurs during postnatal behavioral experience through epigenetic modifications in the strength of synaptic connections between neuronal groups.  This creates a diverse secondary repertoire by differential amplification.
  3. Re-entrant signaling between neuronal groups allows for spatiotemporal continuity in response to real-world interactions.  Edelman argues that thalamocortical and corticocortical re-entrant signaling are critical to generating and maintaining conscious states in mammals.

In a nutshell, the basic differentiated structure of the brain that forms in early development is accomplished through cellular proliferation, migration, distribution, and branching processes that involve selection processes operating on random differences in the adhesion molecules that these processes use to bind one neuronal cell to another.  These crude selection processes result in a rough initial configuration that is for the most part fixed.  However, because there are a diverse number of sets of different hierarchical arrangements of neurons in various neuronal groups, there are bound to be functionally equivalent groups of neurons that are not equivalent in structure, but are all capable of responding to the same types of sensory input.  Because some of these groups should in theory be better than others at responding to some particular type of sensory stimuli, this creates a form of neuronal/synaptic competition in the brain, whereby those groups of neurons that happen to have the best synaptic efficiency for the stimuli in question are naturally selected over the others.  This in turn leads to an increased probability that the same network will respond to similar or identical signals in the future.  Each time this occurs, synaptic strengths increase in the most efficient networks for each particular type of stimuli, and this would account for a relatively quick level of neural plasticity in the brain.

The last aspect of the theory involves what Edelman called re-entrant signaling, whereby a sampling of the stimuli from functionally different groups of neurons occurring at the same time leads to a form of self-organizing intelligence.  This would provide a means for explaining how we experience spatiotemporal consistency in our experience of sensory stimuli.  Basically, functionally different parts of the brain, such as maps in the visual centers that pertain to color versus others that pertain to orientation or shape, would become linked such that the two (previously segregated) regions can function in parallel and correlate with one another, effectively amalgamating the two types of neural maps.  Once this re-entrant signaling is accomplished between higher order or higher complexity maps in the brain, such as those pertaining to value-dependent memory storage centers, language centers, and perhaps back to various sensory cortical regions, this would create an even richer level of synchronization, possibly leading to consciousness (according to the theory).  In all aspects of the theory, the natural selection of differentiated neuronal structures, of synaptic connections and strengths, and eventually of larger re-entrant connections would be responsible for creating the parallel and correlated processes in the brain believed to be required for consciousness.  There’s been an increasing amount of support for this theory, and more evidence continues to accumulate in its favor.  In any case, it is a brilliant idea and one with a lot of promise in potentially explaining one of the most fundamental aspects of our existence.

Darwin’s Big Idea May Be the Biggest Yet

In my opinion, Darwin’s theory of evolution through natural selection was perhaps the most profound theory ever discovered.  I’d even say that it beats Einstein’s theory of Relativity because of its massive explanatory scope and its carryover to other disciplines, such as cosmology, neuroscience, and even immunology (see Edelman’s Nobel-winning work on the immune system, which supported the view that the immune system operates through a selection process as well, rather than through some type of re-programming or learning).  Based on the basic idea of natural selection, we have been able to provide a number of robust explanations pertaining to why the universe is likely to be the way it is, how life likely began, and how it evolved afterward, and it may even turn out to explain how life eventually evolved brains capable of consciousness.  It is truly one of the most fascinating principles I’ve ever learned about, and I’m honestly awestruck by its beauty, simplicity, and explanatory power.

DNA & Information: A Response to an Old ID Myth

with 35 comments

A common myth that circulates in Intelligent Design (creationist) circles is the idea that DNA can only degrade over time, such that any and all mutations are claimed to be harmful and only serve to reduce the “information” stored in that DNA.  The claim is specifically meant to suggest that evolution from a common ancestor is impossible by naturalistic processes, because DNA wouldn’t have been able to form in the first place and/or it wouldn’t be able to grow or change enough to allow for speciation.  Thus the claim implies either that an intelligent designer had to intervene and guide evolution every step of the way (by creating DNA, fixing mutations as they occurred or preventing them from happening, and then ceasing this intervention as soon as scientists began studying genetics), or that all organisms must have been created at once by an intelligent designer with DNA that was “intelligently” designed to fail and degrade over time (which rather calls the intelligence of this designer into question).

These claims have been refuted many times over the years by the scientific community, with a consensus drawn from decades of research in evolutionary biology among other disciplines, and they seem to result mostly from fundamental misunderstandings of biology (or intentional misrepresentations of the facts) as well as from an improper application of information theory to biological processes.  What’s unfortunate is that these claims are still circulating, largely because their propagators aren’t interested in reason, evidence, or anything that may threaten their beliefs in the supernatural, and so they simply repeat this nonsense to others without fact-checking it and without any consideration of whether the claims are even rational or logically sound.

After having recently engaged in a discussion with a Christian who made this very claim (among many other unsubstantiated, faith-based assertions), I figured it would be useful to demonstrate why the claim is so easily refuted, based on some simple thought experiments as well as some explanations and evidence from the actual biological sciences.  First, let’s consider a strand of DNA with the following 12-nucleotide sequence (split into triplets for convenience):

ACT-GAC-TGA-CAG

If a random mutation occurs in this strand during replication, say, at the end of the strand, thus turning Guanine (G) to Adenine (A), then we’d have:

ACT-GAC-TGA-CAA

If another random mutation occurs in this strand during replication, say, at the end of the strand once again, thus turning Adenine (A) back to Guanine (G), then we’d have the original nucleotide sequence once more.  This shows how two random mutations can lead right back to the original strand of genetic information: a sequence can lose its original information and then have it re-created.  It’s also relevant to note that because there are 64 possible codons produced from the four available nucleotides (4^3 = 64), and since only 20 amino acids are needed to make proteins, most amino acids are coded for by more than one codon.
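As a quick sanity check, here is a tiny Python sketch of the same two-step mutation.  The sequence and positions come straight from the example above; the helper function and its name are just for illustration.

```python
# Quick sketch of the mutation-and-reversal example above (hypothetical 12-base sequence).

def mutate(seq, position, new_base):
    """Return a copy of the sequence with one base replaced."""
    bases = list(seq)
    bases[position] = new_base
    return "".join(bases)

original = "ACTGACTGACAG"

mutated  = mutate(original, 11, "A")   # first mutation: G -> A at the last position
reverted = mutate(mutated, 11, "G")    # second mutation: A -> G at the same position

print(mutated)                  # ACTGACTGACAA
print(reverted)                 # ACTGACTGACAG
print(reverted == original)     # True: the original "information" is restored
```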

In the case given above, the complementary RNA sequence produced for the two sequences (before and after mutation) would be:

UGA-CUG-ACU-GUC (before mutation)
UGA-CUG-ACU-GUU (after mutation)

It turns out that GUC and GUU (the last triplets in these sequences) are both codons that code for the same amino acid (Valine), which shows how a silent mutation can occur as well; a silent mutation is one in which there is no change to the amino acids or subsequent proteins that the sequence codes for (and thus no functional change in the organism at all).  The very existence of silent mutations shows that mutations don’t necessarily result in a loss or change of information.  So in this case, as a result of the two mutations, the end result was no change in the information at all.  Had the two strands been different such that they actually coded for different proteins after the initial mutation, the second mutation would have reversed this change anyway, re-creating the original information that was lost.  This demonstration by itself already refutes the claim that DNA can only lose information over time, or that mutations necessarily lead to a loss of information.  All one needs are random mutations, and there will always be a chance that some information is lost and then re-created.  Furthermore, if we had started with a strand that didn’t code for any amino acid at all in the last triplet, and a random mutation changed it such that it did code for an amino acid (such as Valine), this would be an increase in information (since a new amino acid was expressed that was previously absent), although this depends on how we define information (more on that in a minute).
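Here is the silent-mutation point in code, using the complementary-RNA step and the last codon from the sequences above.  Only a small, hand-picked slice of the standard codon table is included for illustration, but the GUC/GUU-to-Valine assignments are real.

```python
# Sketch of the silent-mutation point: GUC and GUU both translate to Valine.
# Only a small slice of the standard codon table is included here.

CODON_TABLE = {
    "GUU": "Valine", "GUC": "Valine", "GUA": "Valine", "GUG": "Valine",
    "ACU": "Threonine", "CUG": "Leucine",
    "UGA": "STOP",
}

def dna_to_mrna_complement(dna):
    """Complementary RNA for a DNA template strand (as in the example above)."""
    pair = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pair[b] for b in dna)

before = dna_to_mrna_complement("ACTGACTGACAG")   # UGACUGACUGUC
after  = dna_to_mrna_complement("ACTGACTGACAA")   # UGACUGACUGUU

last_codon_before = before[-3:]   # GUC
last_codon_after  = after[-3:]    # GUU

print(CODON_TABLE[last_codon_before], CODON_TABLE[last_codon_after])
# Both print "Valine": the mutation is silent at the protein level.
```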

Now we could ask: is the mutation valuable, that is, conducive to the survival of the organism?  That depends entirely on the internal/external environment of that organism.  If we changed the diet of the organism or the other conditions in which it lived, we could arrive at opposite conclusions.  This goes to show that of the mutations that aren’t neutral (most mutations are neutral), those that are harmful or beneficial are often so because of the specific internal/external environment under consideration.  If an organism can digest lactose exclusively and it undergoes a mutation that provides a novel ability to digest sucrose at the cost of digesting lactose a little less effectively than before, this would be a harmful mutation if the organism lived in an environment with lactose as the only available sugar.  If, however, the organism was already in an environment that had more sucrose than lactose available, then the mutation would obviously be beneficial, for now the organism could exploit the most abundant food source.  This would likely lead to the mutation being naturally selected for, increasing its frequency in the gene pool of that organism’s local population.
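A toy calculation makes this environment-dependence explicit.  The digestion efficiencies and sugar proportions below are invented numbers purely for illustration; the point is only that the same genotype scores differently depending on what the environment actually provides.

```python
# Toy illustration (made-up numbers): whether a mutation is "harmful" or "beneficial"
# depends entirely on which sugars the environment actually provides.

# Digestion efficiencies per sugar for the original and mutated genotypes.
original_genotype = {"lactose": 1.0, "sucrose": 0.0}
mutant_genotype   = {"lactose": 0.8, "sucrose": 0.7}   # slightly worse at lactose, gains sucrose

def fitness(genotype, environment):
    """Crude fitness proxy: total energy extracted from the available sugars."""
    return sum(genotype[sugar] * amount for sugar, amount in environment.items())

lactose_only_env = {"lactose": 1.0, "sucrose": 0.0}
sucrose_rich_env = {"lactose": 0.3, "sucrose": 0.7}

for env_name, env in [("lactose-only", lactose_only_env), ("sucrose-rich", sucrose_rich_env)]:
    print(env_name,
          "original:", round(fitness(original_genotype, env), 2),
          "mutant:",   round(fitness(mutant_genotype, env), 2))
# In the lactose-only environment the mutation lowers fitness (1.0 -> 0.8);
# in the sucrose-rich environment it raises it (0.3 -> 0.73).
```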

Another thing that is often glossed over in Intelligent Design (ID) claims about genetic information being lost is that one first has to define what exactly information is before presenting the rest of the argument.  Determining whether information is gained or lost requires knowing how to measure information in the first place.  This is where other problems begin to surface with ID claims like these, because they tend to leave the term either poorly defined, ambiguous, or conveniently malleable to serve the interests of their argument.  What we need is a clear and consistent definition of information; then we need to check that the particular definition given is actually applicable to biological systems; and then we can check whether the claim is true.  I have yet to see this actually demonstrated successfully.  I was able to avoid this problem in my example above, because no matter how information is defined, it was shown that two mutations can lead back to the original nucleotide sequence (whatever amount of genetic “information” that may have been).  If the information had been lost, it was recreated, and if it wasn’t technically lost at all during the mutation, then that shows that not all mutations lead to a loss of information.

I would argue that a fairly useful and consistent way to define information, as applied to the evolving genetics of biological organisms, is as any positive correlation between the functionality that the genetic sequences code for and the attributes of the environment that the organism lives in.  This is useful because it represents the relationship between the genes and the environment, and it fits well with the most well-established models in evolutionary biology, including the fundamental concept of natural selection leading to favored genotypes.

If an organism has a genetic sequence such that it can digest lactose (as per my previous example), and it is within an environment that has a supply of lactose available, then whatever genes are responsible for that functionality are effectively a form of information that describes or represents some real aspects of the organism’s environment (sources of energy, chemical composition, etc.).  The more genes that do this, that is, the more complex and specific the correlation, the more information there is in the organism’s genome.  So, for example, if the aforementioned mutation gives the organism a novel ability to digest sucrose in addition to lactose, and the organism is in an environment that has both lactose and sucrose, then its genome now stores even more environmental information because of the increased correlation between that genome and the environment.  And if the proportion of lactose versus sucrose that the organism digests most efficiently evolves to approach the actual proportion of sugars in its surrounding environment (e.g. 30% lactose, 70% sucrose), then once again we have an increase in the amount of environmental information contained within its genome, due to the increase in specificity.
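One crude way to turn this notion of gene-environment correlation into a number (just a sketch of the idea, not a standard or rigorous metric) is to score how closely the genome's optimized digestion proportions match the environment's actual sugar mix.  All of the profiles and proportions below are invented for illustration.

```python
# A toy way to put a number on the gene-environment "information" idea sketched above.
# Not a standard metric -- just an illustration of "information" growing as the genome's
# digestive capabilities better match the environment's actual sugar mix.

def environmental_information(digestion_profile, environment):
    """
    Score in [0, 1]: how well the genome's optimized digestion proportions
    match the actual proportions of sugars available in the environment.
    """
    sugars = set(digestion_profile) | set(environment)
    # 1 minus half the total mismatch between the two distributions.
    mismatch = sum(abs(digestion_profile.get(s, 0.0) - environment.get(s, 0.0)) for s in sugars)
    return 1.0 - mismatch / 2.0

environment = {"lactose": 0.3, "sucrose": 0.7}

lactose_only     = {"lactose": 1.0}
both_unoptimized = {"lactose": 0.5, "sucrose": 0.5}
both_optimized   = {"lactose": 0.3, "sucrose": 0.7}

for label, profile in [("lactose only", lactose_only),
                       ("both sugars, 50/50", both_unoptimized),
                       ("both sugars, tuned to environment", both_optimized)]:
    print(label, "->", round(environmental_information(profile, environment), 2))
# Scores of 0.3, 0.8, and 1.0: the measure increases as the genome's profile
# converges on the environment, mirroring the increase in specificity described above.
```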

Defining information in this way allows us to measure the degree to which a particular organism is adapted (even if only one trait or attribute at a time) to its current environment, as well as to its past environments (based on what the convergent evidence suggests), and it also provides at least one way to measure how genetically complex the organism is.

So not only are the ID claims about genetic information easily refuted by the inherent nature of random mutations and natural selection, but they are refuted further once we define genetic information such that it encompasses the fundamental relationship between genes and the environment they evolve in.

The Origin and Evolution of Life: Part II

leave a comment »

Even though life appears to be favorable in terms of the second law of thermodynamics (as explained in part one of this post), very important questions have remained unanswered regarding the origin of life, including which mechanisms or platforms it could have used to get itself going initially.  This can be summarized as a “which came first, the chicken or the egg” dilemma, where biologists have wondered whether metabolism came first or whether it was instead self-replicating molecules like RNA.

On the one hand, some have argued that since metabolism depends on proteins and enzymes and on the cell membrane itself, it would require either RNA or DNA to code for those proteins, implying that RNA or DNA would have to originate before metabolism could begin.  On the other hand, even the generation and replication of RNA or DNA requires a catalytic substrate of some kind, which is usually provided by proteins along with metabolic driving forces to accomplish those polymerization reactions, and this would seem to imply that metabolism along with some enzymes would be needed to drive the polymerization of RNA or DNA.  So biologists were left with quite a conundrum.  This was partially resolved several decades ago, when it was realized that RNA can not only act as a means of storing genetic information just like DNA, but can also catalyze chemical reactions just as a protein enzyme can.  Thus it is feasible that RNA could act as both an information storage molecule and an enzyme.  While this helps to solve the problem once RNA began to self-replicate and evolve over time, the problem remains of how the first molecules of RNA formed, because it seems that some kind of non-RNA metabolic catalyst would be needed to drive that initial polymerization.  Which brings us back to needing some kind of catalytic metabolism to drive these initial reactions.

These RNA polymerization reactions may have occurred spontaneously on their own (or evolved from earlier self-replicating molecules that predated RNA), but current models of the pre-biotic earth of around four billion years ago suggest that there would have been too many destructive chemical reactions, which would have suppressed the accumulation of any RNA and likely of other self-replicating molecules as well.  What seems to be needed, then, is some kind of catalyst that could create these molecules quickly enough for them to accumulate in spite of any destructive reactions present, and/or some kind of physical barrier (like a cell wall) that protects the RNA or other self-replicating polymers from those destructive processes.

One possible solution to this puzzle that has been developing over the last several years involves alkaline hydrothermal vents.  We didn’t even know that these kinds of vents existed until the year 2000, when they were discovered on a National Science Foundation expedition in the mid-Atlantic.  A few years later they were studied more closely to see what kinds of chemistry they involved.  Unlike the more well-known “black smoker” vents (which were discovered in the 1970s), these alkaline hydrothermal vents have several properties that would have been hospitable to the emergence of life during the Hadean eon (between 4.6 and 4 billion years ago).

The ocean water during the Hadean eon would have been much more acidic due to the higher concentration of carbon dioxide (forming carbonic acid), and this acidic ocean water would have mixed with the hydrogen-rich alkaline water found within the vents, forming a natural proton gradient within the naturally formed pores of these rocks.  Electron transfer would also likely have occurred when the hydrogen- and methane-rich vent fluid contacted the carbon dioxide-rich ocean water, generating an electrical gradient.  This is already very intriguing, because all living cells ultimately derive their metabolic driving forces from proton gradients, or more generally from the flow of some kind of positive charge carrier and/or electrons.  Since the rock found in these vents undergoes a process called serpentinization, which spreads the rock apart into various small channels and pockets, many different kinds of pores form in the rock, and some of them would have been very thin-walled membranes separating the acidic ocean water from the alkaline vent fluid.  This would have provided the kind of semi-permeable barrier that modern cells have, and which we expect the earliest proto-cells to have had, and it would have provided the necessary source of energy to power various chemical reactions.
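To get a feel for how much energy such a gradient represents, here is a back-of-the-envelope Python sketch using the standard relation between a pH difference and its voltage equivalent (about 59 mV per pH unit at room temperature).  The ocean and vent pH values are illustrative assumptions on my part, not measured figures.

```python
# Back-of-the-envelope sketch: how big is the natural proton gradient across a thin
# vent-pore wall?  The pH values below are illustrative assumptions (acidic Hadean
# ocean vs. alkaline vent fluid), not measurements.

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 298.0      # temperature in kelvin (room temperature, for simplicity)

def ph_gradient_potential(ph_outside, ph_inside, temperature=T):
    """Voltage equivalent of a pH difference: 2.303*R*T/F per pH unit (~59 mV at 25 C)."""
    return 2.303 * R * temperature / F * (ph_inside - ph_outside)

ocean_ph = 6.0   # assumed acidic, CO2-rich Hadean ocean
vent_ph  = 10.0  # assumed alkaline, hydrogen-rich vent fluid

delta_v = ph_gradient_potential(ocean_ph, vent_ph)
print(f"~{delta_v * 1000:.0f} mV across the pore wall")
# Roughly 240 mV for a 4-unit pH difference, in the same ballpark as the proton
# gradients that modern cells maintain across their membranes.
```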

Additionally, these vents would have provided a source of minerals (namely green rust and molybdenum) which likely would have behaved like enzymes, catalyzing reactions as various chemicals came into contact with them.  The green rust could have allowed the proton gradient to be used to generate phosphate-containing molecules, which could have stored the energy produced from the gradient, similar to how all living systems that we know of store their energy in ATP (adenosine triphosphate).  The molybdenum, on the other hand, would have assisted in electron transfer across those membranes.

So this theory provides a very plausible way for catalytic metabolism as well as proto-cellular membrane formation to have resulted from natural geological processes.  These proto-cells would then likely have begun concentrating simple organic molecules formed from the reaction of CO2 and H2 with all the enzyme-like minerals that were present.  These molecules could then react with one another to polymerize and form larger and more complex molecules, eventually including nucleotides and amino acids.  One promising clue that supports this theory is the fact that every living system on earth shares a common metabolic pathway, the citric acid cycle (or Krebs cycle), which operates in the forward direction in aerobic organisms and in the reverse direction in certain anaerobic organisms.  Since this cycle consists of only 11 molecules, and since the biological components and molecules we know of in any species are built from some number or combination of these fundamental building blocks, scientists are trying to test (among other things) whether mimicking these alkaline hydrothermal vent conditions, along with the acidic ocean water that would have been present in the Hadean eon, will precipitate some or all of these molecules.  If it does, it will show that this theory is more than plausible as an account of the origin of life.

Once these basic organic molecules were generated, proteins would eventually have been able to form, some of which could have made their way to the membrane surface of the pores and acted as pumps directing the natural proton gradient to do useful work.  Once those proteins evolved further, it would have become possible and advantageous for the membranes to become less permeable, so that the gradient could be focused on the pump channels in the membranes of these proto-cells.  The membrane could then have begun to change into one made from lipids produced by the metabolic reactions, and we already know that lipids readily form micelles, or small closed spherical structures, once they aggregate in aqueous conditions.  As this occurred, the proto-cells would no longer have been trapped in the porous rock, but would eventually have been able to migrate slowly away from the vents altogether, eventually forming the phospholipid bilayer cell membranes that we see in modern cells.  Once this got started, self-replicating molecules and the rest of the evolution of the cell would have undergone natural selection as per the Darwinian evolution that most of us are familiar with.

As per the earlier discussion about life serving as a set of entropy engines and energy dissipation channels, this self-replication would have been favored thermodynamically as well, because replicating those entropy engines and energy dissipation channels only makes them more effective at dissipating energy.  Thus we can tie this all together: natural geological processes would have allowed the required metabolism to form, powering organic molecular synthesis and polymerization, with all of these processes serving to increase entropy and maximize energy dissipation.  All that was needed for this to initiate was a planet with common minerals, water, and CO2, and natural geological processes could do the rest of the work.  Planets like this actually seem to be fairly common in our galaxy, with estimates ranging in the billions, thus potentially harboring life (or where it is just a matter of time before it initiates and evolves, if it hasn’t already).  While there is still a lot of work to be done to confirm the validity of these models and to find ways of testing them rigorously, we are getting relatively close to solving the puzzle of how life originated, why it is the way it is, and how we can better search for it in other parts of the universe.