The Open Mind

Cogito Ergo Sum


Conscious Realism & The Interface Theory of Perception


A few months ago I was reading an interesting article in The Atlantic about Donald Hoffman’s Interface Theory of Perception.  As a person highly interested in consciousness studies, cognitive science, and the mind-body problem, I found the basic concepts of his theory quite fascinating.  What was most interesting to me was the counter-intuitive connection between evolution and perception that Hoffman has proposed.  Now it is certainly reasonable and intuitive to assume that evolutionary natural selection would favor perceptions that are closer to “the truth,” that is, closer to the objective reality that exists independent of our minds, simply because perceptions that are more accurate would seem more likely to lead to survival than perceptions that are less accurate.  As an example, if I were to perceive lions as inert objects like trees, I would be more likely to be eaten by a lion (and thus selected against) than someone who perceives lions as mobile predators that can kill.

While this is intuitive and reasonable to some degree, what Hoffman actually shows, using evolutionary game theory, is that among organisms of comparable complexity, those with perceptions that track reality are never selected for nearly as often as those with perceptions that are tuned to fitness instead.  What’s more, truth in this case will be driven to extinction when it is pitted against perceptual models that are tuned to fitness.  That is to say, evolution will select for organisms that perceive the world in a way that is less accurate (in terms of the underlying reality) as long as the perception is tuned for survival benefits.  The bottom line is that at any given level of complexity, it is more costly to process more information (costing more time and resources), so if a “heuristic” method for perception can evolve instead, one that “hides” all the complex information underlying reality and instead provides a species-specific guide to adaptive behavior, that will always be the preferred choice.

To see this point more clearly, let’s consider an example.  Imagine there’s an animal that regularly eats some kind of insect, such as a beetle, but it needs to eat beetles of a particular size or else it runs a relatively high risk of eating the wrong kind of beetle (and we can assume that the “wrong” kind of beetle would be deadly to eat).  Now let’s imagine two possible types of evolved perception: the animal could have very accurate perceptions of the various beetle sizes it encounters, distinguishing many different sizes from one another (and then choosing the proper size range to eat), or it could evolve less accurate perceptions such that all beetles that are too small or too large appear indistinguishable from one another (say, all the wrong-sized beetles, whether too large or too small, look like identical red-colored blobs) while all the beetles in the ideal size range for eating appear as green-colored blobs (again, indistinguishable from one another).  So the only discrimination in this latter type of perception is between red and green blobs.

Both types of perception would solve the problem of which beetles to eat or avoid, but the latter type (even though much less accurate) would bestow a fitness advantage over the former, by allowing the animal to process far less information about the environment and to ignore relatively useless information (like specific beetle size).  In this case, with beetle size as the only variable under consideration for survival, evolution would select for the organism that knows less total information about beetle size, as long as it knows what matters most: distinguishing the edible beetles from the poisonous ones.  Now we can imagine that in some cases the fitness function could align with the true structure of reality, but this is not what we expect to see generically in the world.  At best we may see some overlap between the two, but nothing guarantees any overlap at all, and wherever fitness and truth come apart, truth goes extinct.
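
To make the logic concrete, here is a minimal toy simulation of the beetle example (my own sketch, not Hoffman’s actual evolutionary game theory model; the size thresholds, payoffs, and uniform size distribution are arbitrary assumptions).  Both strategies get the same two-category perceptual budget, but one strategy’s categories track the true size ordering while the other’s track fitness:

```python
# Toy illustration of fitness-tuned vs. truth-tuned perception.
# NOT Hoffman's actual model: thresholds, payoffs, and the uniform size
# distribution are arbitrary assumptions made for this sketch.
import random

def payoff(size):
    """Payoff of eating a beetle: only mid-sized beetles are safe to eat."""
    return 1.0 if 0.3 <= size <= 0.7 else -1.0

def truth_tuned(size):
    """Two categories ordered by the TRUE variable (small vs. large);
    this agent eats the beetles it perceives as 'small'."""
    return payoff(size) if size < 0.5 else 0.0

def fitness_tuned(size):
    """Two categories tuned to FITNESS ('green' = edible, 'red' = not);
    this agent eats only 'green' beetles."""
    return payoff(size) if 0.3 <= size <= 0.7 else 0.0

random.seed(0)
sizes = [random.random() for _ in range(100_000)]
print(f"truth-tuned mean payoff:   {sum(map(truth_tuned, sizes))/len(sizes):+.2f}")   # ~ -0.10
print(f"fitness-tuned mean payoff: {sum(map(fitness_tuned, sizes))/len(sizes):+.2f}") # ~ +0.40
```

With the same two-category budget, the truth-tuned agent cannot avoid eating some poisonous beetles no matter which of its categories it eats from, so under selection it would be driven to extinction; that is the qualitative result Hoffman reports from his simulations.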

Perception is Analogous to a Desktop Computer Interface

Hoffman analogizes this concept of a “perception interface” with the desktop interface of a personal computer.  When we see icons of folders on the desktop and drag one of those icons to the trash bin, we shouldn’t take that interface literally, because there isn’t literally a folder being moved to a literal trash bin but rather it is simply an interface that hides most if not all of what is really going on in the background — all those various diodes, resistors and transistors that are manipulated in order to modify stored information that is represented in binary code.

The desktop interface ultimately provides us with an easy and intuitive way of accomplishing these various information processing tasks, because trying to do so in the most “truthful” way, by manually manipulating every diode, resistor, and transistor to accomplish the same task, would be far more cumbersome and less effective than using the interface.  Therefore the interface, by hiding this truth from us, allows us to “navigate” through that computational world with more fitness.  In this case, having more fitness simply means being able to accomplish information processing goals more easily, with fewer resources, and so on.

Hoffman goes on to say that even though we shouldn’t take the desktop interface literally, we should still take it seriously, because moving that folder to the trash bin can have direct implications for our lives, potentially destroying months’ worth of valuable work on a manuscript contained in that folder.  Likewise we should take our perceptions seriously, even if we don’t take them literally.  We know that stepping in front of a moving train will likely end our conscious experience, even if it does so for causal reasons that we have no epistemic access to via our perception, given the species-specific “desktop interface” that evolution has endowed us with.

Relevance to the Mind-body Problem

The crucial point of this analogy is that if our knowledge were confined to the desktop interface of the computer, we’d never be able to ascertain the underlying reality of the “computer,” because all the information about that underlying reality that we don’t need to know is hidden from us.  The same would apply to our perception: it would be epistemically isolated from the underlying objective reality.  I want to add that even though it appears we have found the underlying guts of our consciousness, i.e., the findings in neuroscience, it would be mistaken to think that this approach will conclusively solve the mind-body problem, because the interface we’ve used to discover our brains’ underlying neurobiology is still the “desktop” interface.

So while we may think we’ve found the underlying guts of “the computer,” this is far from certain, given the support this theory has received.  This may end up being the reason why many philosophers claim there is a “hard problem” of consciousness, and one that can’t be solved.  It could be that we are simply stuck in the desktop interface and there’s no way to find out about the underlying reality that gives rise to it.  All we could do is maximize our knowledge of the interface itself, and that would be our epistemic boundary.

Predictions of the Theory

Now if this were just a fancy idea put forward by Hoffman, it would be interesting in its own right, but the fact that it is supported by evolutionary game theory and genetic algorithm simulations shows that the theory is more than plausible.  Even better, it is an actual scientific theory (and not just a hypothesis), because it makes falsifiable predictions.  It predicts that “each species has its own interface (with some similarities between phylogenetically related species), almost surely no interface performs reconstructions (read the second link for more details on this), each interface is tailored to guide adaptive behavior in the relevant niche, much of the competition between and within species exploits strengths and limitations of interfaces, and such competition can lead to arms races between interfaces that critically influence their adaptive evolution.”  The theory predicts that interfaces are essential to understanding evolution and the competition between organisms, whereas the reconstruction theory makes such understanding impossible.  Thus, evidence of interfaces should be widespread throughout nature.

In his paper, Hoffman mentions the jewel beetle as a case in point.  The male of this species has a perceptual category, desirable females, which works well in its niche: it uses the category to choose larger females, because they are the best mates.  According to the reconstructionist thesis, the male’s perception of desirable females should incorporate a statistical estimate of the true sizes of the most fertile females, but it doesn’t.  Instead, it relies on a “bigger is better” category, and although this bestows highly fit behavior on the male beetle in its evolutionary niche, when it comes into contact with a “stubbie” beer bottle it falls into an infinite loop, drawn to this supernormal stimulus because it is smooth, brown, and extremely large.  We can see that the “bigger is better” perceptual category relies on less information about the true nature of reality and instead takes an “informational shortcut.”  Supernormal stimuli like this have been found in many species, which further supports the theory and counts as evidence against the reconstructionist claim that perceptual categories estimate the statistical structure of the world.

More on Conscious Realism (Consciousness is all there is?)

This last link presents the mathematical formalism of Hoffman’s conscious realist theory, developed with Chetan Prakash.  It contains a thorough explanation of the conscious realist theory (which goes above and beyond the interface theory of perception), and it also provides answers to common objections put forward by other scientists and philosophers.


Darwin’s Big Idea May Be The Biggest Yet


Back in 1859, Charles Darwin published his famous theory of evolution by natural selection, whereby inherent variations among the individual members of a population of organisms produce a differential in survival and reproductive success, eventually leading to the natural selection of some subset of organisms within that population and, ultimately, to speciation events.  As Darwin explained in his On The Origin of Species:

If during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and I think this cannot be disputed; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, I think it would be a most extraordinary fact if no variation ever had occurred useful to each being’s own welfare, in the same way as so many variations have occurred useful to man. But if variations useful to any organic being do occur, assuredly individuals thus characterised will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance they will tend to produce offspring similarly characterised. This principle of preservation, I have called, for the sake of brevity, Natural Selection.

While Darwin’s big idea completely transformed biology by providing (for the first time in history) an incredibly robust explanation for the origin of the diversity of life on this planet, it has since inspired other theories pertaining to perhaps the three largest mysteries that humans have ever explored: the origin of life itself (not just the diversity of life after it had begun, which was the intended scope of Darwin’s theory), the origin of the universe (most notably, why the universe is the way it is and not some other way), and the origin of consciousness.

Origin of Life

In order to solve the first mystery (the origin of life itself), geologists, biologists, and biochemists are searching for plausible models of abiogenesis, whereby geologically driven chemical reactions would have begun to incorporate certain energetically favorable organic chemistries such that organic, self-replicating molecules eventually resulted.  Now, where Darwin’s idea of natural selection comes into play with life’s origin is in regard to the origin and evolution of these self-replicating molecules.  First of all, for any molecule to build up in concentration, the conditions must be such that the reaction producing the molecule is more favorable than the reverse reaction, in which the product transforms back into the initial starting materials.  If merely one chemical reaction (out of a countless number of reactions occurring on the early earth) led to a self-replicating product, this would increasingly favor the production of that product, and thus self-replicating molecules themselves would be naturally selected for.  Once one was produced, there would have been a cascade effect of exponential growth, at least up to the limit set by the availability of the starting materials and energy sources present.

Now if we assume that at least some subset of these self-replicating molecules (if not all of them) had an imperfect fidelity in the copying process (which is highly likely) and/or underwent even a slight change after replication by reacting with other neighboring molecules (also likely), this would provide them with a means of mutation.  Mutations would inevitably lead to some molecules becoming more effective self-replicators than others, and then evolution through natural selection would take off, eventually leading to modern RNA/DNA.  So not only does Darwin’s big idea account for the evolution of diversity of life on this planet, but the basic underlying principle of natural selection would also account for the origin of self-replicating molecules in the first place, and subsequently the origin of RNA and DNA.
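
Here is a minimal sketch of that selection dynamic (a toy Wright-Fisher-style model of my own, not a chemical simulation; the population size, mutation rate, and mutation step are arbitrary assumptions).  Molecules compete for a fixed pool of building blocks, copying success is proportional to replication rate, and copying is imperfect:

```python
# A toy Wright-Fisher-style sketch of molecular natural selection.
# All numbers (population size, mutation rate and step) are arbitrary;
# this illustrates the logic, not the chemistry.
import random
random.seed(1)

N = 2000                 # fixed "resource-limited" population of molecules
pop = [0.5] * N          # each entry is a molecule's replication rate

for generation in range(200):
    # Competition for scarce building blocks: a molecule's chance of being
    # copied into the next generation is proportional to its replication rate.
    copies = random.choices(pop, weights=pop, k=N)
    # Imperfect copying: ~1% of copies mutate to a slightly different rate.
    pop = [r + random.uniform(-0.05, 0.05) if random.random() < 0.01 else r
           for r in copies]

print(f"founder rate: 0.50, mean rate after selection: {sum(pop)/N:.2f}")
```

Even with rare, undirected copying errors, the average replication rate climbs over the generations, which is all “natural selection of molecules” amounts to here.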

Origin of the Universe

Another grand idea that is gaining heavy traction in cosmology is inflationary cosmology, which posits that the early universe underwent a period of rapid expansion, and that due to quantum mechanical fluctuations in the microscopically sized inflationary region, seed universes would have resulted, each with slightly different properties, one of which expanded into the universe that we live in.  Inflationary cosmology is currently heavily supported because it has led to a number of predictions, many of which have already been confirmed by observation (it explains many large-scale features of our universe, such as its homogeneity, isotropy, and flatness).  What I find most interesting about inflationary theory is that it predicts the existence of a multiverse, whereby ours is but one of an extremely large number of universes (often quoted as being on the order of 10^500, if not infinite), each with slightly different constants and so forth.

Once again, Darwin’s big idea, when applied to inflationary cosmology, would lead to the conclusion that our universe is the way it is because it was naturally selected to be that way.  The fact that its constants are within the very narrow range that allows matter to form at all would make perfect sense, because even if a vast number of universes exist with different constants, we could only ever find ourselves in one whose constants lie within the range necessary for matter, let alone life, to exist.  So any universe that harbors matter, let alone life, would be naturally selected for over all the universes that lacked the right properties, including, for example, universes with too high or too low a cosmological constant (those that instantly collapsed into a Big Crunch, or expanded toward heat death far too quickly for any matter or life to form), or universes without a strong nuclear force of the proper magnitude to hold atomic nuclei together, or any number of other combinations that wouldn’t work.  So any universe that contains intelligent life capable of even asking the question of its origins must necessarily have its properties within the required range (this is often referred to as the anthropic principle).
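
The selection effect here can be captured in a few lines of toy code (all the numbers below are arbitrary stand-ins, not real cosmological values):

```python
# A toy Monte Carlo sketch of the anthropic selection effect described above.
# The "constant" and the habitable band are arbitrary stand-ins.
import random
random.seed(2)

universes = [random.uniform(-1.0, 1.0) for _ in range(1_000_000)]
LOW, HIGH = -0.001, 0.001   # assumed narrow range permitting matter/observers

observed = [c for c in universes if LOW <= c <= HIGH]
print(f"fraction of universes containing observers: {len(observed)/len(universes):.3%}")
# Every universe in which anyone can ask "why is our constant so finely
# tuned?" necessarily lies inside the narrow band:
print(f"all observed constants within the band: {all(LOW <= c <= HIGH for c in observed)}")
```

However rare the habitable band is across the whole ensemble, every observation is made from inside it, so no further tuning needs to be explained.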

After our universe formed, the same principle would also apply to each galaxy and each solar system within it: because variations exist between galaxies and between their constituent solar systems (differential properties analogous to different genes in a gene pool), only those with an acceptable range of conditions are capable of harboring life.  With over 10^22 stars in the observable universe (an unfathomably large number), and billions of years for different conditions to evolve within the solar systems surrounding those stars, it isn’t surprising that the temperature and other conditions would eventually become acceptable for liquid water and organic chemistries in many of those solar systems.  Even if there were only one life-permitting planet per galaxy (on average), that would add up to over 100 billion life-permitting planets in the observable universe alone (with many orders of magnitude more in the non-observable universe).  So given enough time, and given some mechanism of variation (in this case, differences in star composition and dynamics), natural selection in a sense can also account for the evolution of solar systems that do in fact have life-permitting conditions in a universe such as our own.

Origin of Consciousness

The last significant mystery I’d like to discuss involves the origin of consciousness.  While there are many current theories pertaining to different aspects of consciousness, and while much research has been performed in the neurosciences, cognitive sciences, psychology, etc., concerning how the brain works and how it correlates with various aspects of the mind, the brain sciences (neuroscience in particular) are in their relative infancy, and so many questions remain unanswered.  One promising theory that has already been shown to account for many aspects of consciousness is Gerald Edelman’s theory of neuronal group selection (NGS), otherwise known as neural Darwinism (ND), which is a large-scale theory of brain function.  As one might expect from the name, the mechanism of natural selection is integral to this theory.  In ND, the basic idea consists of three parts, as summarized on Wikipedia:

  1. Anatomical connectivity in the brain occurs via selective mechanochemical events that take place epigenetically during development.  This creates a diverse primary neurological repertoire by differential reproduction.
  2. Once structural diversity is established anatomically, a second selective process occurs during postnatal behavioral experience through epigenetic modifications in the strength of synaptic connections between neuronal groups.  This creates a diverse secondary repertoire by differential amplification.
  3. Re-entrant signaling between neuronal groups allows for spatiotemporal continuity in response to real-world interactions.  Edelman argues that thalamocortical and corticocortical re-entrant signaling are critical to generating and maintaining conscious states in mammals.

In a nutshell, the basic differentiated structure of the brain that forms in early development is accomplished through cellular proliferation, migration, distribution, and branching processes, with selection operating on random differences in the adhesion molecules these processes use to bind one neuronal cell to another.  These crude selection processes result in a rough initial configuration that is for the most part fixed.  However, because there is a diverse set of hierarchical arrangements of neurons within various neuronal groups, there are bound to be functionally equivalent groups of neurons that are not equivalent in structure but are all capable of responding to the same types of sensory input.  Because some of these groups should in theory be better than others at responding to a particular type of sensory stimulus, this creates a form of neuronal/synaptic competition in the brain, whereby the groups of neurons that happen to have the best synaptic efficiency for the stimulus in question are naturally selected over the others.  This in turn increases the probability that the same network will respond to similar or identical signals in the future.  Each time this occurs, synaptic strengths increase in the most efficient networks for each particular type of stimulus, and this would account for a relatively quick level of neural plasticity in the brain.
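
That competitive amplification dynamic is easy to caricature in code (a toy sketch loosely in the spirit of neural Darwinism; the group count, noise range, and amplification factor are arbitrary assumptions of mine, not Edelman’s numbers):

```python
# A toy sketch of "differential amplification" among competing neuronal
# groups.  Group counts, noise, and the amplification factor are arbitrary.
import random
random.seed(3)

# Ten structurally different groups, all initially capable of responding
# somewhat to the same stimulus.
strengths = [random.uniform(0.9, 1.1) for _ in range(10)]

for presentation in range(50):
    # Each group's response is its synaptic strength times a random gain.
    responses = [s * random.uniform(0.5, 1.5) for s in strengths]
    winner = responses.index(max(responses))
    strengths[winner] *= 1.05   # the best responder's synapses are strengthened

ranked = sorted(strengths, reverse=True)
print(f"dominant group strength: {ranked[0]:.2f}, runner-up: {ranked[1]:.2f}")
# The winner-gets-stronger feedback means one group comes to "own" the stimulus.
```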

The last aspect of the theory involves what Edelman called re-entrant signaling, whereby simultaneous sampling of stimuli across functionally different groups of neurons leads to a form of self-organizing intelligence.  This would help explain how we experience spatiotemporal consistency in our sensory experience.  Basically, functionally segregated parts of the brain, such as visual maps pertaining to color versus others pertaining to orientation or shape, would become linked such that they can function in parallel and correlate with one another, effectively amalgamating the previously segregated maps.  Once this re-entrant signaling is accomplished between higher-order or higher-complexity maps in the brain, such as those pertaining to value-dependent memory storage centers, language centers, and perhaps back to various sensory cortical regions, this would create an even richer level of synchronization, possibly leading to consciousness (according to the theory).  In every aspect of the theory, the natural selection of differentiated neuronal structures, of synaptic connections and strengths, and eventually of larger re-entrant connections would be responsible for creating the parallel and correlated processes in the brain believed to be required for consciousness.  There’s been an increasing amount of support for this theory, and more evidence continues to accumulate.  In any case, it is a brilliant idea and one with a lot of promise in potentially explaining one of the most fundamental aspects of our existence.

Darwin’s Big Idea May Be the Biggest Yet

In my opinion, Darwin’s theory of evolution through natural selection was perhaps the most profound theory ever discovered.  I’d even say that it beats Einstein’s theory of relativity, because of its massive explanatory scope and its carryover to other disciplines such as cosmology, neuroscience, and even immunology (see Edelman’s Nobel-winning work on the immune system, where he showed that it too operates through a selection process, as opposed to some type of re-programming/learning).  Based on the basic idea of natural selection, we have been able to provide a number of robust explanations pertaining to why the universe is likely to be the way it is, how life likely began, how it evolved afterward, and possibly how life eventually evolved brains capable of being conscious.  It is truly one of the most fascinating principles I’ve ever learned about, and I’m honestly awestruck by its beauty, simplicity, and explanatory power.

DNA & Information: A Response to an Old ID Myth


A common myth that goes around in Intelligent Design (creationist) circles is the idea that DNA can only degrade over time, such that any and all mutations are claimed to be harmful, serving only to reduce the “information” stored in that DNA.  The claim is specifically meant to suggest that evolution from a common ancestor is impossible by naturalistic processes, because DNA wouldn’t have been able to form in the first place and/or wouldn’t be able to grow or change to allow for speciation.  Thus, the claim implies either that an intelligent designer had to intervene and guide evolution every step of the way (by creating DNA, fixing mutations as they occurred or preventing them from happening, and then ceasing this intervention as soon as scientists began studying genetics), or that all organisms must have been created all at once by an intelligent designer with DNA that was “intelligently” designed to fail and degrade over time (which rather calls the intelligence of this designer into question).

These claims have been refuted a number of times over the years by the scientific community, with a consensus drawn from years of research in evolutionary biology among other disciplines, and the claims seem to result mostly from fundamental misunderstandings of biology (or intentional misrepresentations of the facts), along with an improper application of information theory to biological processes.  What’s unfortunate is that these claims are still circulating, largely because their propagators aren’t interested in reason, evidence, or anything that may threaten their beliefs in the supernatural, and so they simply repeat this nonsense to others without fact-checking it and without any consideration of whether the claims are rational or logically sound at all.

After having recently engaged in a discussion with a Christian who made this very claim (among many other unsubstantiated, faith-based assertions), I figured it would be useful to demonstrate why this claim is so easily refutable, based on some simple thought experiments as well as some explanations and evidence found in the actual biological sciences.  First, let’s consider a strand of DNA with the following 12-nucleotide sequence (split into triplets for convenience):

ACT-GAC-TGA-CAG

If a random mutation occurs in this strand during replication, say, at the end of the strand, thus turning Guanine (G) to Adenine (A), then we’d have:

ACT-GAC-TGA-CAA

If another random mutation occurs in this strand during replication, say, at the end of the strand once again, thus turning Adenine (A) back to Guanine (G), then we’d have the original nucleotide sequence once more.  This shows how two random mutations can regenerate the original genetic information: the strand lost its original information and then had it re-created.  It’s also relevant to note that because there are 64 possible codons produced from the four available nucleotides (4^3 = 64), and only 20 amino acids are used to make proteins, several different codons code for each individual amino acid.

In the case given above, the complementary RNA sequence produced for the two sequences (before and after mutation) would be:

UGA-CUG-ACU-GUC (before mutation)
UGA-CUG-ACU-GUU (after mutation)

It turns out that GUC and GUU (the last triplets in these sequences) are both codons that code for the same amino acid (Valine), illustrating how a silent mutation can occur as well, where a silent mutation is one that produces no change in the amino acids or subsequent proteins that the sequence codes for (and thus no functional change in the organism at all).  The very existence of silent mutations shows that mutations don’t necessarily result in a loss or change of information at all.  Had the two strands actually coded for different proteins after the initial mutation, the second mutation would have reversed the change anyway, re-creating the original information that was lost.  So this demonstration by itself already refutes the claim that DNA can only lose information over time, or that mutations necessarily lead to a loss of information.  All one needs are random mutations, and there will always be a chance that some information is lost and then re-created.  Furthermore, if we had started with a strand whose last triplet didn’t code for any amino acid at all, and the random mutation changed it such that it did code for an amino acid (such as Valine), this would be an increase in information (since a new amino acid was expressed that was previously absent), although that depends on how we define information (more on that in a minute).
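
To make the degeneracy point explicit, here’s the worked example in code (the 12-base sequence above isn’t a real gene, and the table below is just the handful of standard genetic code entries needed for this example):

```python
# Reproducing the worked example above.  The codon table is a small subset
# of the standard genetic code (only the entries needed here).
CODON_TABLE = {
    "GUU": "Val", "GUC": "Val", "GUA": "Val", "GUG": "Val",  # Valine is 4-fold degenerate
    "UGA": "STOP", "CUG": "Leu", "ACU": "Thr",
}

def translate(rna):
    """Split an RNA string into triplets and look each one up."""
    return [CODON_TABLE.get(rna[i:i + 3], "?") for i in range(0, len(rna), 3)]

before = "UGACUGACUGUC"   # complementary RNA before the mutation
after  = "UGACUGACUGUU"   # after the G -> A mutation in the DNA strand
print(translate(before))  # ['STOP', 'Leu', 'Thr', 'Val']
print(translate(after))   # ['STOP', 'Leu', 'Thr', 'Val'] -- identical: a silent mutation
```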

Now we could ask: is the mutation valuable, that is, conducive to the survival of the organism?  That depends entirely on the internal/external environment of that organism.  If we changed the diet of the organism or the other conditions in which it lived, we could arrive at the opposite conclusion, which goes to show that among the mutations that aren’t neutral (most mutations are neutral), those that are harmful or beneficial are often so because of the specific internal/external environment under consideration.  If an organism is able to digest lactose exclusively and it undergoes a mutation that provides a novel ability to digest sucrose at the expense of digesting lactose a little less effectively than before, this would be a harmful mutation in an environment with lactose as the only available sugar.  If, however, the organism were in an environment with more sucrose than lactose available, then the mutation would obviously be beneficial, for now the organism could exploit the most abundant food source.  This would likely lead to that mutation being naturally selected for, increasing its frequency in the gene pool of the organism’s local population.

Another thing that is often glossed over in Intelligent Design (ID) claims about genetic information being lost is that they first have to define exactly what information is before presenting the rest of the argument.  Whether information is gained or lost requires knowing how to measure information in the first place.  This is where other problems begin to surface with ID claims like these, because they tend to leave this definition poorly specified, ambiguous, or conveniently malleable to serve the interests of the argument.  What we need is a clear and consistent definition of information; then we need to check that the particular definition given is actually applicable to biological systems; and then we can check whether the claim is true.  I have yet to see this demonstrated successfully.  I was able to avoid this problem in my example above because, no matter how information is defined, it was shown that two mutations can lead back to the original nucleotide sequence (whatever amount of genetic “information” that may have been).  If the information had been lost, it was recreated, and if it wasn’t technically lost at all during the mutation, then that shows that not all mutations lead to a loss of information.

I would argue that a fairly useful and consistent way to define information, as applied to the evolving genetics of biological organisms, is as any positive correlation between the functionality that the genetic sequences code for and the attributes of the environment the organism inhabits.  This is useful because it represents the relationship between the genes and the environment, and it fits in line with the most well-established models in evolutionary biology, including the fundamental concept of natural selection leading to favored genotypes.

If an organism has a genetic sequence enabling it to digest lactose (as per my previous example), and it is within an environment that has a supply of lactose available, then whatever genes are responsible for that functionality are effectively a form of information that describes or represents real aspects of the organism’s environment (sources of energy, chemical composition, etc.).  The more genes that do this, that is, the more complex and specific the correlation, the more information there is in the organism’s genome.  So, for example, if the aforementioned mutation gives the organism a novel ability to digest sucrose in addition to lactose, and the environment contains both lactose and sucrose, then the genome now stores more environmental information, because the correlation between genome and environment has increased.  And if the proportion of lactose versus sucrose that the organism digests most efficiently evolves to approach the actual proportion of sugars in its environment (e.g. 30% lactose, 70% sucrose), then once again the amount of environmental information contained within the genome increases, due to the increase in specificity.
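
As a toy illustration of this definition (the scoring rule below, a simple histogram overlap, is my own made-up stand-in for the proposed gene/environment correlation, not an established biological measure):

```python
# A made-up "environmental information" score: overlap between a genome's
# digestive allocation and the environment's actual sugar proportions.
# 1.0 = perfectly matched, 0.0 = no correlation.  Illustrative only.

def environmental_information(genome, environment):
    resources = set(genome) | set(environment)
    return sum(min(genome.get(r, 0.0), environment.get(r, 0.0)) for r in resources)

environment  = {"lactose": 0.30, "sucrose": 0.70}
lactose_only = {"lactose": 1.00}                    # pre-mutation specialist
generalist   = {"lactose": 0.50, "sucrose": 0.50}   # novel sucrose ability
optimized    = {"lactose": 0.30, "sucrose": 0.70}   # proportions match the niche

for name, genome in [("lactose-only", lactose_only), ("generalist", generalist),
                     ("optimized", optimized)]:
    print(f"{name:12s} information = {environmental_information(genome, environment):.2f}")
# lactose-only 0.30 < generalist 0.80 < optimized 1.00, mirroring the
# increasing gene/environment correlation described above.
```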

Defining information in this way allows us to measure how well adapted a particular organism is (even one trait or attribute at a time) to its current environment, as well as to its past environments (based on what the convergent evidence suggests), and it also provides at least one way to measure how genetically complex the organism is.

So not only are the ID claims about genetic information easily refuted with the inherent nature of random mutations and natural selection, but we can also see that the claims are further refuted once we define genetic information such that it encompasses the fundamental relationship between genes and the environment they evolve in.

The Origin and Evolution of Life: Part II


Even though life appears to be thermodynamically favorable in terms of the second law (as explained in part one of this post), very important questions remain unanswered regarding the origin of life, including which mechanisms or platforms it could have used to get itself going initially.  This can be summarized by a “which came first, the chicken or the egg?” dilemma, where biologists have wondered whether metabolism came first or whether it was instead self-replicating molecules like RNA.

On the one hand, some have argued that since metabolism depends on proteins, enzymes, and the cell membrane itself, it would require either RNA or DNA to code for those proteins, implying that RNA or DNA would have to originate before metabolism could begin.  On the other hand, even the generation and replication of RNA or DNA requires a catalytic substrate of some kind, usually provided by proteins along with metabolic driving forces to accomplish those polymerization reactions, which would seem to imply that metabolism along with some enzymes would be needed to drive the polymerization of RNA or DNA.  So biologists were left with quite a conundrum.  This was partially resolved several decades ago, when it was realized that RNA can not only store genetic information just like DNA, but can also catalyze chemical reactions just as a protein enzyme can.  Thus, it is feasible that RNA could act as both an information storage molecule and an enzyme.  While this helps solve the problem once RNA began to self-replicate and evolve over time, the problem remains of how the first molecules of RNA formed, since it seems that some kind of non-RNA metabolic catalyst would be needed to drive the initial polymerization.  Which brings us back to needing some kind of catalytic metabolism to drive these initial reactions.

These RNA polymerization reactions may have spontaneously formed on their own (or evolved from earlier self-replicating molecules that predated RNA), but current models of the prebiotic earth of roughly four billion years ago suggest that there would have been too many destructive chemical reactions, which would have suppressed the accumulation of any RNA and likely of other self-replicating molecules as well.  What seems to be needed, then, is some kind of catalyst that could create these molecules quickly enough for them to accumulate in spite of any destructive reactions present, and/or some kind of physical barrier (like a cell wall) that protects the RNA or other self-replicating polymers from those destructive processes.

One possible solution to this puzzle, which has been developing over the last several years, involves alkaline hydrothermal vents.  We didn’t even know that these kinds of vents existed until the year 2000, when they were discovered on a National Science Foundation expedition in the mid-Atlantic.  A few years later they were studied more closely to see what kinds of chemistries were involved.  Unlike the more well-known “black smoker” vents (which were discovered in the 1970s), these alkaline hydrothermal vents have several properties that would have been hospitable to the emergence of life during the Hadean eon (between 4.6 and 4 billion years ago).

The ocean water during the Hadean eon would have been much more acidic due to higher concentrations of carbon dioxide (forming carbonic acid), and this acidic ocean water would have mixed with the hydrogen-rich alkaline water found within the vents, forming a natural proton gradient within the naturally formed pores of these rocks.  Also, electron transfer would likely have occurred when the hydrogen- and methane-rich vent fluid contacted the carbon dioxide-rich ocean water, generating an electrical gradient.  This is already very intriguing, because all living cells ultimately derive their metabolic driving forces from proton gradients, or more generally from the flow of some kind of positive charge carrier and/or electrons.  Since the rock found in these vents undergoes a process called serpentinization, which spreads the rock apart into various small channels and pockets, many different kinds of pores form in the rocks, and some of them would have been very thin-walled membranes separating the acidic ocean water from the alkaline, hydrogen-rich vent fluid.  This would have provided the kind of semi-permeable barrier that modern cells have and that we expect the earliest proto-cells to have had, and it would have supplied the necessary source of energy to power various chemical reactions.

Additionally, these vents would have provided a source of minerals (namely green rust and molybdenum) which likely would have behaved as enzymes, catalyzing reactions as various chemicals came into contact with them.  The green rust could have allowed the proton gradient to be used to generate phosphate-containing molecules that stored the energy produced from the gradient, similar to how all living systems that we know of store their energy in ATP (adenosine triphosphate).  The molybdenum, on the other hand, would have assisted in electron transfer through those membranes.
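
To get a feel for the energy on offer, here’s a back-of-the-envelope calculation (the relation is the standard textbook one for a chemical proton gradient; the pH values are rough assumptions for acidic Hadean ocean water versus alkaline vent fluid, and the electrical component of the gradient is ignored):

```python
# Free energy per mole of protons crossing a pH gradient:
# dG = 2.303 * R * T * delta_pH  (membrane electrical potential ignored).
R = 8.314        # gas constant, J/(mol*K)
T = 298.0        # ~25 C, for simplicity

ocean_pH, vent_pH = 5.5, 10.0   # assumed acidic ocean vs. alkaline vent fluid
delta_pH = vent_pH - ocean_pH

dG = 2.303 * R * T * delta_pH
print(f"~{dG / 1000:.1f} kJ per mole of protons")   # ~25.7 kJ/mol
# For comparison, ATP hydrolysis yields roughly 30-50 kJ/mol under cellular
# conditions, so a gradient like this is in the right energetic ballpark.
```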

So this theory provides a very plausible way for catalytic metabolism as well as proto-cellular membrane formation to have resulted from natural geological processes.  These proto-cells would then likely have begun concentrating simple organic molecules formed from the reaction of CO2 and H2 with all the enzyme-like minerals that were present.  These molecules could then react with one another to polymerize and form larger and more complex molecules, eventually including nucleotides and amino acids.  One promising clue that supports this theory is the fact that every living system on earth is known to share a common metabolic pathway, the citric acid cycle (or Krebs cycle), which runs in the forward direction in aerobic organisms and in the reverse direction in anaerobic ones.  Since this cycle consists of only 11 molecules, and since all biological components and molecules that we know of in any species are built from some number or combination of these 11 fundamental building blocks, scientists are trying to test (among other things) whether mimicking these alkaline hydrothermal vent conditions, along with the acidic ocean water that would have been present in the Hadean eon, will precipitate some or all of these molecules.  If it does, it will show that this theory is more than plausible as an account of the origin of life.

Once these basic organic molecules were generated, proteins would eventually have been able to form, some of which could have made their way to the membrane surface of the pores and acted as pumps directing the natural proton gradient to do useful work.  Once those proteins evolved further, it would have become possible and advantageous for the membranes to become less permeable, so that the gradient could be focused on the pump channels in the membranes of these proto-cells.  The membrane could then have begun to change into one made from lipids produced by the metabolic reactions, and we already know that lipids readily form micelles, small closed spherical structures, when they aggregate in aqueous conditions.  As this occurred, the proto-cells would no longer have been trapped in the porous rock, but would eventually have been able to slowly migrate away from the vents altogether, ultimately forming the phospholipid bilayer cell membranes that we see in modern cells.  Once this got started, self-replicating molecules and the rest of the evolution of the cell would have undergone natural selection as per the Darwinian evolution that most of us are familiar with.

As per the earlier discussion regarding life serving as entropy engines and energy dissipation channels, this self-replication would have been favored thermodynamically as well, because replicating those entropy engines and energy dissipation channels only makes the overall system more effective at dissipating energy.  Thus we can tie this all together: natural geological processes would have allowed the required metabolism to form, powering organic molecular synthesis and polymerization, with all of these processes serving to increase entropy and maximize energy dissipation.  All that was needed to initiate this was a planet with common minerals, water, and CO2; natural geological processes could do the rest of the work.  Such planets actually seem to be fairly common in our galaxy, with estimates ranging in the billions, each potentially harboring life (or where it may be just a matter of time before life initiates and evolves, if it hasn’t already).  While there is still a lot of work to be done to confirm the validity of these models and to find ways of testing them rigorously, we are getting relatively close to solving the puzzle of how life originated, why it is the way it is, and how we can better search for it in other parts of the universe.

A Scientific Perspective of the Arts


Science and the arts have long been regarded as mutually exclusive domains, where many see artistic expression as something that science can’t explain or reduce in any way, or as something that just shouldn’t be explored by any kind of scientific inquiry.  To put it another way, many people have thought it impossible for there ever to be any kind of a “science of the arts.”  The way I see it, science isn’t something that can be excluded from any domain at all, because we apply science in a very general way every time we learn or conceive of new ideas, experiment with them, and observe the results to determine whether we should modify our beliefs based on those experiences.  Whenever we pose a question about anything we experience, in the attempt to learn something new and gain a better understanding of those experiences, a scientific approach (based on reason and the senses) is the only demonstrably reliable way we’ve ever been able to arrive at any kind of meaningful answer.  The arts are no exception to this; in fact, many questions that have been asked about the arts and aesthetics in general have been answered not only by an application of the general scientific reasoning we use every day, but also within many specific, well-established branches of science.

Technology & The Scientific Method

It seems to me that the sciences and the various rewards we’ve reaped from them have influenced art in a number of ways and even facilitated new variations of artistic expression.  For example, science has been applied to create the very technologies used in producing art.  The various technologies created through the application of science have been used to produce new sounds (and new combinations thereof), new colors (and new color gradients), new shapes, and various other novel visual effects.  We’ve even used them to produce new tastes and smells (in the culinary arts, for example).  They’ve also been used to create entirely new media through which art is exemplified.  So in a large number of ways, art has been dependent on science in some way or another, even if only by applying the scientific method itself: the artist hypothesizes a way to express something, perhaps through a new medium or technique, experiments with that medium or technique to see if it is satisfactory, and then modifies the hypothesis as needed until the desired result is obtained (whether through simple trial and error or what-have-you).

Evolutionary Factors Influencing Aesthetic Preferences

Then we have the questions pertaining to whether aesthetic preferences are solely subjective and individualistic, or whether they are also objective in some ways.  Some of these questions have in fact been explored within the fields of evolutionary biology and psychology (and within psychology in general), where it is well known that humans find certain types of perceptions pleasurable, such as environments and objects that are conducive to our survival.  For example, the majority of people enjoy visually perceiving an abundance of food, fresh water and lush vegetation, healthy social relationships (including sex), and various emotions.  There are also various sounds, smells, tastes, and even tactile sensations that we’ve evolved to find pleasurable, such as the sound of laughter, flowing water, or rain, the taste of salt, fat, and sugar, the smell of various foods and plants, or the tactile sensation of sexual stimulation (to give but a few examples).  So it’s not surprising that many forms of art can appeal to the majority of people by employing these kinds of objects and environments, especially in cases where these sources of pleasurable sensations are artificially amplified into supernormal stimuli, producing levels of pleasure not previously attainable through the natural environment that our senses evolved within.

Additionally, there are certain emotions that we’ve evolved to express as well as to understand, simply because they increase our chances of survival within our evolutionary niche, and artistic representations of these universal human emotions will likewise play a substantial role in our aesthetic preferences.  Even the evolved traits of empathy and sympathy, which are quite advantageous to a social species such as our own (since they reinforce cooperation and reciprocal altruism, among other benefits), are engaged by those who perceive and appreciate these artistic expressions.

Another possible evolutionary component related to our appreciation of art has to do with sexual selection.  Oftentimes, particular forms of art are appreciated not only because of the emotions they evoke in the person perceiving them, but also because they include clever uses of metaphor, allegory, poetry, and other components that demonstrate significant levels of intelligence or brilliance in the artist that produced them.  In terms of our evolutionary history, having these kinds of skills and displays of intelligence would be attractive to prospective sexual mates for a number of reasons, including the fact that they demonstrate a surplus of mental capacity for solving problems far beyond those encountered day to day.  So this can provide a rather unique way of demonstrating particular aspects of one’s fitness to survive, as well as one’s ability to protect any future offspring.

Artistic expression (like other displays of intelligence and surplus mental capacity) can be seen as analogous to the male peacock’s large and vibrant tail.  Even though this type of tail increases the peacock’s chances of being caught by a predator, if he has survived to reproductive age and beyond, it shows the females that the male has very high fitness despite the odds stacked against him.  It also shows that the male possesses a surplus of resources from his food intake that is continually donated to maintaining that tail.  Beyond this, a higher degree of symmetry in the tail (the visual patterns within each feather, the morphology of each feather, and the uniformity of the feather set as a whole) indicates fewer mutations in his genome, and thus better genes for any future offspring.  Because of all these factors, females have evolved to find these male attributes attractive.

Similarly, for human beings (both male and female), an intelligent brain able to produce brilliant expressions of art (among other feats of intelligence) signals that the individual’s genome is likely to carry fewer mutations.  This is especially apparent once we realize that the genes pertaining to our brain’s development and function account for roughly half of our total genome.  So if someone is intelligent, then since their highly functional brain depended on having few mutations in the brain-related portion of their genome, the rest of their genome is also far less likely to carry harmful mutations (and thus fewer mutations are passed on to future offspring).  Art aside, this kind of sexual selection is actually one prominent theory within evolutionary biology for why our brains grew as quickly, and as large, as they did.  Quite simply, if larger brains were something that both males and females found sexually attractive (through the feats of intelligence they could produce), they would be sexually selected for, leading to higher survival rates for offspring and a runaway effect of unprecedented brain growth.  These aesthetic preferences would then likely carry over to displays of artistic ability generally, no longer pertaining exclusively to the search for prospective sexual mates, but also to simply enjoying the feats of intelligence themselves, regardless of the source.  So there are many interesting facets pertaining to the evolutionary factors that likely influenced the origin of artistic expression (or at least the origin of our mental capacity for it).
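
The “runaway effect” mentioned above can be caricatured with a tiny coupled feedback loop (loosely in the spirit of Fisherian runaway models of sexual selection; the coefficients are arbitrary assumptions of mine, not a calibrated model of brain evolution):

```python
# A toy sketch of runaway sexual selection: a displayed trait and the mating
# preference for it reinforce each other.  Coefficients are arbitrary.
trait = 1.0        # mean "displayed intelligence" in the population
preference = 1.0   # mean strength of the mating preference for that trait

for generation in range(20):
    trait += 0.1 * preference     # preference drives selection on the trait
    preference += 0.05 * trait    # trait and preference are genetically
                                  # correlated, so preference gets dragged along

print(f"trait grew from 1.0 to {trait:.1f} in 20 generations (accelerating feedback)")
```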

Neuroscience & The Arts

One final aspect I’d like to discuss, pertaining to the arts within the context of the sciences, lies in the realm of neuroscience.  As neuroscientists progress in mapping the brain’s structure and activity, they are becoming better able to determine what kinds of neurological conditions are correlated with various aspects of our conscious experience, our personality, and our behavior in general.  As for how this relates to the arts, we should eventually be able to determine why we have the aesthetic preferences we do, whether they are based on various neurological predispositions, the emotional tagging of past experiences via the amygdala (and how the memory of those emotionally tagged experiences changes over time), possible differences in individual sensitivities to particular stimuli, etc.

Once we reach this level of understanding of the brain itself, combine it with the conjoined efforts of other scientific disciplines such as anthropology, archaeology, evolutionary biology and psychology, and collaborate with experts in the arts and humanities themselves, we should be able to answer a plethora of questions relating to the origin of art; how and why it has evolved over time (and how it will likely continue to evolve, given that our brains and our culture continually evolve in parallel); how and why the arts affect us as they do; etc.  With this kind of knowledge developing in these fields, we may even one day see artists producing art by utilizing this knowledge in very specific and articulate ways, in order to produce expressions that are, by design, the most aesthetically pleasing, the most intellectually stimulating, and the most emotionally powerful we’ve ever experienced.  I think that by putting all of this knowledge together, we would effectively have a true science of the arts.

The arts have no doubt been a fundamental facet of the human condition, and I’m excited to see us beginning to learn the answers to these truly remarkable questions.  I’m hoping that the arts and the sciences can better collaborate with one another, rather than remain relatively alienated from one another, so that we can maximize the knowledge we gain in order to answer these big questions more effectively.  We may begin to see some truly remarkable changes in how the arts are performed and produced based on this knowledge, and this should only enhance the pleasure and enjoyment that they already bring to us.

Artificial Intelligence: A New Perspective on a Perceived Threat


There is a growing fear in many people of the future capabilities of artificial intelligence (AI), especially as the intelligence of these computing systems begins to approach that of human beings.  Since it is likely that AI will eventually surpass the intelligence of humans, some wonder if these advancements will be the beginning of the end of us.  Stephen Hawking, the eminent British physicist, was recently quoted by the BBC as saying “The development of full artificial intelligence could spell the end of the human race.”  BBC technology correspondent Rory Cellan-Jones said in a recent article “Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.”  Hawking then said “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Hawking isn’t alone in this fear, and clearly the fear isn’t ill-founded.  It doesn’t take a rocket scientist to realize that human intelligence has allowed us to overcome just about every environmental barrier we’ve come across, driving us to the top of the food chain.  We’ve all seen the benefits of our high intelligence as a species, but we’ve also seen what can happen when that intelligence is imperfect, operates on incomplete or fallacious information, or lacks an adequate capability of accurately determining the long-term consequences of our actions.  Because we have such a remarkable ability to manipulate our environment, that manipulation can be extremely beneficial or extremely harmful, as we’ve seen with the numerous species we’ve threatened on this planet (some having gone extinct).  We’ve even threatened many fellow human beings in the process, whether intentionally or not.  Our intelligence, combined with some elements of short-sightedness, selfishness, and aggression, has led to some pretty abhorrent products throughout human history, from mass enslavement spanning back thousands of years to modern forms of extermination weaponry (e.g. bio-warfare and nuclear bombs).  If AI reaches and eventually surpasses our level of intelligence, it is reasonable to consider the possibility that we may find ourselves on a lower rung of the food chain (so to speak), potentially becoming enslaved or exterminated by this advanced intelligence.

AI: Friend or Foe?

So what exactly prompted Stephen Hawking to make these claims?  As the BBC article mentions, “His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI…The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.  Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.”

Reading this article suggests another possibility or perspective that I don't think many people are considering with regard to AI technology.  What if AI simply replaces us gradually, by actually becoming the new "us"?  That is, as we further progress in Cyborg (i.e. cybernetic organism) technologies, using advancements similar to Stephen Hawking's communication upgrade, we are ultimately becoming amalgams of biological and synthetic machines anyway.  Even the technology that we currently operate through an external peripheral interface (like smartphones and all other computers) will likely become integrated into our bodies internally.  Google Glass, voice recognition, and other technologies like those used by Hawking are certainly taking us in that direction.  It's not difficult to imagine one day being able to think about a particular question or keyword, having an integrated Bluetooth implant in our brain recognize the mental/physiological command cue, and successfully retrieving the desired information wirelessly from an online cloud or internet database of some form.  Going further still, we will likely one day be able to take sensory information that enters the neuronal network of our brain and, once again, send it wirelessly to supercomputers stored elsewhere that are able to process the information with billions of pattern recognition modules.  The 300 million or so pattern recognition modules currently in our brain's neocortex would be dwarfed by this new peripheral-free, wirelessly accessible technology.

For those who aren't familiar with the function of the brain's neocortex: we use its 300 million or so pattern recognition modules to notice patterns in the environment around us (and meta-patterns of neuronal activity within the brain), allowing us to recognize and memorize sensory data, and to think.  Ultimately, we use this pattern recognition to accomplish goals, solve problems, and gain knowledge from past experience.  In short, these pattern recognition modules are the primary physiological means of our intelligence.  Thus, increasing the number of pattern recognition modules (as well as the number of hierarchies between different sets of them) should only increase our intelligence.  Regardless of whether we integrate computer chips in our brain to do some or all of this processing locally, or use a wireless means of communication to an external supercomputer farm, we will likely continue to integrate our biology with synthetic analogs to increase our capabilities.
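
To make the idea of hierarchical pattern recognition more concrete, here is a deliberately toy Python sketch (the class, the names, and the patterns are all hypothetical illustrations, not a model of actual cortical circuitry): low-level modules fire on patterns in the raw input, and a higher-level module fires on a pattern of the lower-level modules' outputs, i.e., a pattern of patterns.

# Toy sketch of hierarchical pattern recognition (illustrative only).
class PatternRecognizer:
    def __init__(self, name, pattern):
        self.name = name          # label for the pattern this module detects
        self.pattern = pattern    # the sub-sequence this module fires on

    def fires(self, sequence):
        # True if this module's pattern occurs anywhere in the input sequence
        n = len(self.pattern)
        return any(sequence[i:i + n] == self.pattern
                   for i in range(len(sequence) - n + 1))

# Level 1: recognizers over raw input (letters)
low_level = [
    PatternRecognizer("A-then-B", ["a", "b"]),
    PatternRecognizer("C-then-D", ["c", "d"]),
]

# Level 2: a recognizer over the outputs of level 1
high_level = PatternRecognizer("AB-then-CD", ["A-then-B", "C-then-D"])

def recognize(sequence):
    # Collect the names of the low-level modules that fired (in the fixed
    # order the modules are listed, a simplification of real timing)
    fired = [m.name for m in low_level if m.fires(sequence)]
    return high_level.fires(fired)

print(recognize(list("xxabyycdzz")))  # True: both sub-patterns are present

The point of the hierarchy is that each level only has to process the compressed outputs of the level below it rather than the raw input, which is one reason adding modules and hierarchy levels is thought to scale up intelligence-like capabilities.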

When we realize that a higher intelligence allows us to better predict the consequences of our actions, we can see that our increasing integration with AI will likely have incredible survival benefits.  This integration will also catalyze further technologies that could never have been accomplished with biological brains alone, because we simply aren't naturally intelligent enough to think beyond a certain level of complexity.  As Hawking said regarding AI that could surpass our intelligence, "It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."  Yes, but if that AI becomes integrated into us, then really it is humans finding a way to circumvent slow biological evolution with a non-biological substrate that supersedes it.

At this point I think it is relevant to mention something else I've written about previously: the advancements being made in genetic engineering are taking us into our last and grandest evolutionary leap, a "conscious evolution," whereby we circumvent our own slow biological evolution through intentionally engineered design.  As we gain more knowledge in the field of genetic engineering (combined with the increasing simulation and computing power afforded by AI), we will likely be able to catalyze our own biological evolution such that we evolve quickly alongside our increasing Cyborg integrations with AI.  So we will likely see genetic engineering capabilities developing in close parallel with AI advancements, with each field substantially contributing to the other and ultimately leading to our transhumanism.

Final Thoughts

It seems clear that advancements in AI are providing us with more tools to accomplish ever-more complex goals as a species.  As we continue to integrate AI into ourselves, what we now call "human" is simply going to change as we change.  This would happen regardless, as human biological evolution continues its course into another speciation event, similar to the one that led to humans in the first place.  In fact, if we wanted to maintain the way we are right now as a species, biologically speaking, it would actually require us to use genetic engineering to do so, because genetic differentiation mechanisms (e.g. imperfect DNA replication and mutations) are inherent in our biology.  Thus, those who argue against certain technologies out of a desire to maintain humanity and human attributes must realize that the very technologies they despise are in fact their only hope for doing so.  More importantly, the goal of maintaining what humans currently are goes against our natural evolution, and we should embrace change, even if we embrace it with caution.

If AI continues to become further integrated into ourselves, forming a Cyborg amalgam of some kind, we may one day choose to eliminate the biological substrate of that amalgam entirely, if it is both possible and advantageous to do so.  Even if we maintain some of our biology and merely hybridize with AI, Hawking was right to point out that "The development of full artificial intelligence could spell the end of the human race."  Yet rather than a doomsday scenario like the one we saw in the movie The Terminator, with humans and machines at war with one another, the end of the human race may simply mean that we end up changing into a different species, just as we've done throughout our evolutionary history.  Only this time, it will be a transition from a purely biological evolution to a cybernetic hybrid variation.  Furthermore, if it happens, it will be a transition that will likely increase our intelligence (and many other capabilities) to unfathomable levels, giving us an unprecedented ability to act based on more knowledge of the consequences of our actions as we move forward.  We should be cautious indeed, but we should also embrace our ongoing evolution and eventual transhumanism.

Objective Morality & Arguments For God

with 2 comments

Morality is certainly an important facet of the human condition, and as a philosophical topic of such high regard, it clearly deserves critical reflection and thorough analysis.  When people think of ethics, moral values, and moral duties, religion often enters the discussion, specifically in terms of the widely held (though certainly not ubiquitous) belief that religions provide some form of objective foundation for morals and ethics.  The primary concern here is determining whether our morals are ontologically objective in some way and, even if they are, whether it is still accurate to describe morality as a kind of emergent human construct that is malleable and produced by naturalistic socio-biological processes.

One of the most common theistic arguments, often referred to as the Divine Command Theory, states that the existence of a God (or many gods, for that matter) necessarily provides an ontologically objective foundation for morals and ethics.  Coinciding with this belief are the necessary supporting beliefs that God exists and that this God is inherently "good," for if either of these assumptions failed, the theistic foundation for morals (i.e. what is deemed to be "good") would be unjustified.  The assumptions that God exists and that this God is inherently "good" rest upon yet further assumptions, although there is plenty of religious and philosophical contention regarding which assumptions are necessary, let alone which are valid.

Let's examine some of the arguments that have been used to try to prove the existence of God, as well as some arguments used to show that an existent God is necessarily good.  After these arguments are examined, I will conclude this post with a brief look at moral objectivity, including the most common motivations underlying its proposed existence, the implications of believing in theologically grounded objective morals, and finally, some thoughts about our possible moral future.

Cosmological Argument

The Cosmological Argument for God's existence asserts that whatever begins to exist has a cause, and thus if the universe began to exist, it too must have had a cause.  It is then proposed that this initial cause is something transcendent of physical reality, something supernatural, or what many would refer to as a God.  We can see that this argument relies most heavily on the initial assumption of causality.  While causality certainly appears to be an attribute of our universe, Hume was correct to point out the problem of induction: causality itself is not known to exist by a priori reasoning, but rather by a posteriori reasoning, otherwise known as induction.  Because of this, our assumption of causality is not logically necessary, and therefore not guaranteed to be true.

Clearly science relies on this assumption of causality, as well as on the efficacy of induction, but its predictive power only requires that causal relationships hold up most of the time; perhaps it would be better to say that science only requires that causal relationships hold up for the phenomena it wishes to describe.  It is not a requirement for performing science that everything be causally closed or operating under causal principles.  Even quantum mechanics has shown us apparently acausal properties, whereby atomic and subatomic particles exhibit seemingly random behavior with no local hidden variables found.  It may be the case that, ontologically speaking, the seemingly random quantum behavior is actually governed by causal processes (albeit with non-local hidden variables), but we've found no evidence for such processes.  So it seems unjustified to assume that causality necessarily holds, not only because this assumption has been derived from logically uncertain induction alone, but also because within science, specifically within quantum physics, we've actually observed what appear to be completely acausal processes.  As such, it is both possible and plausible that the universe arose from an acausal process as well, a possibility at least consistent with the quantum mechanical principles that underlie it.

To provide a more satisfying explanation for how something could come from nothing (as in some acausal process), one could look to abstract concepts within mathematics for an analogy.  For example, if 0 = (-1) + (+1), and "0" is analogous to "nothing," then couldn't "nothing" (i.e. "0") be considered equivalent to a collection of complementary "somethings" (e.g. "-1" and "+1")?  That is, couldn't a "0" state have existed prior to the Big Bang, producing two universes, say, "-1" and "+1"?  Clearly one could ask how or why the "0" state transformed into anything at all, but if the collection or sum of those things is equivalent to the "0" one started with, then perhaps the question of how or why is ill-posed to begin with; it would be analogous to asking how zero can spontaneously give rise to zero.  In any case, quantum mechanical principles certainly defy classical intuition, and so there's no reason to suppose that the origins of the universe should be any less counter-intuitive.  Additionally, it is entirely possible that our conceptions of "nothing" and "something" are not ontologically accurate or coherent with respect to cosmology and quantum physics, even if we think of those concepts as trivial and obvious in other domains of knowledge.
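
To make the arithmetic of this analogy explicit (a toy formalization, not a physical derivation), the "nothing equals balanced somethings" idea can be written as:

$$0 \;=\; (-1) + (+1) \;=\; \sum_{i} \big[ (+x_i) + (-x_i) \big] \quad \text{for any quantities } x_i.$$

Something loosely resembling this appears in the "zero-energy universe" conjecture in cosmology, in which the positive energy of matter is thought to be balanced by the negative potential energy of gravity, so that the universe's total energy may sum to zero.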

Even if the universe was internally causal within its boundaries and thus with every process inside that universe, would that imply that the universe as a whole, from an external perspective, would be bound by the same causal processes?  To give an analogy, imagine that the universe is like a fishbowl, and the outer boundary of the fishbowl is completely opaque and impenetrable.  To all inhabitants inside the fishbowl (e.g. some fish swimming in water), there isn't anything to suppose except for what exists within the boundary, i.e., the water, the fish, and the laws of physics that govern the motion and physical processes therein (e.g. buoyant or freely floating objects and a certain amount of frictional drag between the fish and the water).  Now it could be that this fishbowl of a universe is itself contained within a much larger environment (e.g. a multi-verse or some meta-space) with physical laws that don't operate like those within the fishbowl.  For example, the meta-space could be completely dry, where the fishbowl of a universe isn't itself buoyant or floating in any way, and the universe (when considered as one object) doesn't experience any frictional drag between itself and the meta-space medium around it.  Due to the opaque surface of the fishbowl, the inhabitants are unaware that the fishbowl itself isn't floating, just as they are unaware of any of the other foreign physical laws or properties that lie outside of it.  In the same sense, we could be erroneously assuming that the universe itself is a part of some causal process, simply because everything within the universe appears to operate under causal processes.  Thus, it may be the case that the universe as a whole, from an external perspective that we have no access to, is not governed by the laws we see within the universe, be they the laws of time, space, causality, etc.

Even if the universe was caused by something, one can always ask: what caused the cause?  The proposition that a God exists provides no solution to this problem, for we'd then want to know who or what created that God, creating an infinite regress.  If one tries to solve the infinite regress by contending that a God has always existed, then we can simplify the explanation further by removing any God from it and simply positing that the universe has always existed.  Even if the Big Bang model within cosmology is correct in some sense, what if the universe has constantly undergone some kind of cycle whereby each Big Bang is preceded and eventually succeeded by a Big Crunch, ad infinitum?  Even if we face an epistemological limitation that prevents us from ever confirming such a model, for example, if the information of any previous universe is lost with the start of every new cycle, it is certainly a possible model, and one that no longer requires an even more complex entity, such as a God, to explain it.

Fine-Tuning Argument

It is often claimed by theists that the dimensionless physical constants in our universe appear to be finely tuned such that matter, let alone intelligent life, could exist.  Supposedly, if these physical constants were changed by even a small amount, life as we know it (including the evolution of consciousness) wouldn't be possible; therefore, the argument goes, the universe must have been finely tuned by an intelligent designer, or a God.  Furthermore, it is often argued that it has been finely tuned for the eventual evolution of conscious human beings.

One question that can be posed in response to this argument is whether the physical constants could be better than they currently are, such that the universe would be even more conducive to matter, life, and eventually intelligent life.  Some physicists have argued that the constants could indeed be considerably more life-friendly than they are, and we can also see that the universe is statistically inhospitable to life, a point supported by the fact that we have yet to find life anywhere else in it.  Statistically, it is still very likely that life exists in many other places throughout the universe, but it certainly doesn't exist in most places.  Changing the physical constants in just the right way could plausibly make life far more ubiquitous.  So it doesn't appear that the universe was finely tuned at all, at least not for any of the reasons that have been supposed.

There have also been other naturalistic theories presented as possible solutions to the fine-tuning argument, such as that of the Multi-verse, whereby ours is but one universe among an extremely large number of others (potentially infinite, although not necessarily so), each with slightly different physical constants.  In a way, we could say that a form of natural selection among universes occurs, where the appearance of a finely tuned universe is analogous to the apparent design in biological nature.  We now know that natural selection, along with some differentiation mechanism, is all that is necessary to produce the appearance of designed phenotypes.  The same could apply to universes, and by the anthropic principle, those universes with physical constants within a range conducive to life, and eventually intelligent life, would necessarily be the type of universe we find ourselves living in, such that we can even ask the question.  That is, some universes could be naturally selected to undergo the evolution of consciousness and eventually self-awareness.
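
The anthropic selection effect here is easy to demonstrate with a deliberately toy Monte Carlo simulation in Python (the numbers and the "life-permitting band" below are hypothetical, chosen purely for illustration): however rare life-permitting constants are across the ensemble of universes, every observer necessarily finds their own constant inside the permitting range.

# Toy anthropic-principle sketch: sample many "universes" with one random
# dimensionless constant; observers only arise within a narrow band.
import random

LIFE_BAND = (0.49, 0.51)   # assumed life-permitting range (illustrative only)

def sample_universe():
    return random.uniform(0.0, 1.0)   # stand-in for a physical constant

universes = [sample_universe() for _ in range(1_000_000)]
inhabited = [c for c in universes if LIFE_BAND[0] < c < LIFE_BAND[1]]

# Life-permitting universes are rare across the whole ensemble...
print(f"fraction of universes with observers: {len(inhabited)/len(universes):.4f}")
# ...yet from the inside, 100% of observers see a "finely tuned" constant:
print(all(LIFE_BAND[0] < c < LIFE_BAND[1] for c in inhabited))  # True

No tuning is needed for every observer to report apparent fine-tuning; conditioning on the existence of observers does the work by itself.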

There have been other theories presented to account for the appearance of a finely tuned universe such as a quantum superposition of initial conditions during the Big Bang, but they utilize the same basic principles of cosmic differentiation and natural selection, and so need not be mentioned further.  In any case, we can see that there are several possible naturalistic explanations for what appear to be finely tuned physical constants.

An even more important point worth mentioning is the possibility that every combination of physical constants could produce some form of consciousness completely unfathomable to us.  We have yet to solve the mind-body problem (if it is indeed solvable), and so without knowing what physical mechanism produces consciousness, are we justified in assuming which processes cannot produce consciousness?  Even if consciousness as we know it is limited to carbon-based biological organisms with brains, can we justifiably dismiss the possibility of completely different mechanisms and processes that lead to some form of self-regulating "life," "consciousness," or "awareness"?  Even a form of life or consciousness that does not involve brains, let alone atoms or molecules?  If this is the case, then all universes could have some form of "life" or "consciousness," even if they would never come close to falling within our limited definitions of such concepts.

“God is Good” & The Ontological Argument

For the existence of a God to provide an ontologically objective foundation for morals and ethics, it must be assumed that any God which exists is necessarily a good God.  So what exactly is the basis for this assumption?

Many have derived this assumption from versions of what is known as the Ontological Argument for God's existence.  This argument, believed to have been first asserted by St. Anselm of Canterbury in 1078 CE, asserts that God is, by definition, the greatest conceivable being.  If the greatest conceivable being existed only in the mind, that is, as a mental construct, then an even greater being could be conceived, namely one that also exists outside of the mind as an entity in reality; therefore, God exists in both the mind and in reality.  Furthermore, regarding the concept of God being good, some take this argument further and believe that the greatest conceivable being must also be good, since the most perfect God would, by definition, deserve to be worshipped and would only create or command that which is best.  So it follows from the Ontological Argument not only that God exists, but also that God must necessarily be good.

One obvious criticism of this argument is that merely conceiving of something doesn't make that conception exist in any sense other than as a mental construct.  Even if I can conceive of a perfect object, like a perfectly spherical planet, this doesn't mean it must exist.  Even if I limit my conceptions to a perfect God, what if I conceive of two perfect beings, with the assumption that two perfect beings are somehow better than one?  Does this mean that two perfect beings must necessarily exist?  How about an infinite number of perfect beings?  Isn't an infinite number of infinitely perfect beings the best conception of all?  If so, why isn't this conception necessarily existent in reality as well?  Such an assertion would amount to a proof of polytheism.  One could certainly argue over which conceptions are truly perfect or best, and thus which should produce something in reality, but regardless, one still hasn't shown how conceptions alone can lead to realities.  Notice also that the crux of St. Anselm's argument depends on one's definition of what God is, which leads me to what I believe is a much more important criticism of the Ontological Argument.

The primary criticism I have of such an argument, or any argument claiming particular attributes of a God for that matter, is the lack of justification for assuming that anyone could actually know anything about a God.  Are we to assume that any attributes of a God should necessarily be within the limits of human comprehension?  This assumption of such a potent human capacity for understanding sounds incredibly pretentious, egotistical, and entirely unsubstantiated.  As for the common assumptions about what God is, why would a God necessarily have to be different from, or independent of, the universe itself, as presumably required for an ontologically objective foundation for morality?  Pantheists, for example (who can be classified as atheists as far as I'm concerned), assume that the universe itself is God, and thus the universe needed no creator nor anything independent of itself.  Everything in the universe is considered a part of that God, and that's simply all there is to it.

If one takes a leap of faith and assumes that a presumed God not only exists, but is also independent of the universe in some way, aren't they even less justified in making claims about the attributes of this God?  Wouldn't it be reasonable to assume that they'd face an even larger epistemological barrier between themselves and an external, separate, and independent God?  Any claims about what a God would be like appear to rest on the unsubstantiated assumption that humans must necessarily have access to such knowledge, and to hold such a view, one would seemingly have to abandon all logic and reasoning.

Euthyphro Dilemma

One common challenge to the Divine Command Theory mentioned earlier is the Euthyphro dilemma: are actions good simply because a presumed God commands them, or does that God command particular actions because they are good independently of that God?  If the former, then whatever a God commands must be moral and good, even if humans or others see those commands as immoral.  If the latter, then morality is clearly not dependent on God, defeating the Divine Command Theory altogether, as well as the precept that God is omnipotent (since God in this case wouldn't ultimately have control over defining what is and isn't good).  So those who subscribe to the Divine Command Theory apparently also have to accept that all divinely commanded actions (no matter how immoral they may seem to us) are moral simply because a God commands them.  One should also consider that if a God were able to modify its commands over time (presumably possible for an omnipotent God), then any theological objective foundation for morals would be malleable and subject to change, thereby reducing, if not defeating, the pragmatic utility of that objective foundation.

There are many people who have no problem with such Divine Command Theory assumptions, including the many theists who accept and justify the purported acts of their God (or gods), despite the enormous number of people outside those particular religions who see many of those acts as heinous and morally reprehensible (e.g. divinely authorized war, murder, rape, genocide, slavery, racism, sexism, sexual-orientationism, etc.).  Another problem for the Divine Command Theory is that of contradictory divine commands: many different religions each claim to follow divine commands, despite the fact that the divine commands of one religion may differ from those of another.  Even if the Divine Command Theory were true, people don't agree on what the divine commands are, and there is no known method for confirming them; the theory is therefore pragmatically useless, as it fails to provide any way of knowing what these ontologically objective morals and ethics would be.  In other words, even if morals did have a theologically-based, ontologically objective foundation, we would face an epistemological barrier to ever confirming that objective status.

Argument from Morality for the Existence of God

Some believe in what is often referred to as the "Moral Argument for God" or the "Argument from Morality."  At least one variation asserts that because moral values exist in some sense, a God must necessarily exist, since nature on its own appears to be morally neutral, with no apparent reason or mechanism for producing moral values from purely physical or materialistic processes.  By accepting such an assertion, someone who wants to believe in an objective foundation for morals need only believe that morals exist, for this supposedly implies that God exists, and an existent God (if one subscribes to the common assumption that "God" must be good, as explained earlier) is presumed to provide an objective foundation for morals.

Well, what if morals are not actually separate from naturalistic mechanisms and explanations?  While nature may appear to be morally neutral, there is evidence to suggest that what we often call "morality" resulted (at least partially) from natural selection pressures ingraining into humans a tendency toward reciprocal altruism, among other innate behaviors that have been beneficial to the survival of our highly social species, or at least beneficial in the context of the environment we lived in prior to our cultural evolution into civilization.  For example, altruism, which can roughly be expressed by the Golden Rule (i.e. do to others what you would have them do to you), is a beneficial behavior, for it provides an impulse toward productive cooperation and reciprocal favors between individuals (see the sketch below).  Another example of innate morality is the innate aversion to incest, which also makes evolutionary sense: incestuous reproduction is more likely to produce birth defects, because harmful recessive genes inherited identically from both parents are expressed more often.
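
To illustrate why reciprocal altruism can be selected for, here is a minimal Python sketch of the iterated prisoner's dilemma (the payoff numbers are conventional illustrative values, and the whole setup is a toy model, not a claim about actual evolutionary dynamics): a reciprocating strategy, "tit for tat," does well against itself and loses very little to a pure defector, while pure defectors fare poorly against each other.

# Toy iterated prisoner's dilemma: reciprocity vs. unconditional defection.
# Payoffs (points ~ fitness): both cooperate = 3 each, both defect = 1 each,
# lone defector = 5, lone cooperator = 0.
PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("C", "D"): (0, 5), ("D", "C"): (5, 0)}

def tit_for_tat(history):
    # Cooperate first; afterwards copy the partner's previous move
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []   # each side's record of the *other's* moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): reciprocity pays
print(play(always_defect, always_defect))  # (100, 100): mutual defection is poor
print(play(tit_for_tat, always_defect))    # (99, 104): exploitation gains little

This mirrors Robert Axelrod's well-known computer tournaments, in which tit for tat, a simple formalization of reciprocity, outperformed far more elaborate strategies over repeated interactions; and repeated interaction is precisely the condition our highly social ancestors lived under.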

These innate tendencies, that is, what we innately feel to be good and bad behaviors, are what we often label "moral" and "immoral" behaviors, respectively.  It is certainly plausible that after our unconscious, pre-conscious, or primitively conscious ancestors evolved into self-aware, more complex conscious beings, able to culturally transmit information over generations and learn new behaviors, they also realized that their innate tendencies and feelings were basically fixed attributes of human nature that couldn't simply be unlearned or modified culturally.  Without any idea where these innate tendencies came from, lacking knowledge of evolutionary biology and psychology, humans likely concluded that moral values (or at least those that are innate) were supernaturally based or divinely ordained.  It is at least arguable that not all morals humans subscribe to are innate, as there also appears to be a malleable moral influence derived from the cultural transmission of certain memes, often aided by our intellectual ability to override certain instincts.  However, I think it would be more accurate to say that our most fundamental goals in life in terms of achieving personal satisfaction (through cultivating virtues and behaving with respect to the known consequences of our actions) constitute our fundamental morality, and I think this morality is indeed innate, based on evolutionary psychology, biology, etc.

Additionally, a large number of these culturally transmitted behaviors (which we often label as "morals") align with our innate moral tendencies anyway; for example, memes promoting racism may be supported by our natural tendency to conveniently lump people into groups and see outsiders as dangerous or threatening.  Or the opposite may occur, as when memes promoting racial equality are supported by our natural tendency toward racially-neutral reciprocal altruism.  Clearly what we tend to call "morals" are an amalgam of culturally transmitted ideas and innate predispositions, that is, they result from socio-biological or cultural-biological processes, even if there is an innate fundamental morality that serves as an objective foundation for those culturally constructed morals.

Moreover, since other animals (or at least most) do not seem to exhibit what we call moral behavior, it is likely that most humans saw it (and many still see it) as a unique property of humans alone, and thus as somehow existing independently of the rest of nature.  One response to this anthropocentric perspective is to note that other animals may just as easily be described as having their own morals, based on their own naturally selected innate behavioral tendencies, even if those morals are completely different from our own, and even if they are not as intelligently informed, given our more complex brains and self-awareness (most notably in the case of culturally transmitted morals).  Now, it may be that the mechanisms and explanations evolutionary biologists, psychologists, and sociologists have discovered for human morality, and the way we choose to define that morality naturalistically, are not something certain people want to accept.  However, that lack of acceptance or comfort doesn't make it any less true or any less plausible.  It seems that some people simply want morality to have a different kind of ontological status or some level of objectivity, so that they can find more solace in their convictions and support their anthropocentric presuppositions.

Objective Morality, Moral Growth, and our Moral Future

While the many arguments for God have been refuted or at least seriously challenged, it appears that the actual existence of God isn't nearly as important as people's belief in such a God, especially when it comes to concepts such as morality.  Sartre once quoted Dostoyevsky as saying, "If there is no God, then everything is permissible."  I think this quote illustrates quite eloquently why so many people feel compelled to argue that a God exists (among other reasons): many seem to feel that without the notion of an existing God, the supposed lack of an objective foundation for morality will lead people to do whatever they want, no longer adhering to truly "moral" behavior.  However, as we can clearly see, there are many atheists who behave quite morally relative to the Golden Rule, if we must indeed specify some moral frame of reference.  There are also plenty of people who believe in a God and yet behave in ways that are morally reprehensible relative to the same Golden Rule standard.  The key difference between the atheist and the theist, at least concerning moral objectivity, is that the atheist, by definition, doesn't believe their behavior has a theologically grounded objective ontological status to justify it, although the atheist may still believe in some type of moral objectivity (likely grounded in a science of morality, a view I actually agree with).  The theist, on the other hand, does believe in a theological basis for moral objectivity, so if either the atheist or the theist behaves in ways that you or I would find morally reprehensible, only the theist can claim a religious obligation to do so.

Regarding the concern for a foundation for morals, I think it is fair to say that the innate morality of human beings, that is, those morals that have been ingrained in us for evolutionary reasons (such as altruism), could be described as having a reliable foundation, even if not an ontologically objective one.  On top of this "naturally selected" foundation, we can build by first asking ourselves why we believe moral behavior is important in the first place.  If humans overwhelmingly agree that morality is important for promoting and maximizing the well-being of conscious creatures (with higher-level conscious creatures prioritized over those with less complex brains and lower-level consciousness), or agree with the complementary formulation, that morality is important for inhibiting and minimizing the suffering of conscious creatures, then humans at least have a moral axiom they can subscribe to.  This moral axiom, i.e., that moral behavior is that which maximizes the well-being of conscious creatures (as proposed by "Science of Morality" proponents such as Sam Harris), is indeed an axiom that one can further build upon, refine, and implement through epistemologically objective methods in science.  Even if this moral axiom doesn't provide an ontologically objective morality, it has a foundation grounded in human intuition, reason, and empirical data.  If one argues that this still isn't as good as a theologically grounded, ontologically objective morality, one must realize that the theological assumptions behind said moral objectivity have no empirical basis at all.  After all, even if a God does in fact exist, why exactly would that God necessarily provide an objective foundation for morals?  More importantly, as I mentioned earlier, there appears to be no epistemologically objective way to ascertain any ontologically objective morals, so it doesn't really matter anyway.

One can also see that the theist's position, in terms of which morals to follow, is supposedly fixed, although history has shown us that religions and their morals can change over time, either by modifying the scripture or basic tenets, or by modifying the interpretation of said scripture or tenets.  Even when such moral modifications take place within a religion or among its followers, the claim of moral objectivity (and an intentional resistance to changing those morals) is often, paradoxically, maintained.  The atheist's position on morals, on the other hand, is not inherently fixed; the atheist is at least potentially amenable to reason in modifying the morals they subscribe to, with the ability to culturally adapt to a society that increasingly abhors war, murder, rape, genocide, slavery, racism, sexism, sexual-orientationism, etc.  Whereas the typical theist cannot morally adapt to the culturally evolving world around them (at least not consciously or admittedly), even as more evidence and data are obtained that pertain to a better understanding of that world, the typical atheist has these opportunities for moral growth.

As I've mentioned in previous posts, human nature is malleable and will continue to change as our species continues to evolve.  As such, our innate predispositions regarding moral behavior will likely continue to change, as they have throughout our evolutionary history.  If we utilize "engineered selection" through the aid of genetic engineering, our moral malleability will be catalyzed, and these changes to human nature will occur incredibly quickly and with conscious foresight.  Theists are no exception to evolution; they will continue to evolve as well, and as a result their innate morality will also be subject to change.  Any changes to human nature will also likely affect which memes are culturally transmitted (including memes pertaining to morality), and thus morality will likely continue to be a dynamic amalgam of biological and cultural influences.  So despite the theistic fight for an objective foundation for morality, it appears that the complex interplay between evolution and culture that led to theism in the first place will continue to change, and the false idea that an ontologically objective foundation for morality exists will likely continue to dissipate.

History has shown us that reason, along with our innate drive for reciprocal altruism, is all we need in order to behave in ways that adhere to the Golden Rule (or to some other moral axiom that maximizes the well-being of conscious creatures).  Reason and altruism have also given us the ability to adapt our morals as we learn more about our species and the consequences of our actions.  These assets, combined with a genetically malleable human nature, will likely lead us to new moral heights over time.  In the meantime, we have reason and an innate drive for altruism to morally guide us.  It should be recognized that some religions which profess the existence of a God and an objective morality also abide by altruistic principles, but many do not (or do so inconsistently), and when they do, they are likely driven by our innate altruism anyway.  It takes belief in a God and its objective foundation for morality, however, to most effectively justify behaving in any way imaginable, often in ways that negate both reason and our instinctual drive for altruism, and often reinforced by the temptation of eternal reward and the threat of eternal damnation.  In any case, the belief in moral objectivity (or more specifically, moral absolutes), let alone theologically grounded moral objectivity or absolutism, appears to be a potentially dangerous one.