The Open Mind

Cogito Ergo Sum

Archive for the ‘Laws of Physics’ Category

Conscious Realism & The Interface Theory of Perception


A few months ago I was reading an interesting article in The Atlantic about Donald Hoffman’s Interface Theory of Perception.  As a person highly interested in consciousness studies, cognitive science, and the mind-body problem, I found the basic concepts of his theory quite fascinating.  What was most interesting to me was the counter-intuitive connection between evolution and perception that Hoffman has proposed.  Now it is certainly reasonable and intuitive to assume that evolutionary natural selection would favor perceptions that are closer to “the truth”, that is, closer to the objective reality that exists independent of our minds, simply because more accurate perceptions seem more likely to lead to survival than less accurate ones.  As an example, if I were to perceive lions as inert objects like trees, I would be more likely to be selected against (and eaten by a lion) than someone who perceives lions as mobile predators that could kill them.

While this is intuitive and reasonable to some degree, what Hoffman actually shows, using evolutionary game theory, is that among organisms of comparable complexity, those with perceptions that are closer to reality will never be selected for nearly as much as those with perceptions that are tuned to fitness instead.  What’s more, truth in this case will be driven to extinction when it is up against perceptual models that are tuned to fitness.  That is to say, evolution will select for organisms that perceive the world in a way that is less accurate (in terms of the underlying reality) as long as the perception is tuned for survival benefits.  The bottom line is that, given some specific level of complexity, it is more costly to process more information (costing more time and resources), and so if a “heuristic” method for perception can evolve instead, one that “hides” all the complex information underlying reality and instead provides us with a species-specific guide to adaptive behavior, that will always be the preferred choice.

To see this point more clearly, let’s consider an example.  Let’s imagine there’s an animal that regularly eats some kind of insect, such as a beetle, but it needs to eat a particular-sized beetle or else it has a relatively high probability of eating the wrong kind of beetle (and we can assume that the “wrong” kind of beetle would be deadly to eat).  Now let’s imagine two possible types of evolved perception: it could have really accurate perceptions about the various sizes of beetles that it encounters so it can distinguish many different sizes from one another (and then choose the proper size range to eat), or it could evolve less accurate perceptions such that all beetles that are either too small or too large appear indistinguishable from one another (maybe all the wrong-sized beetles, whether too large or too small, look like indistinguishable red-colored blobs) and perhaps all the beetles that are in the ideal size range for eating appear as green-colored blobs (that are, again, indistinguishable from one another).  So the only discrimination in this latter case of perception is between red- and green-colored blobs.

Both types of perception would solve the problem of which beetles to eat or not eat, but the latter type (even if much less accurate) would bestow a fitness advantage over the former, by allowing the animal to process much less information about the environment, since it no longer has to track relatively useless information (like specific beetle size).  In this case, with beetle size as the only variable under consideration for survival, evolution would select for the organism that knows less total information about beetle size, as long as it knows what is most important about distinguishing the edible beetles from the poisonous beetles.  Now we can imagine that in some cases the fitness function could align with the true structure of reality, but this is not what we expect to see generically in the world.  At best we may see some kind of overlap between the two, but since there doesn’t have to be any overlap at all, perceptions tuned to truth will tend to go extinct.
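To make the tradeoff concrete, here is a minimal Monte Carlo sketch of the beetle scenario (my own toy illustration, not Hoffman’s actual evolutionary game theory model); all of the sizes, payoffs, and perception costs are made-up numbers chosen only to show the logic: both strategies avoid the deadly beetles, but the coarser “interface” strategy pays a smaller information-processing cost and so wins out under a simple replicator-style update.

```python
import random

# A toy Monte Carlo sketch (not Hoffman's actual evolutionary game theory
# model): both strategies make the same eat/avoid decisions, but the coarse
# "interface" strategy pays a smaller per-encounter information-processing
# cost.  All numbers below are made up purely for illustration.

EDIBLE_RANGE = (4.0, 6.0)   # only beetles in this size range are safe to eat
PAYOFF_EDIBLE = 1.0         # fitness gained per edible beetle eaten
COST_TRUTH = 0.15           # cost of resolving the exact size of each beetle
COST_INTERFACE = 0.05       # cost of a coarse red/green (inedible/edible) signal

def forage(cost_per_encounter, encounters=1000, seed=0):
    """Net fitness after a fixed number of beetle encounters."""
    rng = random.Random(seed)
    fitness = 0.0
    for _ in range(encounters):
        size = rng.uniform(0.0, 10.0)
        if EDIBLE_RANGE[0] <= size <= EDIBLE_RANGE[1]:
            fitness += PAYOFF_EDIBLE      # both strategies eat the safe beetles
        fitness -= cost_per_encounter     # they differ only in perception cost
    return fitness

truth_fitness = forage(COST_TRUTH)
interface_fitness = forage(COST_INTERFACE)
print(f"truth-tuned strategy:     net fitness = {truth_fitness:.1f}")
print(f"interface-tuned strategy: net fitness = {interface_fitness:.1f}")

# Replicator-style update: each strategy's share of the population in the next
# generation is proportional to (share * fitness).  With a lower perception
# cost, the interface strategy drives the truth strategy toward extinction.
share_truth, share_interface = 0.5, 0.5
for _ in range(20):
    w_truth = share_truth * truth_fitness
    w_interface = share_interface * interface_fitness
    total = w_truth + w_interface
    share_truth, share_interface = w_truth / total, w_interface / total
print(f"after 20 generations: truth ≈ {share_truth:.3g}, interface ≈ {share_interface:.3g}")
```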

Perception is Analogous to a Desktop Computer Interface

Hoffman analogizes this concept of a “perception interface” with the desktop interface of a personal computer.  When we see icons of folders on the desktop and drag one of those icons to the trash bin, we shouldn’t take that interface literally, because there isn’t literally a folder being moved to a literal trash bin but rather it is simply an interface that hides most if not all of what is really going on in the background — all those various diodes, resistors and transistors that are manipulated in order to modify stored information that is represented in binary code.

The desktop interface ultimately provides us with an easy and intuitive way of accomplishing these various information processing tasks because trying to do so in the most “truthful” way — by manually manipulating every diode, resistor, and transistor to accomplish the same task — would be far more cumbersome and less effective than using the interface.  Therefore the interface, by hiding this truth from us, allows us to “navigate” through that computational world with more fitness.  In this case, having more fitness simply means being able to accomplish information processing goals more easily, with fewer resources, and so on.
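As a rough illustration of that point, here is a toy sketch of my own (not any real operating system’s API; the class and method names are made up) showing how an interface layer can expose one intuitive operation while hiding its internal bookkeeping entirely from the user:

```python
# A toy sketch of an "interface layer" (not any real operating system's API;
# the class and method names here are made up for illustration).  The caller
# sees one intuitive operation; the internal bookkeeping stays hidden.

class Desktop:
    """Exposes simple, user-facing actions while hiding its internal state."""

    def __init__(self):
        # The "underlying reality" of this toy system: raw bytes keyed by an
        # opaque internal id.  None of this is visible at the interface level.
        self._storage = {}   # internal id -> raw bytes
        self._names = {}     # icon name   -> internal id
        self._next_id = 0

    def create_folder(self, name, contents=b""):
        self._storage[self._next_id] = contents
        self._names[name] = self._next_id
        self._next_id += 1

    def drag_to_trash(self, name):
        # One simple gesture at the interface corresponds to several hidden
        # steps at the implementation level (and is still consequential).
        internal_id = self._names.pop(name)
        del self._storage[internal_id]

desktop = Desktop()
desktop.create_folder("manuscript", b"months of work")
desktop.drag_to_trash("manuscript")   # easy to do; serious underneath
```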

Hoffman goes on to say that even though we shouldn’t take the desktop interface literally, obviously we should still take it seriously, because moving that folder to the trash bin can have direct implications on our lives, by potentially destroying months’ worth of valuable work on a manuscript that is contained in that folder.  Likewise we should take our perceptions seriously, even if we don’t take them literally.  We know that stepping in front of a moving train will likely end our conscious experience, even if it is for causal reasons that we have no epistemic access to via our perception, given the species-specific “desktop interface” that evolution has endowed us with.

Relevance to the Mind-body Problem

The crucial point with this analogy is the fact that if our knowledge were confined to the desktop interface of the computer, we’d never be able to ascertain the underlying reality of the “computer”, because all that information about the underlying reality that we don’t need to know is hidden from us.  The same would apply to our perception, which would be epistemically isolated from the underlying objective reality that exists.  I want to add to this point that even though it appears that we have found the underlying guts of our consciousness, i.e., the findings in neuroscience, it would be mistaken to think that this approach will conclusively answer the mind-body problem, because the interface that we’ve used to discover our brains’ underlying neurobiology is still the “desktop” interface.

So while we may think we’ve found the underlying guts of “the computer”, this is far from certain, given the possibility of and support for this theory.  This may end up being the reason why many philosophers claim there is a “hard problem” of consciousness and one that can’t be solved.  It could be that we simply are stuck in the desktop interface and there’s no way to find out about the underlying reality that gives rise to that interface.  All we can do is maximize our knowledge of the interface itself and that would be our epistemic boundary.

Predictions of the Theory

Now if this were just a fancy idea put forward by Hoffman, that would be interesting in its own right, but the fact that it is supported by evolutionary game theory and genetic algorithm simulations shows that the theory is more than plausible.  Even better, the theory is actually a scientific theory (and not just a hypothesis), because it has made falsifiable predictions as well.  It predicts that “each species has its own interface (with some similarities between phylogenetically related species), almost surely no interface performs reconstructions (read the second link for more details on this), each interface is tailored to guide adaptive behavior in the relevant niche, much of the competition between and within species exploits strengths and limitations of interfaces, and such competition can lead to arms races between interfaces that critically influence their adaptive evolution.”  The theory predicts that interfaces are essential to understanding evolution and the competition between organisms, whereas the reconstruction theory makes such understanding impossible.  Thus, evidence of interfaces should be widespread throughout nature.

In his paper, he mentions the jewel beetle as a case in point.  This beetle has a perceptual category, desirable females, which works well in its niche, and the males use it to choose larger females because they are the best mates.  According to the reconstructionist thesis, the male’s perception of desirable females should incorporate a statistical estimate of the true sizes of the most fertile females, but it doesn’t do this.  Instead, it has a category based on “bigger is better”, and although this produces high-fitness behavior for the male beetle in its evolutionary niche, when the male comes into contact with a “stubbie” beer bottle, it gets stuck in a kind of behavioral loop, drawn to this supernormal stimulus because it is smooth, brown, and extremely large.  We can see that the “bigger is better” perceptual category relies on less information about the true nature of reality and instead takes an “informational shortcut”.  The evidence of supernormal stimuli, which have been found in many species, further supports the theory and is evidence against the reconstructionist claim that perceptual categories estimate the statistical structure of the world.

More on Conscious Realism (Consciousness is all there is?)

This last link provided here shows the mathematical formalism of Hoffman’s conscious realist theory as proved by Chetan Prakash.  It contains a thorough explanation of the conscious realist theory (which goes above and beyond the interface theory of perception) and it also provides answers to common objections put forward by other scientists and philosophers on this theory.

Transcendental Argument For God’s Existence: A Critique


Theist apologists and theologians have presented many arguments for the existence of God throughout history, including the Ontological Argument, Cosmological Argument, Fine-Tuning Argument, the Argument from Morality, and many others — all of which have been refuted with various counter-arguments.  I’ve written about a few of these arguments in the past (1, 2, 3), but one that I haven’t yet touched on is the Transcendental Argument for God (or simply TAG).  Not long ago I heard the Christian apologist Matt Slick conversing/debating with the renowned atheist Matt Dillahunty on this topic, and I then decided to look deeper into the argument as Slick presents it on his website.  I have found a number of problems with his argument, so I decided to lay them out in this post.

Slick’s basic argument goes as follows:

  1. The Laws of Logic exist.
    1. Law of Identity: Something (A) is what it is and is not what it is not (i.e. A is A and A is not not-A).
    2. Law of Non-contradiction: A cannot be both A and not-A, or in other words, something cannot be both true and false at the same time.
    3. Law of the Excluded Middle: Something must either be A or not-A without a middle ground, or in other words, something must be either true or false without a middle ground.
  2. The Laws of Logic are conceptual by nature — are not dependent on space, time, physical properties, or human nature.
  3. They are not the product of the physical universe (space, time, matter) because if the physical universe were to disappear, The Laws of Logic would still be true.
  4. The Laws of Logic are not the product of human minds because human minds are different — not absolute.
  5. But, since the Laws of Logic are always true everywhere and not dependent on human minds, it must be an absolute transcendent mind that is authoring them.  This mind is called God.
  6. Furthermore, if there are only two options to account for something, i.e., God and no God, and one of them is negated, then by default the other position is validated.
  7. Therefore, part of the argument is that the atheist position cannot account for the existence of The Laws of Logic from its worldview.
  8. Therefore God exists.

Concepts are Dependent on and the Product of Physical Brains

Let’s begin with number 2, 3, and 4 from above:

The Laws of Logic are conceptual by nature — are not dependent on space, time, physical properties, or human nature.  They are not the product of the physical universe (space, time, matter) because if the physical universe were to disappear, The Laws of Logic would still be true.  The Laws of Logic are not the product of human minds because human minds are different — not absolute.

Now I’d like to first mention that Matt Dillahunty actually rejected the first part of Slick’s premise here, as Dillahunty explained that while logic (the concept, our application of it, etc.) may in fact be conceptual in nature, the logical absolutes themselves (i.e. the laws of logic) which logic is based on are in fact neither conceptual nor physical.  My understanding of what Dillahunty was getting at here is that he was basically saying that just as the concept of an apple points to or refers to something real (i.e. a real apple) which is not equivalent to the concept of an apple, so also does the concept of the logical absolutes refer to something that is not the same as the concept itself.  However, what it points to, Dillahunty asserted, is something that isn’t physical either.  Therefore, the logical absolutes themselves are neither physical nor conceptual (as a result, Dillahunty later labeled “the essence” of the LOL as transcendent).  When Dillahunty was pressed by Slick to answer the question, “then what caused the LOL to exist?”, Dillahunty responded by saying that nothing caused them (or we have no reason to believe so) because they are transcendent and are thus not a product of anything physical nor conceptual.

If this is truly the case, then Dillahunty’s point here does undermine the validity of the logical structure of Slick’s argument, because Slick would then be beginning his argument by referencing the content and the truth of the logical absolutes themselves, and then later on switching to the concept of the LOL (i.e. their being conceptual in their nature, etc.).  For the purposes of this post, I’m going to simply accept Slick’s dichotomy that the logical absolutes (i.e. the laws of logic) are in fact either physical or conceptual by nature and then I will attempt to refute the argument anyway.  This way, if “conceptual or physical” is actually a true dichotomy (i.e. if there are no other options), despite the fact that Slick hasn’t proven this to be the case, his argument will be undermined anyway.  If Dillahunty is correct and “conceptual or physical” isn’t a true dichotomy, then even if my refutation here fails, Slick’s argument will still be logically invalid based on the points Dillahunty raised.

I will say however that I don’t think I agree with the point that Dillahunty made that the LOL are neither physical nor conceptual, and for a few reasons (not least of all because I am a physicalist).  My reasons for this will become more clear throughout the rest of this post, but in a nutshell, I hold that concepts are ultimately based on and thus are a subset of the physical, and the LOL would be no exception to this.  Beyond the issue of concepts, I believe that the LOL are physical in their nature for a number of other reasons as well which I’ll get to in a moment.

So why does Slick think that the LOL can’t be dependent on space?  Slick mentions in the expanded form of his argument that:

They do not stop being true dependent on location. If we travel a million light years in a direction, The Laws of Logic are still true.

Sure, the LOL don’t depend on a specific location in space, but that doesn’t mean that they aren’t dependent on space in general.  I would actually argue that concepts are abstractions that are dependent on the brains that create them, based on those brains having recognized properties of space, time, and matter/energy.  That is to say that any concept such as the number 3 or the concept of redness is in fact dependent on a brain having recognized, for example, a quantity of discrete objects (which when generalized leads to the concept of numbers) or having recognized the color red in various objects (which when generalized leads to the concept of red or redness).  Since a quantity of discrete objects or a color must be located in some kind of space (even if only as three points on a one-dimensional line, in the case of the number 3, or as a two-dimensional red-colored plane, in the case of redness), we can see that these concepts are ultimately dependent on space and matter/energy (of some kind).  Even if we say that concepts such as the color red or the number 3 do not literally exist in actual space and are not made of actual matter, they do have to exist in a mental space as mental objects; just as our conception of an apple floating in empty space doesn’t actually lie in space and isn’t made of matter, it nevertheless exists as a mental/perceptual representation of real space and real matter/energy that has been experienced through interaction with the physical universe.

Slick also mentions in the expanded form of his argument that the LOL can’t be dependent on time because:

They do not stop being true dependent on time. If we travel a billion years in the future or past, The Laws of Logic are still true.

Once again, sure, the LOL do not depend on a specific time, but rather they are dependent on time in general, because minds depend on time in order to have any experience of said concepts at all.  So not only are concepts only able to be formed by a brain that has created abstractions from physically interacting with space, matter, and energy within time (so far as we know), but the mind/concept-generating brain itself is also made of actual matter/energy, lying in real space, and operating/functioning within time.  So concepts are in fact not only dependent on space, time, and matter/energy (so far as we know), but are in fact also the product of space, time, and matter/energy, since it is only certain configurations of such that in fact produce a brain and the mind that results from said brain.  Thus, if the LOL are conceptual, then they are ultimately the product of and dependent on the physical.

Can Truth Exist Without Brains and a Universe?  Can Identities Exist Without a Universe?  I Don’t Think So…

Since Slick himself even claims that The Laws of Logic (LOL) are conceptual by nature, then that would mean that they are in fact also dependent on and the product of the physical universe, and more specifically are dependent on and the product of the human mind (or natural minds in general which are produced by a physical brain).  Slick goes on to say that the LOL can’t be dependent on the physical universe (which contains the brains needed to think or produce those concepts) because “…if the physical universe were to disappear, The Laws of Logic would still be true.”  It seems to me that without a physical universe, there wouldn’t be any “somethings” with any identities at all and so the Law of Identity which is the root of the other LOL wouldn’t apply to anything because there wouldn’t be anything and thus no existing identities.  Therefore, to say that the LOL are true sans a physical universe would be meaningless because identities themselves wouldn’t exist without a physical universe.  One might argue that abstract identities would still exist (like numbers or properties), but abstractions are products of a mind and thus need a brain to exist (so far as we know).  If one argued that supernatural identities would still exist without a physical universe, this would be nothing more than an ad hoc metaphysical assertion about the existence of the supernatural which carries a large burden of proof that can’t be (or at least hasn’t been) met.  Beyond that, if at least one of the supernatural identities was claimed to be God, this would also be begging the question.  This leads me to believe that the LOL are in fact a property of the physical universe (and appear to be a necessary one at that).

And if truth is itself just another concept, it too is dependent on minds and by extension the physical brains that produce those minds (as mentioned earlier).  In fact, the LOL seem to be required for any rational thought at all (hence why they are often referred to as the Laws of Thought), including the determination of any truth value at all.  So our ability to establish the truth value of the LOL (or the truth of anything for that matter) is also contingent on our presupposing the LOL in the first place.  So if there were no minds to presuppose the very LOL that are needed to establish its truth value, then could one say that they would be true anyway?  Wouldn’t this be analogous to saying that 1 + 1 = 2 would still be true even if numbers and addition (constructs of the mind) didn’t exist?  I’m just not sure that truth can exist in the absence of any minds and any physical universe.  I think that just as physical laws are descriptions of how the universe changes over time, these Laws of Thought are descriptions that underlie what our rational thought is based on, and thus how we arrive at the concept of truth at all.  If rational thought ceases to exist in the absence of a physical universe (since there are no longer any brains/minds), then the descriptions that underlie that rational thought (as well as their truth value) also cease to exist.

Can Two Different Things Have Something Fundamental in Common?

Slick then erroneously claims that the LOL can’t be the product of human minds because human minds are different and thus aren’t absolute, apparently not realizing that even though human minds are different from one another in many ways, they also have a lot fundamentally in common, such as how they process information and how they form concepts about the reality they interact with generally.  Even though our minds differ from one another in a number of ways, we nevertheless only have evidence to support the claim that human brains produce concepts and process information in the same general way at the most fundamental neurological level.  For example, the evidence suggests that the concept of the color red is based on the neurological processing of a certain range of wavelengths of electromagnetic radiation that have been absorbed by the eye’s retinal cells at some point in the past.  However, in Slick’s defense, I’ll admit that it could be the case that what I experience as the color red may be what you would call the color blue, and this would in fact suggest that concepts that we think we mutually understand are actually understood or experienced differently in a way that we can’t currently verify (since I can’t get in your mind and compare it to my own experience, and vice versa).

Nevertheless, just because our minds may experience color differently from one another or just because we may differ slightly in terms of what range of shades/tints of color we’d like to label as red, this does not mean that our brains/minds (or natural minds in general) are not responsible for producing the concept of red, nor does it mean that we don’t all produce that concept in the same general way.  The number 3 is perhaps a better example of a concept that is actually shared by humans in an absolute sense, because it is a concept that isn’t dependent on specific qualia (like the color red is).  The concept of the number 3 has a universal meaning in the human mind since it is derived from the generalization of a quantity of three discrete objects (which is independent of how any three specific objects are experienced in terms of their respective qualia).

Human Brains Have an Absolute Fundamental Neurology Which Encompasses the LOL

So I see no reason to believe that human minds differ at all in their conception of the LOL, especially if this is the foundation for rational thought (and thus any coherent concept formed by our brains).  In fact, I also believe that the evidence within the neurosciences suggests that the way the brain recognizes different patterns and thus forms different/unique concepts and such is dependent on the fact that the brain uses a hardware configuration schema that encompasses the logical absolutes.  In a previous post, my contention was that:

Additionally, if the brain’s wiring has evolved in order to see dimensions of difference in the world (unique sensory/perceptual patterns that is, such as quantity, colors, sounds, tastes, smells, etc.), then it would make sense that the brain can give any particular pattern an identity by having a unique schema of hardware or unique use of said hardware to perceive such a pattern and distinguish it from other patterns.  After the brain does this, the patterns are then arguably organized by the logical absolutes.  For example, if the hardware scheme or process used to detect a particular pattern “A” exists and all other patterns we perceive have or are given their own unique hardware-based identity (i.e. “not-A” a.k.a. B, C, D, etc.), then the brain would effectively be wired such that pattern “A” = pattern “A” (law of identity), any other pattern which we can call “not-A” does not equal pattern “A” (law of non-contradiction), and any pattern must either be “A” or some other pattern even if brand new, which we can also call “not-A” (law of the excluded middle).  So by the brain giving a pattern a physical identity (i.e. a specific type of hardware configuration in our brain that when activated, represents a detection of one specific pattern), our brains effectively produce the logical absolutes by nature of the brain’s innate wiring strategy which it uses to distinguish one pattern from another.  So although it may be true that there can’t be any patterns stored in the brain until after learning begins (through sensory experience), the fact that the DNA-mediated brain wiring strategy inherently involves eventually giving a particular learned pattern a unique neurological hardware identity to distinguish it from other stored patterns, suggests that the logical absolutes themselves are an innate and implicit property of how the brain stores recognized patterns.

So I believe that our brain produces and distinguishes these different “object” identities by having a neurological scheme that represents each perceived identity (each object) with a unique set of neurons that function in a unique way and thus have their own unique identity.  Therefore, it would seem that the absolute nature of the LOL can easily be explained by how the brain naturally encompasses them through its fundamental hardware schema.  In other words, my contention is that our brain uses this wiring schema because it is the only way that it can be wired to make any discriminations at all and validly distinguish one identity from another in perception and thought, and this ability to discriminate various aspects of reality would be naturally selected for based on the brain accurately modeling properties of the universe (in this case, different identities/objects/causal interactions existing) as it interacts with that environment via our sensory organs.  This would imply that the existence of discrete identities is a property of the physical universe, and that the LOL are simply a description of what identities are.  This would explain why we see the LOL as absolute and fundamental and presuppose them.  Our brains simply encompass them in the most fundamental aspect of our neurology, because they reflect a fundamental physical property of the universe that our brains model.
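To illustrate the wiring-schema idea in the simplest possible terms, here is a toy sketch of my own (not a model of actual neurobiology; the pattern names and ids are arbitrary placeholders) showing how assigning every learned pattern its own unique, stable identifier automatically realizes the three logical absolutes at the level of those identifiers:

```python
# A toy sketch of the wiring-schema idea (my own illustration, not a model of
# actual neurobiology): every learned pattern gets a unique, stable identifier,
# standing in for a unique neural hardware configuration.  The pattern names
# and ids below are arbitrary placeholders.

pattern_ids = {"red_blob": 0, "green_blob": 1, "lion": 2}

def same_pattern(a, b):
    """Two percepts count as the same pattern iff they map to the same id."""
    return pattern_ids[a] == pattern_ids[b]

for p in pattern_ids:
    # Law of identity: every pattern is identical to itself.
    assert same_pattern(p, p)
    for q in pattern_ids:
        # Law of non-contradiction: p and q cannot both be and not be the
        # same pattern (in the same respect, under the same id scheme).
        assert not (same_pattern(p, q) and not same_pattern(p, q))
        # Law of the excluded middle: p and q are either the same pattern or
        # they are not; the id scheme leaves no third option.
        assert same_pattern(p, q) or not same_pattern(p, q)

# A brand-new percept simply gets a fresh unique id ("not-A" relative to every
# previously stored pattern), so the same discriminations keep holding.
pattern_ids["novel_pattern"] = max(pattern_ids.values()) + 1
```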

I believe that this is one of the reasons that Dillahunty and others believe that the LOL are transcendent (neither physical nor conceptual), because natural brains/minds are neurologically incapable of imagining a world existing without them.  The problem then only occurs because Dillahunty is abstracting a hypothetical non-physical world or mode of existence, yet doesn’t realize that he is unable to remove every physical property from any abstracted world or imagined mode of existence.  In this case, the physical property that he is unable to remove from his hypothetical non-physical world is his own neurological foundation, the very foundation that underlies all concepts (including that of existential identities) and which underlies all rational thought.  I may be incorrect about Dillahunty’s position here, but this is what I’ve inferred anyway based on what I’ve heard him say while conversing with Slick about this topic.

Human (Natural) Minds Can’t Account for the LOL, But Disembodied Minds Can?

Slick even goes on to say in point 5 that:

But, since the Laws of Logic are always true everywhere and not dependent on human minds, it must be an absolute transcendent mind that is authoring them.  This mind is called God.

We can see here that he concedes that the LOL are in fact the product of a mind, only he rejects the possibility that it could be a human mind (and by implication any kind of natural mind).  Rather, he insists that it must be a transcendent mind of some kind, which he calls God.  The problem with this conclusion is that we have no evidence or argument that demonstrates that minds can exist without physical brains existing in physical space within time.  To assume so is simply to beg the question.  Thus, he is throwing in an ontological/metaphysical assumption of substance dualism as well as that of disembodied minds, not only claiming that there must exist some kind of supernatural substance, but that this mysterious “non-physical” substance also has the ability to constitute a mind, and somehow do so without any dependence on time (even though mental function and thinking are themselves temporal processes).  He assumes all of this, of course, without providing any explanation of how this mind could work even in principle without being made of any kind of stuff, without being located in any kind of space, and without existing in any kind of time.  As I’ve mentioned elsewhere concerning the ad hoc concept of disembodied minds:

…the only concept of a mind that makes any sense at all is that which involves the properties of causality, time, change, space, and material, because minds result from particular physical processes involving a very complex configuration of physical materials.  That is, minds appear to be necessarily complex in terms of their physical structure (i.e. brains), and so trying to conceive of a mind that doesn’t have any physical parts at all, let alone a complex arrangement of said parts, is simply absurd (let alone a mind that can function without time, change, space, etc.).  At best, we are left with an ad hoc, unintelligible combination of properties without any underlying machinery or mechanism.

In summary, I believe Slick has made several errors in his reasoning, with the most egregious being his unfounded assumption that natural minds aren’t capable of producing an absolute concept such as the LOL simply because natural minds have differences between one another (not realizing that all minds have fundamental commonalities), and also his argument’s reliance on the assumption that an ad hoc disembodied mind not only exists (whatever that could possibly mean) but that this mind can somehow account for the LOL in a way that natural minds cannot, which is nothing more than an argument from ignorance, a logical fallacy.  He also insists that the Laws of Logic would be true without any physical universe, not realizing that the truth value of the Laws of Logic can only be determined by presupposing the Laws of Logic in the first place, which is circular, thus showing that the truth value of the Laws of Logic can’t be used to prove that they are metaphysically transcendent in any way (even if they actually happen to be metaphysically transcendent).  Lastly, without a physical universe of any kind, I don’t see how identities themselves can exist, and identities seem to be required in order for the LOL to be meaningful at all.

Substance Dualism, Interactionism, & Occam’s razor


Recently I got into a discussion with Gordon Hawkes on A Philosopher’s Take about the arguments for and objections to substance dualism, that is, the position that there are actually two ontological substances that exist in the world: physical substances and mental (or non-physical) substances.  Here’s the link to part 1 and part 2 of that author’s post series (I recommend taking a look at this group blog and seeing what he and others have written over there on various interesting topics).  Many dualists would identify or liken this supposed second substance to a soul or the like, but I’m not going to delve into that particular detail within this post.  I’d prefer to focus on the basics of the arguments presented rather than those kinds of details.  Before I dive into the topic, I want to mention that Hawkes was by no means making an argument for substance dualism; rather, he was merely pointing out some flaws in common arguments against substance dualism.  Now that that’s been said, I’m going to get to the heart of the topic, but I will also be providing evidence and arguments against substance dualism.  The primary point raised in the first part of that series was the fact that just because neuroscience is continuously finding correlations between particular physical brain states and particular mental states, this doesn’t mean that these empirical findings show that dualism is necessarily false — since some forms of dualism seem to be entirely compatible with these empirical findings (e.g. interactionist dualism).  So the question ultimately boils down to whether or not mental processes are identical with, or metaphysically supervenient upon, the physical processes of the brain (if so, then substance dualism is necessarily false).

Hawkes talks about how the argument from neuroscience (as it is sometimes referred to) is fallacious because it is based on the mistaken belief that correlation (between mental states and brain states) is equivalent with identity or supervenience of mental and physical states.  Since this isn’t the case, then one can’t rationally use the neurological correlation to disprove (all forms of) substance dualism.  While I agree with this, that is, that the argument from neuroscience can’t be used to disprove (all forms of) substance dualism, it is nevertheless strong evidence that a physical foundation exists for the mind and it also provides evidence against all forms of substance dualism that posit that the mind can exist independently of the physical brain.  At the very least, it shows that the prior probability of minds existing without brains is highly unlikely.  This would seem to suggest that any supposed mental substance is necessarily dependent on a physical substance (so disembodied minds would be out of the question for the minimal substance dualist position).  Even more damning for substance dualists though, is the fact that since the argument from neuroscience suggests that minds can’t exist without physical brains, this would mean that prior to brains evolving in any living organisms within our universe, at some point in the past there weren’t any minds at all.  This in turn would suggest that the second substance posited by dualists isn’t at all conserved like the physical substances we know about are conserved (as per the Law of Conservation of Mass and Energy).  Rather, this second substance would have presumably had to have come into existence ex nihilo once some subset of the universe’s physical substances took on a particular configuration (i.e. living organisms that eventually evolved a brain complex enough to afford mental experiences/consciousness/properties).  Once all the brains in the universe disappear in the future (after the heat death of the universe guarantees such a fate), then this second substance will once again disappear from our universe.

The only way around this (as far as I can tell) is to posit that the supposed mental substance has always existed and/or will always continue to exist, but in an entirely undetectable way somehow detached from any physical substance (a position that seems hardly defensible given the correlation argument from neuroscience).  Since our prior probabilities for any hypothesis are based on all our background knowledge, and since the only substance we can be sure exists (a physical substance) has been shown to consistently abide by conservation laws (within the constraints of general relativity and quantum mechanics), it is more plausible that any other ontological substance would likewise be conserved rather than not conserved.  If we had evidence to the contrary, that would change the overall consequent probability, but without such evidence, we only have data points from one ontological substance, and it appears to follow conservation laws.  For this reason alone, it is less likely that a second substance exists at all if it isn’t itself conserved as the physical is.

Beyond that, the argument from neuroscience also provides at least some evidence against interactionism (the idea that the mind and brain can causally interact with each other in both causal directions), and interactionism is something that substance dualists would likely need in order to have any reasonable defense of their position at all.  To see why this is true, one need only recognize the fact that the correlates of consciousness found within neuroscience consist of instances of physical brain activity that are observed prior to the person’s conscious awareness of any experience, intentions, or willed actions produced by said brain activity.  For example, studies have shown that when a person makes a conscious decision to do something (say, to press one of two possible buttons placed in front of them), there are neurological patterns that can be detected prior to their awareness of having made a decision and so these patterns can be used to correctly predict which choice the person will make even before they do!  I would say that this is definitely evidence against interactionism, because we have yet to find any cases of mental experiences occurring prior to the brain activity that is correlated with it.  We’ve only found evidence of brain activity preceding mental experiences, never the other way around.  If the mind was made from a different substance, existing independently of the physical brain (even if correlated with it), and able to causally interact with the physical brain, then it seems reasonable to expect that we should be able to detect and confirm instances of mental processes/experiences occurring prior to correlated changes in physical brain states.  Since this hasn’t been found yet in the plethora of brain studies performed thus far, the prior probability of interactionism being true is exceedingly low.  Additionally, the conservation of mass and energy that we observe (as well as the laws of physics in general) in our universe also challenges the possibility of any means of causal energy transfer between a mind to a brain or vice versa.  For the only means of causal interaction we’ve observed thus far in our universe is by way of energy/momentum transfer from one physical particle/system to another.  If a mind is non-physical, then by what means can it interact at all with a brain or vice versa?

The second part of the post series from Hawkes talked about Occam’s razor and how it’s been applied in arguments against dualism.  Hawkes argues that even though one ontological substance is less complex and otherwise preferred over two substances (when all else is equal), Occam’s razor apparently isn’t applicable in this case because physicalism has been unable to adequately address what we call a mind, mental properties, etc.  My rebuttal to this point is that dualism doesn’t adequately address what we call a mind, mental properties, etc., either.  In fact it offers no more explanatory power than physicalism does, because nobody has proposed how it could do so.  That is, nobody has yet demonstrated (as far as I know) what any possible mechanisms would be for this new substance to instantiate a mind, how this non-physical substance could possibly interact with the physical brain, etc.  Rather, it seems to have been posited out of an argument from ignorance and incredulity, which is a logical fallacy.  Since physicalism hasn’t yet provided a satisfactory explanation for what some call the mind and mental properties, it is therefore supposed by dualists that a second ontological substance must exist that does explain or account for it adequately.

Unfortunately, because no mechanism has been proposed by which the second substance could adequately account for the phenomena, one could simply replace the term “second substance” with “magic” and be on the same epistemic footing.  It is therefore an argument from ignorance to presume that a second substance exists, for the sole reason that nobody has yet demonstrated how the first substance can fully explain what we call mind and mental phenomena.  Just as we’ve seen throughout history that unknown phenomena attributed to magic or the supernatural were later accounted for by physical explanations, the prior probability that this will also be the case for the phenomena of the mind and mental properties is extraordinarily high.  As a result, I think that Occam’s razor is applicable in this case, because I believe it is always applicable to an argument from ignorance that’s compared to an (even incomplete) argument from evidence.  Since physicalism accounts for many aspects of mental phenomena (such as the neuroscience correlations, etc.), dualism needs to be supported by at least some proposed mechanism (that is falsifiable) in order to nullify the application of Occam’s razor.

Those are my thoughts on the topic for now.  I did think that Hawkes made some valid points in his post series — such as the fact that correlation doesn’t equal identity or supervenience, and also the fact that Occam’s razor is only applicable under particular circumstances (such as when both hypotheses explain the phenomena equally well, or equally poorly, with one positing a more superfluous ontology).  However, I think that overall the arguments and evidence against substance dualism are strong enough to eliminate any reasonable justification for supposing that dualism is true (not that Hawkes was defending dualism, as he made clear at the beginning of his post), and I also believe that both physicalism and dualism explain the phenomena equally well such that Occam’s razor is applicable (since dualism doesn’t seem to add any explanatory power to that already provided by physicalism).  So even though correlation doesn’t equal identity or supervenience, the arguments and evidence from neuroscience and physics challenge the possibility of any interactionism between the physical and the supposed non-physical substance, and they challenge the existence of the second substance in general (due to its apparent lack of conservation over time, among other reasons).

The Origin and Evolution of Life: Part II


Even though life appears to be favorable in terms of the second law of thermodynamics (as explained in part one of this post), there have still been very important questions left unanswered regarding the origin of life including what mechanisms or platforms it could have used to get itself going initially.  This can be summarized by a “which came first, the chicken or the egg” dilemma, where biologists have wondered whether metabolism came first or if instead it was self-replicating molecules like RNA that came first.

On the one hand, some have argued that since metabolism is dependent on proteins, enzymes, and the cell membrane itself, it would require either RNA or DNA to code for those proteins needed for metabolism, thus implying that RNA or DNA would have to originate before metabolism could begin.  On the other hand, even the generation and replication of RNA or DNA requires a catalytic substrate of some kind, and this is usually accomplished with proteins along with metabolic driving forces to power those polymerization reactions, which would seem to imply that metabolism along with some enzymes would be needed to drive the polymerization of RNA or DNA.  So biologists were left with quite a conundrum.  This was partially resolved several decades ago, when it was realized that RNA not only has the ability to store genetic information just like DNA, but also has the ability to catalyze chemical reactions just as an enzymatic protein can.  Thus, it is feasible that RNA could act as both an information storage molecule and an enzyme.  While this helps to solve the problem if RNA began to self-replicate and evolve over time, the problem still remains of how the first molecules of RNA formed, because it seems that some kind of non-RNA metabolic catalyst would be needed to drive this initial polymerization.  This brings us back to needing some kind of catalytic metabolism to drive those initial reactions.

These RNA polymerization reactions may have spontaneously occurred on their own (or evolved from earlier self-replicating molecules that predated RNA), but current models of the early pre-biotic earth of around four billion years ago seem to suggest that there would have been too many destructive chemical reactions, which would have suppressed the accumulation of any RNA and likely would have suppressed other self-replicating molecules as well.  What seems to be needed, then, is some kind of catalyst that could create them quickly enough that they would be able to accumulate in spite of any destructive reactions present, and/or some kind of physical barrier (like a cell wall) that protects the RNA or other self-replicating polymers so that they don’t interact with those destructive processes.

One possible solution to this puzzle that has been developing over the last several years involves alkaline hydrothermal vents.  We actually didn’t know that these kinds of vents existed until the year 2000, when they were discovered on a National Science Foundation expedition in the mid-Atlantic.  Then, a few years later, they were studied more closely to see what kinds of chemistries were involved with these kinds of vents.  Unlike the more well-known “black smoker” vents (which were discovered in the 1970s), these alkaline hydrothermal vents have several properties that would have been hospitable to the emergence of life back during the Hadean eon (between 4.6 and 4 billion years ago).

The ocean water during the Hadean eon would have been much more acidic due to the higher concentrations of carbon dioxide (thus forming carbonic acid), and this acidic ocean water would have mixed with the hydrogen-rich alkaline water found within the vents, forming a natural proton gradient across the naturally formed pores of these rocks.  Also, electron transfer would likely have occurred when the hydrogen- and methane-rich vent fluid contacted the carbon-dioxide-rich ocean water, thus generating an electrical gradient.  This is already very intriguing because all living cells ultimately derive their metabolic driving forces from proton gradients, or more generally from the flow of some kind of positive charge carrier and/or electrons.  Since the rock found in these vents undergoes a process called serpentinization, which spreads the rock apart into various small channels and pockets, many different kinds of pores form in the rocks, and some of them would have been very thin-walled membranes separating the acidic ocean water from the alkaline, hydrogen-rich vent fluid.  This would have provided the kind of semi-permeable barrier that modern cells have, which we expect the earliest proto-cells to have had as well, and it would have provided the necessary source of energy to power various chemical reactions.
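To give a rough sense of the energetics involved, here is a back-of-envelope sketch (the pH values are my own illustrative assumptions, not measured Hadean figures, and order-one details are ignored) of how large a proton-motive force such a pH difference could support, using the standard relation of roughly 59 mV per pH unit at room temperature:

```python
# A back-of-envelope sketch of the proton gradient described above.  The pH
# values are illustrative assumptions (not measured Hadean figures); the point
# is only the order of magnitude.  The chemical part of a proton-motive force
# scales with the pH difference across a thin barrier:
#     delta_V = (2.303 * R * T / F) * delta_pH   (about 59 mV per pH unit at 25 C)

R = 8.314      # gas constant, J / (mol K)
F = 96485.0    # Faraday constant, C / mol
T = 298.15     # temperature, K (room temperature; real vent fluids differ)

pH_ocean = 6.0        # assumed mildly acidic, CO2-rich ocean water
pH_vent_fluid = 10.0  # assumed alkaline, hydrogen-rich vent fluid

mV_per_pH_unit = 2.303 * R * T / F * 1000.0   # convert volts to millivolts
delta_pH = pH_vent_fluid - pH_ocean
chemical_potential_mV = mV_per_pH_unit * delta_pH

print(f"{mV_per_pH_unit:.0f} mV per pH unit")            # roughly 59 mV
print(f"about {chemical_potential_mV:.0f} mV across the barrier from pH alone")
# With these assumed numbers the result is a few hundred millivolts, which is
# in the same ballpark as the proton-motive force modern cells maintain across
# their membranes, and that is the point of the comparison.
```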

Additionally, these vents would have also provided a source of minerals (namely green rust and molybdenum) which likely would have behaved as enzymes, catalyzing reactions as various chemicals came into contact with them.  The green rust could have allowed the use of the proton gradient to generate molecules that contained phosphate, which could have stored the energy produced from the gradient — similar to how all living systems that we know of store their energy in ATP (Adenosine Tri-Phosphate).  The molybdenum on the other hand would have assisted in electron transfer through those membranes.

So this theory provides a very plausible way for catalytic metabolism as well as proto-cellular membrane formation to have resulted from natural geological processes.  These proto-cells would then likely have begun concentrating simple organic molecules formed from the reaction of CO2 and H2 with all the enzyme-like minerals that were present.  These molecules could then react with one another to polymerize and form larger and more complex molecules, including eventually nucleotides and amino acids.  One promising clue that supports this theory is the fact that every living system on earth is known to share a common metabolic system, known as the citric acid cycle or Krebs cycle, which operates in the forward direction in aerobic organisms and in the reverse direction in anaerobic organisms.  Since this cycle consists of only 11 molecules, and since all biological components and molecules that we know of in any species have been made from some number or combination of these 11 fundamental building blocks, scientists are trying to test (among other things) whether or not they can mimic these alkaline hydrothermal vent conditions, along with the acidic ocean water that would have been present in the Hadean eon, and see if doing so will precipitate some or all of these molecules.  If it does, it will show that this theory is more than plausible as an account of the origin of life.

Once these basic organic molecules were generated, eventually proteins would have been able to form, some of which could have made their way to the membrane surface of the pores and acted as pumps to direct the natural proton gradient to do useful work.  Once those proteins evolved further, it would have been possible and advantageous for the membranes to become less permeable so that the gradient could be highly focused on the pump channels on the membrane of these proto-cells.  The membrane could have begun to change into one made from lipids produced by the metabolic reactions, and we already know that lipids readily form micelles, or small closed spherical structures, once they aggregate in aqueous conditions.  As this occurred, the proto-cells would no longer have been trapped in the porous rock, but would have eventually been able to slowly migrate away from the vents altogether, eventually forming the phospholipid bilayer cell membranes that we see in modern cells.  Once this got started, self-replicating molecules and the rest of the evolution of the cell would have undergone natural selection as per the Darwinian evolution that most of us are familiar with.

As per the earlier discussion regarding life serving as entropy engines and energy dissipation channels, this self-replication would have been favored thermodynamically as well, because replicating those entropy engines and energy dissipation channels only makes them more effective at dissipating energy.  Thus, we can tie this all together, where natural geological processes would have allowed the required metabolism to form, thus powering organic molecular synthesis and polymerization, with all of these processes serving to increase entropy and maximize energy dissipation.  All that was needed for this to initiate was a planet that had common minerals, water, and CO2, and the natural geological processes can do the rest of the work.  These kinds of planets actually seem to be fairly common in our galaxy, with estimates ranging in the billions, thus potentially harboring life (or where it is just a matter of time before it initiates and evolves, if it hasn’t already).  While there is still a lot of work to be done to confirm the validity of these models and to find ways of testing them rigorously, we are getting relatively close to solving the puzzle of how life originated, why it is the way it is, and how we can better search for it in other parts of the universe.

The Origin and Evolution of Life: Part I


In the past, various people have argued that life originating at all, let alone evolving higher complexity over time, was thermodynamically unfavorable due to the decrease in entropy involved in both circumstances, and thus was believed to violate the second law of thermodynamics.  For those unfamiliar with the second law, it basically asserts that the amount of entropy (often referred to as disorder) in a closed system tends to increase over time, or to put it another way, the amount of energy available to do useful work in a closed system tends to decrease over time.  So it has been argued that since the origin of life and the evolution of life with greater complexity would entail decreases in entropy, these events are therefore either at best unfavorable (and therefore the result of highly improbable chance), or worse yet, altogether impossible.

We’ve known for quite some time now that these thermodynamic arguments aren’t valid at all, because earth isn’t a thermodynamically closed or isolated system, due to the constant supply of energy we receive from the sun.  Because of that constant energy supply, and because the entropy increase from the sun far outweighs the decrease in entropy produced by all biological systems on earth, the net entropy of the entire system increases and thus fits right in line with the second law, as we would expect.
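Here is a rough back-of-envelope sketch of that claim (standard textbook values treated here as assumptions, with order-one radiation-entropy factors ignored): the planet absorbs sunlight that effectively arrives at a very high temperature and re-radiates the same energy as low-temperature infrared, so the entropy it exports dwarfs any local decrease due to biology.

```python
import math

# A rough back-of-envelope sketch using standard textbook values (treated here
# as assumptions; radiation-entropy factors of order one are ignored).  The
# earth absorbs sunlight that effectively arrives at a high temperature and
# re-emits the same energy as low-temperature infrared, exporting entropy.

SOLAR_CONSTANT = 1361.0   # W / m^2 at earth's orbit
EARTH_RADIUS = 6.371e6    # m
ALBEDO = 0.30             # fraction of sunlight reflected without being absorbed
T_SUN = 5778.0            # K, effective temperature of incoming sunlight
T_EARTH = 255.0           # K, effective temperature of outgoing infrared

# Power absorbed by the planet (sunlight intercepted by earth's cross-section).
absorbed_power = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2 * (1.0 - ALBEDO)

# Treating entropy flow as roughly Q/T: the same energy leaves at a much lower
# temperature than it arrived, so the planet produces entropy on net.
entropy_production_rate = absorbed_power * (1.0 / T_EARTH - 1.0 / T_SUN)

print(f"absorbed solar power         ~ {absorbed_power:.2e} W")
print(f"planetary entropy production ~ {entropy_production_rate:.2e} J/(K s)")
# Roughly 1e17 W absorbed and a few times 1e14 J/(K s) of entropy production:
# any local entropy decrease tied up in building and maintaining biological
# structure is negligible next to this flow, which is why life on an open,
# sun-driven planet does not violate the second law.
```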

However, even though the emergence and evolution of life on earth do not violate the second law and are thus physically possible, that still doesn’t show that they are probable processes.  What we need to know is how favorable the reactions are that are required for initiating and then sustaining these processes.  Several very important advancements have been made in abiogenesis over the last ten to fifteen years, with the collaboration of geologists and biochemists, and it appears that they are in fact not only possible but actually probable processes for a few reasons.

One reason is that the chemical reactions that living systems undergo produce a net increase in entropy as well, despite the drop in entropy associated with every cell and/or its arrangement with respect to other cells.  This is because all living systems give off heat with every favorable chemical reaction that is constantly driving the metabolism and perpetuation of those living systems.  This gain in entropy caused by heat loss more than compensates for the loss in entropy that results from the production and maintenance of all the biological components, whether lipids, sugars, nucleic acids, or amino acids and more complex proteins.  Beyond this, as more complexity arises during the evolution of cells and living systems, the entropy that those systems produce tends to increase even more, and so living systems with a higher level of complexity appear to produce a greater net entropy (on average) than less complex living systems.  Furthermore, once photosynthetic organisms evolved in particular, any entropy (heat) that they give off in the form of radiation ends up being of lower energy (infrared) than the photons given off by the sun to power those reactions in the first place.  Thus, we can see that living systems effectively dissipate the incoming energy from the sun, and energy dissipation is energetically favorable.

Living systems seem to serve as a controllable channel of energy flow for that energy dissipation, just like lightning, the eye of a hurricane, or a tornado, where high energy states in the form of charge gradients or pressure or temperature gradients end up falling to a lower energy state by dissipating that energy through specific focused channels that spontaneously form (e.g. individual concentrated lightning bolts, the eye of a hurricane, vortices, etc.).  These channels for energy flow are favorable and form because they allow the energy to be dissipated faster, since the channels are initiated by some direction of energy flow that is able to self-amplify into a path of decreasing resistance for that energy dissipation.  Life, and the metabolic processes involved with it, seems to direct energy flow in ways that are very similar to these other naturally arising processes in non-living physical systems.  Interestingly enough, a relevant hypothesis has been proposed for why consciousness and eventually self-awareness would have evolved (beyond the traditional reasons proposed by natural selection).  If an organism can evolve the ability to predict where energy is going to flow and where an energy dissipation channel will form (or can form more effective ones itself), then it can behave in ways that dissipate energy even more effectively and quickly (and also catalyze more entropy production), which shows why certain forms of biological complexity such as consciousness, memory, etc., would have also been favored from a thermodynamic perspective.

Thus, the origin of life as well as the evolution of biological complexity appears to be favored by the second law, pointing to a possible fundamental physical driving force behind the origin and evolution of life.  Basically, living systems appear to function as entropy engines and catalytic energy dissipation channels, and these engines and channels produce entropy at a greater rate than the planet otherwise would in the absence of that life, revealing at least one possible driving force behind life, namely, the second law of thermodynamics.  So ironically, not only does the origin and evolution of life not violate the second law of thermodynamics, it actually seems to be an inevitable (or at least favorable) result of the second law.  Some of these concepts are still being developed in various theories and require further testing to better validate them, but they are supported by well-established physics and by consistent and sound mathematical models.

Perhaps the most poetic concept I’ve recognized in these findings is that life is effectively speeding up the heat death of the universe.  That is, the second law of thermodynamics suggests that the universe will eventually lose all of its useful energy as the stars burn out and matter spreads out and decays into lower and lower energy photons, and thus the universe is destined to undergo a heat death.  Life, because it produces entropy faster than the universe otherwise would in its absence, is actually speeding up this inevitable death of the universe, which is quite fascinating when you think about it.  At the very least, it should give a new perspective to those who ask the question “what is the meaning or purpose of life?”  Even if we don’t think it is proper to think of life as having any kind of objective purpose in the universe, what life is in fact doing is accelerating the death of not only itself, but of the universe as a whole.  Personally, this further reinforces the idea that we should all ascribe our own meaning and purpose to our lives, because we should be enjoying the finite amount of time that we have, not only as individuals, but as a part of the entire collective life that exists in our universe.

For the newest and most promising discoveries that may explain how life got started in the first place, see part two here.

The Properties of God: Much Ado About Nothing

leave a comment »

Having previously written about various Arguments for God’s Existence, including some of the inherent flaws and problems with those arguments, and having analyzed some of the purported attributes of God as most often defined by theists, I decided to reiterate some of the points I’ve previously made and to expand further on the topic. Specifically, I’d like to further analyze the most common definitions and properties of God as claimed by theists.  God is often defined by theists as an omnipotent, omniscient, omnipresent, and omnibenevolent being that is also uncaused, beginningless, timeless, changeless, spaceless, and immaterial, among other attributes.  God is also defined by many as some form of disembodied mind possessing free will.  Since this list of terms is perhaps the most common I’ve seen over the years within theological circles, I’ll focus my analysis on these terms in this post.

Omniscience, Omnipotence, Changelessness, and Free Will

The property of omniscience is perhaps the single most significant property within this list because if it is taken to be true, it inevitably leads to the logical impossibility of some of the other attributes in this list.  For instance, if God’s knowledge includes complete knowledge of the future, then God is unable to change that future.  That is, whatever future that God would be aware of must happen exactly as it does, and God would not have the ability to change such a fate (otherwise this God would have failed to know the future without error).  This leads to the logical impossibility of God possessing both omniscience and omnipotence, as God loses the ability to enact any kind of change whatsoever that isn’t already pre-ordained or known by this God in advance.  God would not only know the future of all events occurring within the universe (presumably mediated by the very laws of physics that this God would have created) thus eliminating any possible free will for all of humanity, but this God would also know the future of all his other actions, thoughts, intentions, etc., and thus God wouldn’t be able to have free will either.  One can try to preserve the theological property of omnipotence or free will by denying that of omniscience (by limiting God’s knowledge of the future in some way).  However, even if this God didn’t have the ability to know the future with 100% certainty as implied with omniscience, the absence of omniscience wouldn’t negate the possibility that this God may still have no choice or ability to act any other way (even if this God doesn’t know ahead of time what those actions will be).
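One way to make the core of this argument explicit is with a standard modal-logic sketch of the kind used in discussions of theological fatalism (the notation below is my own illustrative shorthand, not drawn from any particular source):

```latex
% K(p): God knows p beforehand;  A: God performs act A at time t
\begin{align*}
1.&\;\; K(A)                                   && \text{(exhaustive foreknowledge)}\\
2.&\;\; \Box\,\bigl(K(A) \rightarrow A\bigr)   && \text{(knowledge cannot be mistaken)}\\
3.&\;\; \Box\, K(A)                            && \text{(the past fact of knowing is now fixed)}\\
4.&\;\; \therefore\; \Box\, A                  && \text{(from 2 and 3: God cannot do otherwise at } t\text{)}
\end{align*}
```

Step 3 is the contested premise (the so-called necessity of the past); the point of the sketch is only that once it is granted, the ability to do otherwise, and with it this kind of omnipotence and free will, drops out.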

Even if we accepted that God doesn’t have omniscience, and even if we ignored the possibility that God may still lack free will or omnipotence in the absence of that omniscient foreknowledge, one must still explain how a definitively changeless being could ever instantiate any kind of change at all, let alone create the entire universe, space, and time (the last of which is dependent on change).  Is it even logically possible for a changeless being to instantiate change?  That is, could a being possessing a de facto property such as changelessness simultaneously possess a modal property or capability of change?  Even if it were logically possible, there doesn’t appear to be any way for that modal property to ever be self-instantiated by a de facto changeless being.

An outside causal force may be able to instantiate the change in the previously changeless being, but I see no way that this could be accomplished by the changeless being itself.  One may try to resolve this dilemma by positing that one aspect or component of the changeless state of God was the constant or changeless intention to eventually cause a change at some future time x (e.g. to eventually create the universe), but this attempted resolution carries with it the problem of contradicting the supposed theological property of timelessness, since there can’t be some future moment for any change to occur in any kind of timeless scenario.  This would suggest that some kind of temporal delay is occurring until the change is eventually realized, which is logically incoherent in a timeless scenario.  Thus, I see no reason or logical argument to support the claim that a de facto property of changelessness could ever co-exist with a modal property or capability of self-causing any kind of change, and thus a timeless or changeless being would be causally effete thereby negating the property of omnipotence.

Omnibenevolence

One major problem that I see regarding the property of omnibenevolence is that the term itself isn’t well-defined.  Sure, one can easily grasp the basic concept of being all-loving or all-good, but exactly what standard is one using to define goodness, or love, since these are not objectively defined concepts?  Another way of describing this problem, within the context of Divine Command Theory, is known as Euthyphro’s Dilemma (from one of Plato’s dialogues), where one must ask: Is something good because God says it is good, or does God say something is good because of some other quality it has?  If the standard of goodness comes from God (i.e. “it’s good because God says so”), then it is entirely arbitrary, and this would also mean that the definition of omnibenevolence is circular and therefore uninformative.  If the standard of goodness comes from some other cause or being, then goodness is dependent on something other than God, and this would also undermine the idea that God is uncaused or beginningless, since the property of God’s benevolence (even if omnibenevolent) would have been dependent on something other than God.  Beyond these problems, it would also undermine the idea of God being omnipotent, since God wouldn’t have the power to self-instantiate this standard of goodness.

Another problem with positing that God is omnibenevolent is the oft-mentioned Problem of Evil, which ultimately refers to the problem of how to reconcile the supposed existence and omnibenevolence of God with all of the suffering that exists in the world.  If God were truly omnibenevolent, then how can one explain the existence of any “evil” or suffering at all?  If God doesn’t have the ability to create a universe without any suffering, then this is another argument against God’s omnipotence.  If God does have the ability to do this but doesn’t, then this is an argument against God’s omnibenevolence, assuming that the elimination of all suffering is in accord with the standard of goodness, as one would expect.

Some philosophers have attempted to form various theodicies or defenses to reconcile the Problem of Evil with the idea of an omnipotent and/or omnibenevolent God, but they are ultimately unsuccessful.  For example, some attempts to resolve this problem assert that good simply can’t logically exist without evil, implying that they are relative to and thus dependent on one another, which basically reasserts the old adage “you can’t have the sour without the sweet”.  The problem with this argument is that, if taken further, it would also imply that an omnibenevolent being (as God is often defined) is logically dependent on the existence of an equal but opposite omnimalevolent being, or at the very least, on the property of omnimalevolence.  This would mean that if God is indeed omnibenevolent, then this property of God is logically dependent on the existence of omnimalevolence, and this is another argument showing that God is not uncaused or beginningless, because this particular property of God wouldn’t even be possible without the existence of something that is definitively not a part of God (by definition).

Beyond all of the problems mentioned thus far, there seem to be several possible solutions that God (if omnibenevolent and all-powerful) could employ to eliminate suffering, and if these possibilities exist, the fact that none of them have been implemented argues against God being omnibenevolent.  For example, why couldn’t God simply feed our brains (even if each were just a brain in a vat) with sensory input of nothing but pleasurable experiences?  Even if pleasure were dependent on some kind of contrast with less pleasurable experiences in the past (or if we would unavoidably become desensitized to a particular level of pleasure), God could simply amplify the magnitude of pleasurable sensory inputs with each subsequent moment of time indefinitely, thus producing an experience of nothing but constant and equally potent pleasure.

Moreover, if the God that most theists propose truly exists, and some kind of heaven or eternal paradise is within God’s capabilities (filled with a bunch of disembodied minds or souls), then there’s no rational reason why God couldn’t simply create all of us in heaven from the very beginning of our existence.  This is basically already the case with many miscarried or aborted fetuses (if theists assume that fetuses have souls and go to heaven immediately after their death), since many of these fetuses aren’t even alive long enough to have developed a brain with any level of consciousness or ability to experience any suffering at all.  Thus, they would represent a perfect example of individuals that only experience an eternity of pleasure completely devoid of any kind of suffering.  One would think that if this is already a reality for some individuals, God should have the power to make it the case for all people, so nobody has to suffer at all.  This is, of course, assuming God couldn’t simply create all humans in heaven from the very beginning and skip the creation of the physical universe altogether; if God lacks this ability, it is yet another argument against this God being omnipotent.  In addition, if any conscious being created by God is ever destined to any kind of eternal torture (i.e. some version of “hell”), with no chance of forgiveness after death, this would be perhaps the strongest argument against this God being omnibenevolent.  So as we can see, if eternal paradise and/or eternal damnation are actually real places created and mediated by God, then their very existence argues against God’s omnibenevolence and/or God’s omnipotence, since we’re not all created in heaven from the very beginning of our existence, and/or since there are people destined to suffer for eternity.

Another attempt to resolve this Problem of Evil is the argument that humans wouldn’t be able to have free will without the existence of “evil” or suffering.  However, this makes no sense for a number of reasons.  For one, as mentioned previously, classical free will (i.e. the ability to have chosen to behave differently given the same initial conditions, setting aside randomness) is already impossible based on the laws of physics and the causal closure of the physical world, and this is the case whether our physical laws are fundamentally deterministic or random.  So this attempted resolution is a desperate objection at best, because it also requires us to assume that we’re constantly violating the laws of physics and causal closure in order to be causa sui, or self-caused intentional agents.  We’d have to grant one absurdity in order to explain away another, which doesn’t solve the dilemma at all, but rather just replaces one dilemma with another.

Finally, if “heaven” or some form of eternal paradise is still a possible product of God’s power, then the free will argument is irrelevant in any case.  After all, presumably we wouldn’t have free will in heaven either, for if we did have free will to rebel or cause “evil” or suffering in heaven, this would contradict the very idea of what heaven is supposed to be (since it is defined as an eternal and perfect paradise without any “evil” or suffering at all).  If one argues that it is still possible to have free will in a heaven that is guaranteed to be void of evil or suffering, then this simply shows that suffering isn’t necessary in order to have free will, and thus the free will argument to the Problem of Evil still fails.  If we didn’t have free will in heaven (which would seem to be logically necessary in order for heaven to exist as defined), then we can see that infinite or maximal “goodness” or eternal paradise is indeed possible even in the absence of any free will, which would thus negate the free will argument to the Problem of Evil (even if we granted the absurdity that classical free will was possible).  So no matter how you look at it, the property of omnibenevolence appears to be ill-defined or circular and is thus meaningless and/or it is incompatible with some of the other purported theological properties used to define God (i.e. uncaused, beginningless, omnipotent, etc.).

Omnipresence

If God were omnipresent, one would think that we would be able to universally and undeniably detect the presence of God, and yet the exact opposite is the case.  In fact, God appears to be completely invisible and entirely undetectable.  In cases where theists claim to have actually experienced or detected the presence of God in some way, it is always in a way that can’t be validated or confirmed by any physical evidence whatsoever.  Science has demonstrated time and time again that when people experience phenomena that do not correlate with reality, i.e., phenomena that do not occur outside of their minds and thus can’t be independently verified with physical evidence, those experiences are the result of perceptual illusions and other strictly mental phenomena (whether full-blown hallucinations, delusions, mis-attributed emotional experiences, etc.).  In general though, the basic trend exemplified by theists is that whenever they have an experience that is seemingly unexplainable, they attribute it to an act of God.

Unfortunately, this is an extremely weak position to take (and an increasingly weak one, as history has amply shown), simply because this “God of the gaps” mentality has repeatedly been shown to be fallacious and unreliable as science has continued to explain more and more previously unexplainable phenomena that were once attributed to one god or another.  So in Bayesian terms, the prior probability that some unexplainable phenomenon is the result of some kind of God is vanishingly small, and that probability has only decreased over time and will only continue to decrease as scientific progress continues to replace supernatural explanations and attributions with natural ones.
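To make the Bayesian point a bit more concrete, here is one toy way to model the track record (purely an illustrative model of my own choosing, using Laplace’s rule of succession, not something the underlying argument depends on): if we treat each historical “gap” as a trial, and every one of the n gaps examined so far has turned out to have a natural explanation, then a simple estimate of the probability that the next unexplained phenomenon is supernatural is

```latex
\[
P(\text{supernatural} \mid \text{$n$ gaps, all resolved naturally})
\;\approx\; \frac{0 + 1}{n + 2} \;=\; \frac{1}{n+2}
\]
```

which shrinks toward zero as the number of naturally resolved gaps grows, matching the qualitative claim above.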

So unless we are talking about some kind of Pantheism (where God is basically defined as being equivalent to the universe itself), we have theists claiming that God is everywhere when this God in fact appears to be nowhere at all.  The simple fact that nobody has been able to demonstrate or verify the existence of God with any physical evidence whatsoever is a strong argument against the omnipresence of God (if not an argument against the very existence of God).  Ultimately, the theological property of omnipresence is a meaningless term if this type of presence is completely undetectable and unfalsifiable.  That would make sense for a being that doesn’t possess any properties of space, time, or material, but unfortunately it also means that the term doesn’t adhere to any reasonable convention of what it means to be present, and that the property of omnipresence is incompatible with the properties of being spaceless, timeless, and immaterial.  If the type of omnipresence is that which is claimed to be experienced by theists from time to time, in experiences that have been shown to be strictly mental with no correlation to the external world, then this is actually nothing more than a limited (and strictly mental) type of presence, and one likely resulting from mis-attributed emotions combined with various inherent human cognitive biases.

Abstract Objects, Disembodied Minds & God

Perhaps the most interesting thing I’ve discovered regarding these theological properties pertains to the subset of properties that specifically describe God to be uncaused, beginningless, timeless, changeless, spaceless, and immaterial (which I’ll now abbreviate as simply UBTCSI).  These terms have also been formulated by theists in various arguments for the existence of God (such as the Kalam Cosmological Argument), with theists trying to argue that the origin of the universe must have been brought about by a cause having this particular set of properties.  What I find most interesting is that contemporary philosophers of ontology have ascribed this set of terms to certain abstract objects such as numbers and properties.  It is also notable that these properties seem to result by way of negation, that is, by removing all (or nearly all) aspects of our perceived reality.

The fact that these terms are used to describe the properties of abstract objects in general, which are almost universally agreed to be causally effete, actually supports the idea that God is nothing more than an abstract object.  Even if abstract objects have some kind of ontological existence independent of the brains that most likely produce them, they have still been shown to be causally effete.  And if abstract objects do not have any kind of ontological existence independent of the brains that most likely produce them, then they are actually the product of brains, which possess the converse of the UBTCSI properties: the properties of being caused (and thus having a beginning), as well as the properties of time, change, space, and material.

If abstract objects are nothing more than constructs of the brain, then we may expect that the minds that produce these abstract objects would have similar properties ascribed to them as well.  Sure enough, many philosophers have indeed also used the aforementioned UBTCSI properties to describe a mind.  So, if it is true that abstract objects as well as the minds they appear to be dependent on are ultimately products of the physical brain (with the latter being well-nigh proven at this point), then ultimately they are both produced from that which possesses the naturalistic properties of causality, beginning, time, change, space, material, etc., thus arguably challenging the claim that either abstracta or minds can be defined properly with the UBTCSI properties.

Many theists have taken advantage of this “ontology of mind” and posited that God is some kind of disembodied mind, thus presumably adhering to these same UBTCSI properties, yet with the addition of several more properties that were mentioned earlier (i.e. omnipotence, omniscience, etc.).  However, one major problem with this tactic is that the term “disembodied mind” is simply an ad hoc conceptualization, and one that doesn’t make much if any sense at all when thought about more critically.  After all, if the only minds that we’re aware of are those demonstrably produced by the underlying machinery of physical brains, then what exactly would a disembodied mind entail?  What would it be composed of, if not physical materials (and thus materials that lie in space)?  How would it function at all, if the only minds we know of involve an underlying machinery of constantly changing neuronal configurations which subsequently cause the mental experience that we call a mind?  How could this mind think at all, when thinking is itself a temporal process, known to speed up or slow down depending on various physical variables (e.g. neurotransmitter concentrations, temperature, Relativistic effects, etc.)?

These questions illustrate the fact that the only concept of a mind that makes any sense at all is that which involves the properties of causality, time, change, space, and material, because minds result from particular physical processes involving a very complex configuration of physical materials.  That is, minds appear to be necessarily complex in terms of their physical structure (i.e. brains), and so trying to conceive of a mind that doesn’t have any physical parts at all, let alone a complex arrangement of said parts, is simply absurd (let alone a mind that can function without time, change, space, etc.).  At best, we are left with an ad hoc, unintelligible combination of properties without any underlying machinery or mechanism.

So the fact that there exist strong arguments and evidence in support of abstract objects being nothing more than products of the mind, and the fact that minds in general are demonstrably the product of physical brains and their underlying complex neuronal configurations, illustrate that the only things in our universe to which philosophers have ascribed these UBTCSI properties (minds and abstract objects) are in fact more accurately described by the converse of those very properties.  It would then logically follow that God, claimed to possess the very same properties, is most likely nothing more than a causally effete abstract object: a mere mentally simulated model produced by our physical brains.  This entails that the remaining properties of omnipotence, omniscience, omnipresence, and omnibenevolence, which are themselves abstract objects, are ultimately ascribed to yet another causally effete abstract object.

Much Ado About Nothing

As we can see, the properties commonly ascribed to God suggest that this God as described is:

1) Ill-defined since some of the properties are ultimately meaningless or circular, and

2) Logically impossible since some of the properties contradict one another, and

3) Likely to be a causally effete construct of the mind.

So overall, the theists’ strenuous endeavors in arguing over what the properties of their purported God must be have ultimately been much ado about nothing at all.

The Kalam Cosmological Argument for God’s Existence

with 20 comments

Previously, I’ve written briefly about some of the cosmological arguments for God.  I’d like to expand on this topic, and I’ll begin doing so in this post by analyzing the Kalam Cosmological Argument (KCA), since it is arguably the best-known version of the argument.  It can be described with the following syllogism:

(1) Everything that begins to exist has a cause;

(2) The universe began to exist;

Therefore,

(3) The universe has a cause.

The conclusion of this argument is often expanded by theists to suggest that the cause must be supernaturally transcendent, immaterial, timeless, spaceless, and perhaps most importantly, this cause must itself be uncaused, in order to avoid the causal infinite regress implied by the KCA’s first premise.

Unfortunately this argument fails for a number of reasons.  The first things that need to be clarified are the definitions of the terms used in these premises.  What is meant by “everything”, or “begins to exist”?  “Everything” in this context implies that there is more than one of these things, which means that we are referring to a set of things, indeed the set of all things in this case.  The set of all things implied here apparently refers to all matter and energy in the universe, specifically the configuration of any subset of all matter and/or energy.  Then we have the second element in the first premise, “begins to exist”, which would thus refer to when the configuration of some set of matter and/or energy changes to a new configuration.  So we could rewrite the first premise as “any configuration of matter and/or energy that exists at time T, and which didn’t exist at the time immediately prior to time T (which we could call T’), was a result of some cause”.  If we want to specify how “immediately prior” T’ is to T, we could use the smallest unit of time that carries any meaning per the laws of physics, namely the Planck time (roughly 10^-43 seconds), which is the time it takes the fastest entity in the universe (light) to traverse the shortest physically meaningful distance (the Planck length).
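For reference, here’s a quick sanity check of that figure; the Planck time is conventionally defined from the reduced Planck constant, Newton’s gravitational constant, and the speed of light (the script below is just an illustration using standard SI values):

```python
import math

# Standard physical constants in SI units
hbar = 1.054571817e-34   # reduced Planck constant (J*s)
G = 6.67430e-11          # Newton's gravitational constant (m^3 kg^-1 s^-2)
c = 2.99792458e8         # speed of light (m/s)

# Planck time and Planck length
t_planck = math.sqrt(hbar * G / c**5)   # ~5.39e-44 s
l_planck = c * t_planck                 # ~1.62e-35 m (distance light travels in one Planck time)

print(f"Planck time:   {t_planck:.3e} s")
print(f"Planck length: {l_planck:.3e} m")
```

This gives roughly 5.4 x 10^-44 seconds, which matches the order-of-magnitude figure (~10^-43 s) quoted above.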

Does Everything Have a Cause?

Now that we’ve more clearly defined what is meant by the first premise, we can address whether or not that premise is sound.  Based on our current understanding of the nature of causality, it seems perfectly reasonable that there is indeed some cause that drives the changes in the configurations of sets of matter and energy that we observe in the universe, especially in the everyday world that we observe.  On the most fundamental physical level, we would typically say that the cause of these configuration changes is described by the laws of physics.  Particles and waves all behave as they do, very predictably changing from one form into another based on these physical laws or consistent patterns that we’ve discovered.  However, depending on the interpretation of quantum mechanics used, there may be acausal quantum processes happening, for example, as virtual particle/anti-particle pairs pop into existence without any apparent deterministic cause.  That is, unless there are non-local hidden variables that we are unaware of which guide or cause these events, there don’t appear to be any deterministic or causal driving forces behind certain quantum phenomena.  At best, the science is inconclusive as to whether all phenomena have causes, and thus one can’t claim certainty for the first premise of the KCA.  Unless we find a way to determine that quantum mechanics is entirely deterministic, we simply don’t know that matter and energy are fundamentally causally connected in the way that objects we observe at much larger scales are.

The bottom line here is that quantum indeterminism carries with it the possibility of acausality until proven otherwise, thus undermining premise one of the KCA with the empirical evidence found within the field of quantum physics.  As such, it is entirely plausible that if the apparent quantum acausal processes are fundamental to our physical world, the universe itself may have arisen from said acausal processes, thus undermining premise two as well as the conclusion of the KCA.  We can’t conclude that this is the case, but it is entirely possible and is in fact plausible given the peculiar quantum phenomena we’ve observed thus far.

As for the second premise, if we apply our clarified definition of “began to exist” introduced in the first premise to the second, then “the universe began to exist” would mean more specifically that “there was once a time (T’) when the universe didn’t exist, and then at time T, the universe did exist.”  This is the most obviously problematic premise, at least according to the evidence we’ve found within cosmology.  The Big Bang theory, the prevailing cosmological model for the earliest known moments of the universe, implies that spacetime itself had its earliest moment roughly 13.8 billion years ago and has continued to expand and transform ever since, reaching the state that we see it in today.  Many theists try to use this as evidence for the universe being created by God.  However, since time itself was non-existent prior to the Big Bang, it is not sensible to speak of any creation event happening prior to this moment, since there was no time for such an event to happen within.  This presents a big problem for the second premise of the KCA, because in order for the universe to “begin to exist”, it is implied that there was a prior time in which it didn’t exist, and this goes against the Big Bang model, in which time never existed prior to that point.

Is Simultaneous Causation Tenable?

One way that theologians and some philosophers have attempted to circumvent this problem is to invoke the concept of simultaneous causation, that is, that (at least some) causes and effects can happen simultaneously.  Thus, if the cause of the universe happened at the same time as the effect (the Big Bang), then the cause of the universe (possibly “creation”) did happen in time, and thus the problem is said to be circumvented.

The concept of simultaneous causation has been proposed for some time by philosophers, most notably Immanuel Kant and others since.  However, there are a few problems with simultaneous causation that I’ll point out briefly.  For one, there don’t appear to be any actual examples of simultaneous causation occurring in our universe.  Kant did propose what he believed to be a couple of examples of simultaneous causation to support the idea.  One example he gave was a scenario where the effect of a heated room supposedly occurs simultaneously with the fire in a fireplace that caused it.  Unfortunately, this example fails, because it actually takes time for thermal energy to make its way from the fire in the fireplace to any air molecules in the room (even those that are closest to the fire).  As combustion occurs and oxygen combines with the hydrocarbon fuels in the wood to produce carbon dioxide and a lot of heat, that heat takes time to propagate.  As the carbon dioxide is being formed, and the molecule is assuming an energetically favorable state, there is still a lag between this event and any heat given off to nearby molecules in the room.  In fact, no physical process can propagate faster than the speed of light according to the principles of Relativity, and this refutes any other example analogous to this one.  The fastest way a fire can propagate heat is through radiation (as opposed to conduction or convection), and the propagation of radiation is limited by the speed of light.  Even pulling a solid object causes it to stretch (at least temporarily), so the end of the object farthest away from where it is being pulled will actually remain at rest for a short time while the other end of the object is first pulled in a particular direction.  It isn’t until after a short time lag that the rest of the object “catches up” with the end being pulled, so even with mechanical processes involving solid materials, we never see instantaneous causal interactions.
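To put a number on the minimum possible lag in Kant’s fireplace example (assuming, purely for illustration, that the nearest air molecules sit about half a metre from the flames), even radiation, the fastest channel, takes a finite time to arrive:

```latex
\[
t_{\min} \;=\; \frac{d}{c}
\;\approx\; \frac{0.5\ \mathrm{m}}{3.0 \times 10^{8}\ \mathrm{m/s}}
\;\approx\; 1.7\ \mathrm{ns}
\]
```

A nanosecond-scale delay is tiny, but it is not zero, which is all the argument against strict simultaneity requires.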

Another example Kant gave was one in which a lead ball lies on a cushion and simultaneously causes the effect of an indentation or “hollow” in the cushion.  Again, in order for the ball to cause a dent in the cushion in the first place, it had to be moved into the cushion, which took some finite amount of time.  As with the previous example, Relativity prevents any simultaneous causation of this sort.  We can see this by noting that at the molecular level, as the electron orbitals of the lead ball approach those of the cushion, the change in the strength of the electric field between the electron orbitals of the two objects can’t travel faster than the speed of light, and thus as the ball moves toward the cushion and eventually “touches” it, the increased strength of the repulsion takes some amount of time to be realized.

One last example I’ve seen given by defenders of simultaneous causation is that of a man sitting down, thus forming a lap.  That is, as the man sits down and his knees bend, a lap is created in the process, and we’re told that the man sitting down is the cause and the formation of the lap is the simultaneous effect.  Unfortunately, this example also fails, because the man sitting down and the lap being formed are really nothing more than two different descriptions of the same event.  One could say that the man formed a lap, or one could say that the man sat down.  Clearly the man’s intention was most likely to sit down rather than to form a lap, but nevertheless forming a lap was incidental to the process of sitting down.  Both are describing different aspects of the same event, and thus there aren’t two distinct causal relata in this example.  In the previous examples (the fire and heated room, or the ball denting a cushion), if there are states described that occur simultaneously even after taking Relativity into account, they can likewise be shown to be merely two different aspects or descriptions of the same event.  Even if we could grant that simultaneous causation were possible (and so far we haven’t seen any defensible real-world examples), how could we assign causal priority to determine which was the cause and which was the effect?  In terms of the KCA, one could ask: if the cause (C) of the universe occurred at the same time as the effect (E), the existence of the universe, how could one determine that C caused E rather than the other way around?  One has to employ circular argumentation in order to do so, by invoking other metaphysical assumptions in the very terms being defined, which simply begs the question.

Set Theory & Causal Relations

Another problem with the second premise of the KCA is that even if we ignore the cosmological models that refute it, and even ignore the problematic concept of simultaneous causation altogether, there is an implicit assumption that the causal properties of the “things” in the universe also apply to the universe as a whole.  This is fallacious because one can’t assume that the properties of members of a set or system necessarily apply to the system or entire set as a whole.  Much work has been done within set theory to show that this is the case, and thus while some properties of the members or subsets of a system can apply to the whole system, not all properties necessarily do (in fact some properties applying to both members of a set and to the set as a whole can lead to logical contradictions or paradoxes).  One of the properties that is being misapplied here involves the concept of “things” in general.  If we try to consider the universe as a “thing” we can see how this is problematic by noting that we seem to define and conceptualize “things” with causal properties as entities or objects that are located in time and space (that’s an ontology that I think is pretty basic and universal).  However, the universe as a whole is the entirety of space and time (i.e. spacetime), and thus the universe as a whole contains all space and time, and thus can’t itself (as a whole) be located in space or time.
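A simple illustration of why properties of a set’s members need not transfer to the set as a whole (this particular example is mine, chosen only because it’s uncontroversial): every natural number is finite, yet the set of all natural numbers is not:

```latex
\[
\forall n \in \mathbb{N}:\ \bigl|\{0, 1, \ldots, n\}\bigr| < \infty
\qquad\text{yet}\qquad
\bigl|\mathbb{N}\bigr| = \aleph_0
\]
```

By the same token, even granting that everything within the universe has a cause, that property wouldn’t automatically transfer to the universe taken as a whole.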

Since the universe appears to be composed of all the things we know about, one might say that the universe is located within “nothing” at all, if that’s even intelligible to think about.  Either way, the universe as a whole doesn’t appear to be located in time or space, and thus it isn’t located anywhere at all.  Thus, it technically isn’t a “thing” at all, or at the very least, it is not a thing that has any causal properties of its own, since it isn’t located in time or space so as to have causal relations with other things.  Even if one insists on calling it a thing, despite the problems listed here, we are still left with the problem that we can’t assume that causal principles found within the universe apply to the universe as a whole.  So for a number of reasons, premise two of the KCA fails.  And since both premises are unsound, the argument no longer supports its conclusion.  So even if the universe does in fact have a cause, in some way unknown to us, the KCA doesn’t successfully support such a claim with its premises.

Is the Kalam Circular?

Yet another problem that Dan Barker and others have pointed out involves the language used in the first premise of the KCA.  The clause, “everything that begins to exist”, implies that reality can be divided into two sets: items that begin to exist (BE) and items that do not begin to exist (NBE).  In order for the KCA to work in arguing for God’s existence, the NBE set can’t be empty.  Even more importantly, it must accommodate more than one item to avoid simply being a synonym for God, for if God is the only object or item within NBE, then the premise “everything that begins to exist has a cause” is equivalent to “everything except God has a cause”.  This simply puts God into the definition of the premise of the argument that is supposed to be used to prove God’s existence, and thus would simply beg the question.  It should be noted that just because the NBE set must accommodate more than one possible item, this doesn’t entail that the NBE set must contain more than one item.  This specific problem with the KCA could be resolved if one could first show that there are multiple possible NBE candidates, followed by showing that of the multiple possible candidates within NBE, only one candidate is valid, and finally by showing that this candidate is in fact some personal creator, i.e., God.  If it can’t be shown that NBE can accommodate more than one item, then the argument is circular.  Moreover, if the only candidate for NBE is God, then the second premise “The universe began to exist” simply reduces to “The universe is not God”, which simply assumes what the argument is trying to prove.  Thus if the NBE set is simply synonymous with God, then the Kalam can be reduced to:

(1) Everything except God has a cause;

(2) The universe is not God;

Therefore,

(3) The universe has a cause.

As we can see, this syllogism is logically valid (though it is only sound if the premises are true, which is open to debate), but it is entirely useless as an argument for God’s existence.  Furthermore, regarding the NBE set, one must ask: where do theists obtain the idea that this NBE set exists?  That is, by what observations and/or arguments is the possibility of beginningless objects justified?  We don’t find any such observations in science, although it is certainly possible that the universe itself never began (we just don’t have observations to support this, at least not at this time), and the concept of a “beginningless universe” is in fact entirely consistent with many eternal cosmological models that have been proposed, in which case the KCA would still be undermined by refuting premise two in yet another way.  Other than the universe itself potentially being an NBE (which is plausible, though not empirically demonstrated as of yet), there don’t appear to be any other candidate NBEs, and there don’t appear to be any observations and/or arguments to justify proposing that any NBEs exist at all (other than perhaps the universe itself, which would be consistent with the law of conservation of mass and energy and/or the Quantum Eternity Theorem).

The KCA Fails

As we can see, the Kalam Cosmological Argument fails for a number of reasons, and thus is unsuccessful in arguing for the existence of God.  Thus, even though it may very well be the case that some god exists and did in fact create the universe, the KCA fails to support such a claim.

Here’s an excellent debate between the cosmologist Sean Carroll and the Christian apologist William Lane Craig which illustrates some of the problems with the KCA, specifically in terms of the evidence found within cosmology (or the lack thereof).  It goes without saying that Carroll won the debate by far, though he could certainly have raised more points in his rebuttals than he did.  Nevertheless, it was an entertaining and civil debate with good points presented on both sides.  Here’s another link to Carroll’s post-debate reflections on his blog.