Irrational Man: An Analysis (Part 1, Chapter 2: “The Encounter with Nothingness”)

In the first post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 1: The Advent of Existentialism, where Barrett gives a brief description of what he believes existentialism to be, and the environment it evolved within.  In this post, I want to explore Part 1, Chapter 2: The Encounter with Nothingness.

Ch. 2 – The Encounter With Nothingness

Barrett talks about the critical need for self-analysis, despite the widespread feeling that this task has already been carried out exhaustively.  But Barrett sees that feeling as contemporary society’s way of running away from the facts of our own ignorance.  Modern humankind, it seems to him, is in even greater need of questioning its own identity, for we seem to understand ourselves even less than when we first began asking who we are as a species.

1. The Decline of Religion

Ever since the end of the Middle Ages, religion as a whole has been on the decline.  This decrease in religiosity (particularly in Western civilization) became most prominent during the Enlightenment.  As science began to take off, the mechanistic structure and qualities of the universe (i.e. its laws of nature) began to reveal themselves in more and more detail.  This in turn led to a large number of superstitious and supernatural religious beliefs about the causes of various phenomena being replaced with scientific explanations that could be empirically tested and verified.  As theological explanations were increasingly supplanted by naturalistic ones, the presumed role of God and the Church began to evaporate.  Thus, at the purely intellectual level, we underwent a significant change in how we viewed the world, and subsequently in how we viewed the nature of human beings and our place in that world.

But, as Barrett points out:

“The waning of religion is a much more concrete and complex fact than a mere change in conscious outlook; it penetrates the deepest strata of man’s total psychic life…Religion to medieval man was not so much a theological system as a solid psychological matrix surrounding the individual’s life from birth to death, sanctifying and enclosing all its ordinary and extraordinary occasions in sacrament and ritual.”

We can see here how the role of religion has changed to some degree from medieval times to the present day.  Rather than simply being a set of beliefs that identified a person with a particular group and that carried soteriological, metaphysical, and ethical significance for the believer (as it is, more so, in modern times), it used to be a complete system, a total solution for how one was to live one’s life.  It also provided a means of psychological stability and coherence by supplying a ready-made narrative of the essence of man: a sense of familiarity and a pre-defined purpose and structure that didn’t have to be constructed from scratch by the individual.

While the loss of the Church involved losing an entire system of dogmatic teachings, symbols, and various rites and sacraments, the most important loss according to Barrett was the loss of a concrete connection to a transcendent realm of being.  We were now set free such that we had to grapple with the world on our own, with all its precariousness, and to deal head-on with the brute facts of our own existence.

What I find most interesting in this chapter is when Barrett says:

“The rationalism of the medieval philosophers was contained by the mysteries of faith and dogma, which were altogether beyond the grasp of human reason, but were nevertheless powerfully real and meaningful to man as symbols that kept the vital circuit open between reason and emotion, between the rational and non-rational in the human psyche.”

And herein lies the crux of the matter, for Barrett believes that religion’s greatest function historically was twofold: it served as a bridge between the rational and non-rational elements of our psychology, and as a barrier that limited the effective reach and power of our rationality over the rest of our psyches and our view of the world.  I would go even further and suggest that it may have allowed our emotional expression to co-exist and work more harmoniously with our reason, rather than being primarily at odds with it.

I agree with Barrett’s point here in that religion often promulgates ideas and practices that appeal to many of our emotional dispositions and intuitions, thus allowing people to express certain emotional states and to maintain comforting intuitions that might otherwise be hindered or subjugated by reason and rationality.  It has also provided a path for reason to connect, to some degree, with the non-rational, minimizing the need to compartmentalize rationality from the emotional or irrational influences on a person’s belief systems.  By granting people a way to combine reason and emotion, such that reason could be used to make some sense of emotion and give it some kind of validation without having to be rejected entirely, religion has historically been effective in helping people avoid the discomfort of rejecting beliefs they know to be reasonable while also avoiding the discomfort of inadequate emotional and non-rational expression.

Once religion began to go by the wayside, due in large part to the knowledge accumulated through reason and scientific progress, it became increasingly difficult to square the evidence and arguments that were becoming widely known with many of the claims that religion and the Church had been propagating for centuries.  Along with this loss of credibility came an increased need to compartmentalize reason and rationality from emotionally and non-rationally derived beliefs and experiences.  This difficulty led to a decline in religiosity for many, accompanied by the loss of the emotional and non-rational expression that religion had once offered the masses.  Once reason and rationality expanded beyond a certain threshold, they effectively popped the religious bubble that had previously contained them, causing many to feel homeless, out of place, and in many ways incomplete in the new world they now found themselves living in.

2. The Rational Ordering of Society

Historically, the organization of our lives was a relatively organic process, with a self-guiding, pragmatic, and intuitive structure and evolution.  But as we approached the modern age, as Barrett points out, we saw a drastic shift toward an increasingly rational form of organization, where efficiency and a sort of technical precision began to dominate the overall direction of society and the lives of each individual.  The rise of capitalism was part of this cultural evolutionary process (as was, I would argue, the Industrial Revolution), and it only further enhanced the power and influence of reason and rationality over our day-to-day lives.

The collectivization and distribution of labor involved in the mass production of commodities took us entirely out of our agrarian and hunter-gatherer roots.  We no longer lived off of the land, so to speak, and were no longer ensconced in the natural surroundings, or performing the kinds of day-to-day tasks, that our species had adapted to over its long evolutionary history.  And with this collectivization we also lost a large component of our individuality, a component that plays an important role in human psychology.

Barrett comments on how we’ve accepted modern society as normal, relatively unaware of our ancestral roots and our previous way of life:

“We are so used to the fact that we forget it or fail to perceive that the man of the present day lives on a level of abstraction altogether beyond the man of the past.”

And he goes on to talk about how this ratcheting forward of technological progress and mechanistic societal re-structuring is what gives us our incredible power over our environment, but at the cost of leaving us rootless and deprived of any concrete sense of feeling, at a time when it’s needed more than ever.

Perhaps a more interesting point he makes is with respect to how our increased mechanization and collectivization has changed a fundamental part of how our psyche and identity operate:

“Not only can the material wants of the masses be satisfied to a degree greater than ever before, but technology is fertile enough to generate new wants that it can also satisfy…All of this makes for an extraordinary externalization of life in our time.”

And it is this externalization of our identity and psychology, manifested in ever-growing and ever-changing sets of material objects and flows of information, that is interesting to ponder.  It reminds me somewhat of Richard Dawkins’ concept of the extended phenotype, where the effects of an organism’s genes aren’t limited to the organism’s body, but extend into how the organism structures its environment.  While that term is technically limited to effects that bear directly on the organism’s reproductive success, I prefer to think of this extended phenotype as encompassing everything the organism creates and the totality of its behaviors.

The reason I mention this concept is because I think it makes for a useful analogy here.  In the earlier evolution of organisms on earth, the genes’ effects, or the resulting phenotypes, were primarily manifested as the particular body and bodily behavior of the organism; as organisms became more complex (particularly those that evolved brains and a nervous system), that phenotype began to extend itself into the organism’s ability to make external structures out of raw materials found in its environment.  More and more genetic resources were allotted to the organism’s brain, which made this capacity for environmental manipulation possible.  As this change occurred, the previous boundaries that defined the organism vanished, as the external constructions effectively became an extension of the organism’s body.  But with this new capacity came a loss of intimacy, in the sense that the organism wasn’t connected to these external structures in the same way it was connected to its own feelings and internal bodily states; and these external structures also lacked the privacy and hidden qualities inherent in an organism’s thoughts, feelings, and overall subjective experience.

Likewise, as we’ve evolved culturally, eventually gaining the ability to construct and mass-produce a plethora of new material goods, we began to dedicate a larger proportion of our attention to these external objects, wanting more and more of them, well past what we needed for survival.  And we began to invest or incorporate more of ourselves, including our knowledge and information, in these externalities, compensating by investing less and less in our internal, private states and locally stored knowledge.  It would be impractical, if not impossible, for an organism to perform increasingly complex behaviors and to continuously increase its ability to manipulate its environment without this kind of trade-off occurring in its identity and in how it distributes its limited psychological resources.

And herein lies the source of our seemingly fractured psyche: the natural selection of curiosity, knowledge accumulation, and behavioral complexity for survival purposes has been co-opted for just about any purpose imaginable, since the hardware and schemas required for the former have a capacity that transcends their evolutionary purpose, along with the finite boundaries, guiding constraints, and essential structure of the lives we once had in our evolutionary past.  We’ve moved beyond what used to be a kind of essential quality and highly predictable trajectory of our lives and of our species, and into the unknown; from the realm of the finite and the familiar to the seemingly infinite realm of the unknown.

A big part of this externalization has manifested itself in the new ways we acquire, store, and share information, such as with the advent of mass media.  As Barrett puts it:

“…journalism enables people to deal with life more and more at second hand.  Information usually consists of half-truths, and “knowledgability” becomes a substitute for real knowledge.  Moreover, popular journalism has by now extended its operations into what were previously considered the strongholds of culture – religion, art, philosophy…It becomes more and more difficult to distinguish the secondhand from the real thing, until most people end by forgetting there is such a distinction.”

I think this ties well into what Barrett mentioned previously when he talked about how modern civilization is built on increasing levels of abstraction.  The very information we’re absorbing, in order to make sense of and deal with a large aspect of our contemporary world, is second hand at best.  The information we rely on has become increasingly abstracted, manipulated, reinterpreted, and distorted.  The origin of so much of this information is now at least one level away from our immediate experience, giving it a quality that is disconnected, less important, and far less real than it otherwise would be.  But we often forget that there’s any epistemic difference between our first-hand lived experience and the information that arises from our mass media.

To add to Barrett’s previous description of existentialism as a reaction against positivism, he also mentions Karl Jaspers’ view of existentialism, which Jaspers described as:

“…a struggle to awaken in the individual the possibilities of an authentic and genuine life, in the face of the great modern drift toward a standardized mass society.”

Though I concur with Jaspers’ claim that modernity has involved a shift toward a standardized mass society in a number of ways, I also think that it has provided the means for many more ways of being unique, many more possible talents and interests for one to explore, and many more kinds of goals to choose from for one’s life project(s).  Collectivization and the distribution of labor, along with the technology that has precipitated from them, have allowed many people to avoid spending all day hunting and gathering food, making or cleaning their clothing, and performing the other tasks that had previously consumed most of one’s time.

Now many people (in the industrialized world at least) have the ability to accumulate enough free time to explore many other types of experiences, including reading and writing, exploring aspects of our existence with highly-focused introspective effort (as in philosophy), creating or enjoying vast quantities of music and other forms of art, listening to and telling stories, playing any number of games, and thousands of other activities.  And even though some of these activities have been around for millennia, many of them have not (or there was little time for them), and of those that have been around the longest, there were still far fewer choices than what we have on offer today.  So we mustn’t forget that many people develop a life-long passion for at least some of these experiences that would never have been made possible without our modern society.

The issue, I think, lies in the balance or imbalance between standardization and collectivization on the one hand (such that we reap the benefits of more free time and more recreational choices), and opportunities for individualistic expression on the other.  And of the opportunities that exist for individualistic expression, there is still the need to track the psychological consequences that result from them, so that we can pick the more psychologically fulfilling options; the ones that better allow us to keep open that channel between reason and emotion, and between the rational and the non-rational, that religion once provided, as Barrett mentioned earlier.

We also have to accept the fact that the findings of science have largely dehumanized nature, or at least inhibited our anthropomorphization of it, showing us instead that the universe is indifferent to us and to our goals; that humans, and life in general, are more of an aberration than anything else within a vast cosmos that is inhospitable to life.  Only after acknowledging the situation we’re in can we fully appreciate the consequences that modernity has had in upending the comforting structure that religion once gave to humans throughout their lives.  As Barrett tells us:

“Science stripped nature of its human forms and presented man with a universe that was neutral, alien, in its vastness and force, to his human purposes.  Religion, before this phase set in, had been a structure that encompassed man’s life, providing him with a system of images and symbols by which he could express his own aspirations toward psychic wholeness.  With the loss of this containing framework man became not only a dispossessed but a fragmentary being.”

Although we can’t unlearn the knowledge that has caused religion to decline, including knowledge bearing on questions of our ultimate meaning and purpose, we can certainly find new ways of filling the psychological void that many feel as a result of this decline.  The modern world has many potential opportunities for psychologically fulfilling projects in life, and these opportunities need to be more thoroughly explored.  But existentialist thought rightly reminds us of how the fruits of our rational and enlightened philosophy have been less capable than religion once was of providing a satisfying answer to the question “What are human beings?”  Along with the fruits of the Enlightenment came a lack of consolation, and a number of painful truths pertaining to the temporal and contingent nature of our existence, previously thought to possess both an eternal and necessary character.  Overall, this cultural change and accumulation of knowledge effectively forced humanity out of its comfort zone.  Barrett described the situation quite well when he said:

“In the end, he [modern man] sees each man as solitary and unsheltered before his own death.  Admittedly, these are painful truths, but the most basic things are always learned with pain, since our inertia and complacent love of comfort prevent us from learning them until they are forced upon us.”

Modern humanity, then, has become alienated from God, from nature, and from the complex social machinery that produces the goods and services it both wants and needs.  Worse yet is the alienation from one’s own self that has occurred as humans have found themselves living in a society that expects each of them to perform some specific function in life (most often not of their own choosing); society ends up identifying each person with that function, forgetting or ignoring the real person buried underneath.

3. Science and Finitude

In this last section of chapter two, Barrett discusses some of the ultimate limitations in our use of reason and in the scope of knowledge we can obtain from our scientific and mathematical methods of inquiry.  Reason itself is described as the product of a creature “whose psychic roots still extend downward into the primeval soil,” and is thus a creation from an animal that is still intimately connected to an irrational foundation; an animal still possessing a set of instincts arising from its evolutionary origins.

We see this core presence of irrationality in our day-to-day lives whenever we have trouble employing reason, such as when it conflicts with our emotions and intuitions.  Barrett brings up the optimism of the confirmed rationalist, who believes that these obstacles of irrationality may yet be overcome one day simply by employing reason more cleverly than before.  Clearly Barrett doesn’t share this optimism, and he tries to support his pessimism by pointing out a few limitations of reason suggested by the work of Immanuel Kant, by modern physics, and by mathematics.

In the Critique of Pure Reason, Kant lays out a substantial number of claims and concepts relating to metaphysics and epistemology, discussing the limitations of both reason and the senses.  Among other things, he claims that our knowledge is conditioned by certain a priori intuitional forms or predispositions, such as space and time, that allow us to cognize anything at all.  And so if there are any “things in themselves”, that is, real objects underlying whatever appears to us in the external world, objects with qualities and attributes that are independent of our experience, then we can never know anything of substance about those underlying features.  We can never see an object or understand it in any way without incorporating a large number of intuitional forms and assumptions in order to create that experience at all; we can never see the world without seeing it through the limited lens of our minds and our body’s sensory machinery.

For Kant, this also means that we often use reason erroneously to make unjustified claims of knowledge about the transcendent, particularly within metaphysics and theology.  When reason is applied to ideas that can’t be confirmed through sensory experience, or that lie outside our realm of possible experience (such as the idea that cause-and-effect laws govern every interaction in the universe, something we can never know through experience), it leads to knowledge claims that it can’t justify.  Another limitation of reason, according to Kant, is that it operates through an a priori drive toward unification in our experience, and so the categories and concepts that we infer to exist through reason are limited by this underlying unifying principle.  Science has added rigidity and specificity to this unification by going beyond what we unify through unmodified experience, that is, experience without the aid of instruments such as telescopes and microscopes (we see the sun “rise” and “set” every day, but those instruments have given us the data showing that the earth rotates on its axis rather than the sun revolving around the earth).  Nevertheless, unification is the ultimate goal of reason, whether it is applied with or without a strict scientific method.

Then within physics we find another limitation on our possible understanding of the world.  Heisenberg’s Uncertainty Principle showed us that we are unable to simultaneously know complementary parameters of a particle with arbitrarily high precision.  We eventually hit a limit where, for example, if we know the position of a particle (such as an electron) with a high degree of accuracy at some particular time, then we’re unable to know the momentum of that particle with the same accuracy.  The more precisely we know one complementary parameter, the less precisely we can know the other.  Barrett describes this discovery as showing that nature may be irrational and chaotic at its most fundamental level, and then he says:

“What is remarkable is that here, at the very farthest reaches of precise experimentation, in the most rigorous of the natural sciences, the ordinary and banal fact of our human limitations emerges.”

This is indeed interesting because for a long while many rationalists and positivists held that our potential knowledge was unlimited (or finite, yet complete) and that science was the means to gain this open-ended or complete knowledge of the universe.  Then quantum physics delivered a major blow to this assumption, showing us instead that our knowledge is inherently limited based on how the universe is causally structured.  However, it’s worth pointing out that this is not a human limitation per se, but a limitation of any conscious system acquiring knowledge in our universe.  It wouldn’t matter if humans had different brains, better technology, or used artificial intelligence to perform the computations or measurements.  There is simply a limit on what can ever be measured; a limit on what can ever be known about the state of any system that is being studied.
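To make the limit concrete, the uncertainty principle for the position/momentum pair is usually written as a simple inequality (other complementary pairs obey analogous relations):

Δx · Δp ≥ ħ/2

where Δx and Δp are the uncertainties (standard deviations) in position and momentum, and ħ is the reduced Planck constant.  No improvement in instrumentation can push the product of the two uncertainties below this bound; sharpening one necessarily blurs the other.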

Gödel’s Incompleteness Theorems likewise showed that there are inherent limitations in any formulation of number theory that is rich enough: no matter how large or seemingly complete its set of axioms, if the formulation is consistent, there will be statements that can’t be proven or disproven within it.  Likewise, the consistency of such a formulation can’t be proven by the formulation itself; any such proof depends on assumptions that lie outside of it.  However, once again, contrary to what Barrett claims, I don’t think this should be taken as a human limitation of reason, but rather as a limitation of formal mathematics generally.
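To state what Gödel actually proved a bit more carefully (a rough paraphrase rather than the full technical statement): for any consistent formal system T whose axioms can be effectively listed and which is strong enough to express basic arithmetic, (1) there is a sentence G such that T proves neither G nor its negation, and (2) T cannot prove its own consistency.  The limitation attaches to whichever formal system we choose, not to the person (or machine) using it, which is why I read it as a limit on formal mathematics rather than on human reason in particular.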

Regardless, I concede Barrett’s overarching point that, in all of these cases (Kant, physics, and mathematics), we still run into scenarios where we are unable to do or know something that we may previously have thought we could do or know, at least in principle.  These discoveries ran counter to the beliefs of many who thought that humans were inherently unstoppable in these endeavors of knowledge accumulation, and in our ability to create technology capable of solving any problem whatsoever.  We can’t do this.  We are finite creatures with finite brains, but perhaps more importantly, our capacities are also limited by what knowledge is theoretically available to any conscious system trying to better understand the universe.  The universe is inherently unknowable in at least some respects, which means it is unpredictable in at least some respects.  And unpredictability doesn’t sit well with humans intent on eliminating uncertainty, eliminating the unknown, and trying to have a coherent narrative to explain everything encountered throughout one’s existence.

In the next post in this series, I’ll explore Irrational Man, Part 1, Chapter 3: The Testimony of Modern Art.


The Experientiality of Matter

If there’s one thing that Nietzsche advocated for, it was to eliminate dogmatism and absolutism from one’s thinking.  And although I generally agree with him here, I do think that there is one exception to this rule.  One thing that we can be absolutely certain of is our own conscious experience.  Nietzsche actually had something to say about this (in Beyond Good and Evil, which I explored in a previous post), where he responds to Descartes’ famous adage “cogito ergo sum” (the very adage associated with my blog!), and he basically says that Descartes’ conclusion (“I think therefore I am”) shows a lack of reflection concerning the meaning of “I think”.  He wonders how he (and by extension, Descartes) could possibly know for sure that he was doing the thinking rather than the thought doing the thinking, and he even considers the possibility that what is generally described as thinking may actually be something more like willing or feeling or something else entirely.

Despite Nietzsche’s criticism against Descartes in terms of what thought is exactly or who or what actually does the thinking, we still can’t deny that there is thought.  Perhaps if we replace “I think therefore I am” with something more like “I am conscious, therefore (my) conscious experience exists”, then we can retain some core of Descartes’ insight while throwing out the ambiguities or uncertainties associated with how exactly one interprets that conscious experience.

So I’m in agreement with Nietzsche in the sense that we can’t be certain of any particular interpretation of our conscious experience, including whether or not there is an ego or a self (which Nietzsche actually describes as a childish superstition similar to the idea of a soul), nor can we make any certain inferences about the world’s causal relations stemming from that conscious experience.  Regardless of these limitations, we can still be sure that conscious experience exists, even if it can’t be ascribed to an “I” or a “self” or any particular identity (let alone a persistent identity).

Once we’re cognizant of this certainty, and if we’re able to crawl out of the well of solipsism and eventually build up a theory about reality (for pragmatic reasons at the very least), then we must remain aware of the priority of consciousness in any resultant theory we construct about reality, with regard to its structure or any of its other properties.  Personally, I believe that some form of naturalistic physicalism (a realistic physicalism) is the best candidate for an ontological theory that is the most parsimonious and explanatory for all that we experience in our reality.  However, most people who adopt some brand of physicalism seem to throw the baby out with the bathwater (so to speak), whereby consciousness gets eliminated, either by assuming it’s an illusion or by assuming it’s not physical (and therefore has no room in a physicalist theory, aside from its neurophysiological attributes).

Although I used to feel differently about consciousness (and its relationship to physicalism), in that I once thought it plausible for it to be some kind of illusion, upon further reflection I’ve come to realize that this was a rather ridiculous position to hold.  Consciousness can’t be an illusion in the proper sense of the word, because the experience of consciousness is real.  Even if I’m hallucinating, such that my perceptions don’t directly correspond to the incoming sensory information transduced through my body’s sensory receptors, I can only say that the perceptions are illusory insofar as they don’t directly map onto that incoming sensory information.  I still can’t say that having these experiences is itself an illusion.  And this is because consciousness is self-evident and experiential in that it constitutes whatever is experienced, no matter what that experience consists of.

As for my thoughts on physicalism, I came to realize that positing consciousness as an intrinsic property of at least some kinds of physical material (analogous to a property like mass) allows us to avoid having to call consciousness non-physical.  If it is simply an experiential property of matter, that doesn’t negate its being a physical property of that matter.  It may be that we can’t access this property in such a way as to evaluate it with external instrumentation, as we can for the other properties of matter that we know of, such as mass, charge, spin, or what-have-you, but that doesn’t mean an experiential property should be off limits for a physicalist theory.  It’s just that most physicalists assume that everything can be, or has to be, reducible to the externally accessible properties that our instrumentation can measure.  And this suggests that they’ve simply assumed that the physical can only include externally accessible properties of matter, rather than both internally and externally accessible properties of matter.

Now it’s easy to see why science might push philosophy in this direction, since its methodology is largely grounded in third-party verification and a form of objectivity involving the ability to accurately quantify everything about a phenomenon, with little or no regard for introspection or subjectivity.  And I think that this has caused many a philosopher to paint themselves into a corner by assuming that any ontological theory underlying the totality of our reality must be constrained in the same way that the physical sciences are.  To see why this is an unwarranted assumption, consider a “black box” that can only be evaluated externally, by a certain method.  It would be fallacious to conclude that, just because we are unable to access the inside of the box, the box must therefore be empty, or that there can’t be anything substantially different inside the box compared to what is outside it.

We can analogize this limitation of studying consciousness with our ability to study black holes within the field of astrophysics, where we’ve come to realize that accessing any information about their interior (aside from how much mass there is) is impossible to do from the outside.  And if we managed to access this information (if there is any) from the inside by leaping past its outer event horizon, it would be impossible for us to escape and share any of that information.  The best we can do is to learn what we can from the outside behavior of the black hole in terms of its interaction with surrounding matter and light and infer something about the inside, like how much matter it contains (e.g. we can infer the mass of a black hole from its outer surface area).  And we can learn a little bit more by considering what is needed to create or destroy a black hole, thus creating or destroying any interior qualities that may or may not exist.
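For the simplest case of a non-rotating, uncharged (Schwarzschild) black hole, the relation alluded to in the parenthetical above is exact, since the horizon radius and area are fixed entirely by the mass:

r_s = 2GM/c²,  A = 4π·r_s² = 16πG²M²/c⁴

so measuring the horizon’s area (or, in practice, the orbits of matter and light well outside it) tells us the mass, and essentially nothing else about whatever the interior may contain.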

A black hole can only form from certain configurations of matter, particularly aggregates that are above a certain mass and density.  And it can only be destroyed by starving it, depriving it of any new matter so that it slowly evaporates entirely into Hawking radiation, destroying anything that was on the inside in the process.  So we can infer that any internal qualities it does have, however inaccessible they may be, can be brought into and out of existence by certain physical processes.

Similarly, we can infer some things about consciousness by observing one’s external behavior, including some of the conditions that can create, modify, or destroy that type of consciousness, but we are unable to know what it’s like to be on the inside of that system once it exists.  We’re only able to know the inside of our own conscious system, where we are in some sense inside our own black hole, with nobody else able to access this perspective.  And I think it is easy enough to imagine that certain configurations of matter simply have an intrinsic, externally inaccessible experiential property, just as certain configurations of matter lead to the creation of a black hole with its own externally inaccessible and qualitatively unknown internal properties.  The fact that we can’t access the black hole’s interior by strictly external methods, in order to determine its internal properties, doesn’t mean we should assume that whatever properties may exist inside it are fundamentally non-physical, just as we wouldn’t consider alternate dimensions (such as those predicted in M-theory/string theory) that we can’t physically access to be non-physical.  Perhaps one or more of these inaccessible dimensions (if they exist) is what accounts for an intrinsic experiential property within matter (though this is entirely speculative, and need not be true for the previous points to hold, but it’s an interesting thought nevertheless).

Here’s a relevant quote from the philosopher Galen Strawson, where he outlines what physicalism actually entails:

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an ‘inference to the best explanation’… Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. ‘And were the inmost essence of things laid open to us’ I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism… So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of ‘substance dualism’… Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?
While I don’t believe that all matter has a mind per se, because a mind is generally conceived as being a complex information processing structure, I think it is likely that all matter has an experiential quality of some kind, even if most instantiations of it are entirely unrecognizable as what we’d generally consider to be “consciousness” or “mentality”.  I believe that the intuitive gap here is born from the fact that the only minds we are confident exist are those instantiated by a brain, which has the ability to make an incredibly large number of experiential discriminations between various causal relations, thus giving it a capacity that is incredibly complex and not observed anywhere else in nature.  On the other hand, a particle of matter on its own would be hypothesized to have the capacity to make only one or two of these kinds of discriminations, making it unintelligent and thus incapable of brain-like activity.  Once a person accepts that an experiential quality can come in varying degrees from say one experiential “bit” to billions or trillions of “bits” (or more), then we can plausibly see how matter could give rise to systems that have a vast range in their causal power, from rocks that don’t appear to do much at all, to living organisms that have the ability to store countless representations of causal relations (memory) allowing them to behave in increasingly complex ways.  And perhaps best of all, this approach solves the mind-body problem by eliminating the mystery of how fundamentally non-experiential stuff could possibly give rise to experientiality and consciousness.

The Origin and Evolution of Life: Part II

Even though life appears to be favorable in terms of the second law of thermodynamics (as explained in part one of this post), very important questions have remained unanswered regarding the origin of life, including what mechanisms or platforms it could have used to get itself going initially.  This can be summarized by a “which came first, the chicken or the egg” dilemma, where biologists have wondered whether metabolism came first or whether it was instead self-replicating molecules like RNA that came first.

On the one hand, some have argued that since metabolism is dependent on proteins, enzymes, and the cell membrane itself, it would require either RNA or DNA to code for the proteins needed for metabolism, thus implying that RNA or DNA would have to originate before metabolism could begin.  On the other hand, even the generation and replication of RNA or DNA requires a catalytic substrate of some kind, and this is usually accomplished with proteins along with metabolic driving forces to accomplish those polymerization reactions, which would seem to imply that metabolism, along with some enzymes, would be needed to drive the polymerization of RNA or DNA.  So biologists were left with quite a conundrum.  This was partially resolved several decades ago, when it was realized that RNA can not only act as a means of storing genetic information, just like DNA, but can also catalyze chemical reactions, just as an enzyme protein can.  Thus, it is feasible that RNA could act as both an information storage molecule and an enzyme.  While this helps to solve the problem once RNA began to replicate itself and evolve over time, the problem still remains of how the first molecules of RNA formed, because it seems that some kind of non-RNA metabolic catalyst would be needed to drive this initial polymerization.  Which brings us back to needing some kind of catalytic metabolism to drive these initial reactions.

These RNA polymerization reactions may have spontaneously formed on their own (or evolved from earlier self-replicating molecules that predated RNA), but current models of what the pre-biotic earth would have been like around four billion years ago suggest that there would have been too many destructive chemical reactions, which would have suppressed the accumulation of any RNA and likely of other self-replicating molecules as well.  What seems to be needed, then, is some kind of catalyst that could create them quickly enough that they would be able to accumulate in spite of any destructive reactions present, and/or some kind of physical barrier (like a cell wall) that protects the RNA or other self-replicating polymers so that they don’t interact with those destructive processes.

One possible solution to this puzzle that has been developing over the last several years involves alkaline hydrothermal vents.  We actually didn’t know that these kinds of vents existed until the year 2000, when they were discovered on a National Science Foundation expedition in the mid-Atlantic.  A few years later they were studied more closely to see what kinds of chemistries were involved.  Unlike the more well-known “black smoker” vents (which were discovered in the 1970s), these alkaline hydrothermal vents have several properties that would have been hospitable to the emergence of life back during the Hadean eon (between 4.6 and 4 billion years ago).

The ocean water during the Hadean eon would have been much more acidic due to the higher concentrations of carbon dioxide (thus forming carbonic acid), and this acidic ocean water would have mixed with the hydrogen-rich alkaline water found within the vents, forming a natural proton gradient across the naturally formed pores of these rocks.  Also, electron transfer would likely have occurred when the hydrogen- and methane-rich vent fluid contacted the carbon dioxide-rich ocean water, thus generating an electrical gradient.  This is already very intriguing, because all living cells ultimately derive their metabolic driving forces from proton gradients, or more generally from the flow of some kind of positive charge carrier and/or electrons.  Since the rock found in these vents undergoes a process called serpentinization, which spreads the rock apart into various small channels and pockets, many different kinds of pores form in the rock, and some of them would have been bounded by very thin walls separating the acidic ocean water from the alkaline, hydrogen-rich vent fluid.  This would have provided the semi-permeable barrier that modern cells have, and which we expect the earliest proto-cells to have had, and it would have provided the necessary source of energy to power various chemical reactions.
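To give a rough sense of the energetics involved, bioenergetics quantifies such a gradient as the proton-motive force, which combines the electrical and pH differences across the membrane (sign conventions vary between sources):

Δp = ΔΨ − (2.303·RT/F)·ΔpH ≈ ΔΨ − (59 mV)·ΔpH at about 25 °C

where ΔΨ is the electrical potential difference across the barrier, R is the gas constant, T the temperature, and F the Faraday constant.  A pH difference of three or four units across a thin mineral wall, of the kind proposed for these vents, would by itself correspond to a driving force on the order of 200 mV, roughly comparable to the proton-motive force that modern cell membranes maintain.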

Additionally, these vents would also have provided a source of minerals (namely green rust and molybdenum) which likely would have acted like enzymes, catalyzing reactions as various chemicals came into contact with them.  The green rust could have allowed the proton gradient to be used to generate phosphate-containing molecules that stored the energy produced from the gradient, similar to how all living systems that we know of store their energy in ATP (adenosine triphosphate).  The molybdenum, on the other hand, would have assisted in electron transfer through those membranes.

So this theory provides a very plausible way for catalytic metabolism, as well as proto-cellular membrane formation, to have resulted from natural geological processes.  These proto-cells would then likely have begun concentrating simple organic molecules formed from the reaction of CO2 and H2 with all the enzyme-like minerals that were present.  These molecules could then react with one another to polymerize and form larger and more complex molecules, eventually including nucleotides and amino acids.  One promising clue that supports this theory is the fact that every living system on earth shares a common metabolic pathway, known as the citric acid cycle or Krebs cycle, which operates in the forward direction in aerobic organisms and in the reverse direction in anaerobic organisms.  Since this cycle consists of only 11 molecules, and since all biological components and molecules that we know of in any species are built from some number or combination of these 11 fundamental building blocks, scientists are trying to test (among other things) whether they can mimic these alkaline hydrothermal vent conditions, along with the acidic ocean water that would have been present in the Hadean eon, and see whether some or all of these molecules are produced.  If they can, it will show that this theory is a highly plausible account of the origin of life.

Once these basic organic molecules were generated, proteins would eventually have been able to form, some of which could have made their way to the membrane surface of the pores and acted as pumps, directing the natural proton gradient to do useful work.  Once those proteins evolved further, it would have been possible and advantageous for the membranes to become less permeable, so that the gradient could be focused on the pump channels in the membranes of these proto-cells.  The membrane could have begun to change into one made from lipids produced by the metabolic reactions, and we already know that lipids readily form micelles, or small closed spherical structures, once they aggregate in aqueous conditions.  As this occurred, the proto-cells would no longer have been trapped in the porous rock, but would eventually have been able to slowly migrate away from the vents altogether, eventually forming the phospholipid bilayer cell membranes that we see in modern cells.  Once this got started, self-replicating molecules and the rest of the evolving cell would have undergone natural selection, as per the Darwinian evolution that most of us are familiar with.

As per the earlier discussion regarding life serving as entropy engines and energy dissipation channels, this self-replication would also have been favored thermodynamically, because replicating those entropy engines and energy dissipation channels only makes the overall system more effective at dissipating energy.  Thus, we can tie this all together: natural geological processes would have allowed the required metabolism to form, thus powering organic molecular synthesis and polymerization, with all of these processes serving to increase entropy and maximize energy dissipation.  All that was needed to initiate this was a planet with common minerals, water, and CO2; natural geological processes could do the rest of the work.  These kinds of planets actually seem to be fairly common in our galaxy, with estimates ranging in the billions, each potentially harboring life (or where it may be just a matter of time before life initiates and evolves, if it hasn’t already).  While there is still a lot of work to be done to confirm the validity of these models and to find ways of testing them rigorously, we are getting relatively close to solving the puzzle of how life originated, why it is the way it is, and how we can better search for it in other parts of the universe.

The Origin and Evolution of Life: Part I

In the past, various people have argued that life originating at all, let alone evolving higher complexity over time, was thermodynamically unfavorable due to the decrease in entropy involved in both circumstances, and thus that it violated the second law of thermodynamics.  For those unfamiliar with the second law, it basically asserts that the amount of entropy (often referred to as disorder) in a closed system tends to increase over time, or to put it another way, that the amount of energy available to do useful work in a closed system tends to decrease over time.  So it has been argued that since the origin of life and the evolution of greater biological complexity would entail decreases in entropy, these events are at best unfavorable (and therefore the result of highly improbable chance), or at worst altogether impossible.

We’ve known for quite some time now that these thermodynamic arguments aren’t valid, because earth isn’t a thermodynamically closed or isolated system: we receive a constant supply of energy from the sun.  Because of this constant influx, and because the entropy increase from the sun far outweighs the decrease in entropy produced by all biological systems on earth, the net entropy of the entire system increases, in line with the second law, just as we would expect.
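Stated as a rough entropy budget, the second law only constrains the total for the whole sun-earth-space system, not each piece of it:

ΔS_total = ΔS_sun + ΔS_earth (including life) + ΔS_radiated to space ≥ 0

so a local decrease in entropy on earth (the ordering that living things represent) is perfectly permissible as long as it is paid for by a larger increase elsewhere, which the sun’s radiation supplies many times over.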

However, even though the emergence and evolution of life on earth do not violate the second law and are thus physically possible, that still doesn’t show that they are probable processes.  What we need to know is how favorable the reactions are that are required for initiating and then sustaining these processes.  Several very important advancements have been made in abiogenesis over the last ten to fifteen years, with the collaboration of geologists and biochemists, and it appears that they are in fact not only possible but actually probable processes for a few reasons.

One reason is that the chemical reactions that living systems undergo produce a net increase in entropy as well, despite the drop in entropy associated with every cell and/or its arrangement with respect to other cells.  This is because all living systems give off heat with every favorable chemical reaction that drives the metabolism and perpetuation of those living systems.  This gain in entropy from heat loss more than compensates for the loss in entropy that results from the production and maintenance of all the biological components, whether lipids, sugars, nucleic acids, or amino acids and more complex proteins.  Beyond this, as more complexity arises during the evolution of cells and living systems, the entropy that those systems produce tends to increase even more, so living systems with a higher level of complexity appear to produce a greater net entropy (on average) than less complex living systems.  Furthermore, once photosynthetic organisms evolved in particular, any heat they give off in the form of radiation ends up being of lower energy (infrared) than the photons given off by the sun to power those reactions in the first place.  Thus, we can see that living systems effectively dissipate the incoming energy from the sun, and energy dissipation is thermodynamically favorable.
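A back-of-the-envelope number illustrates the point about re-radiated infrared (ignoring factors of order one from the detailed thermodynamics of radiation): a joule of sunlight is emitted at the sun’s surface temperature of roughly 5,800 K and so carries about 1/5800 ≈ 1.7 × 10⁻⁴ J/K of entropy, while the same joule re-radiated by the earth at roughly 288 K carries about 1/288 ≈ 3.5 × 10⁻³ J/K, some twenty times more.  Every joule that passes through the biosphere therefore leaves the system with far more entropy than it arrived with, regardless of how much local ordering it powered along the way.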

Living systems seem to serve as a controllable channel of energy flow for that dissipation, much like lightning, the eye of a hurricane, or a tornado, where high-energy states in the form of charge, pressure, or temperature gradients fall to a lower energy state by dissipating that energy through specific, focused channels that spontaneously form (e.g. individual concentrated lightning bolts, the eye of a hurricane, vortices, etc.).  These channels for energy flow form, and are favorable, because they allow the energy to be dissipated faster: the channels are initiated by some direction of energy flow that is able to self-amplify into a path of decreasing resistance for that dissipation.  Life, and the metabolic processes involved with it, seems to direct energy flow in ways very similar to these other naturally arising processes in non-living physical systems.  Interestingly enough, a related hypothesis has been proposed for why consciousness and eventually self-awareness would have evolved (beyond the traditional reasons proposed by natural selection): if organisms can evolve the ability to predict where energy is going to flow and where an energy dissipation channel will form (or to form more effective channels themselves), then conscious organisms can behave in ways that dissipate energy even faster and catalyze more entropy production, suggesting why certain forms of biological complexity such as consciousness, memory, etc., would also have been favored from a thermodynamic perspective.

Thus, the origin of life as well as the evolution of biological complexity appears to be favored by the second law, which points to a possible fundamental physical driving force behind the origin and evolution of life.  Basically, living systems appear to function as entropy engines and catalytic energy dissipation channels, and these engines and channels produce entropy at a greater rate than the planet otherwise would in the absence of that life, revealing at least one possible driving force behind life, namely, the second law of thermodynamics.  So ironically, not only does the origin and evolution of life not violate the second law, but it actually seems to be an inevitable (or at least favorable) result of the second law.  Some of these concepts are still being developed in various theories and require further testing to better validate them, but they are supported by well-established physics and by consistent and sound mathematical models.

Perhaps the most poetic concept I’ve recognized in these findings is that life is effectively speeding up the heat death of the universe.  That is, the second law of thermodynamics suggests that the universe will eventually lose all of its useful energy when all the stars burn out and all matter eventually spreads out and decays into lower and lower energy photons, and thus the universe is destined to undergo a heat death.  Life, because it produces entropy faster than the universe otherwise would in its absence, is actually speeding up this inevitable death of the universe, which is quite fascinating when you think about it.  At the very least, it should give a new perspective to those who ask the question “what is the meaning or purpose of life?”  Even if we don’t think it is proper to think of life as having any kind of objective purpose in the universe, what life is in fact doing is accelerating the death of not only itself, but of the universe as a whole.  Personally, this further reinforces the idea that we should all ascribe our own meaning and purpose to our lives, because we should be enjoying the finite amount of time that we have, not only as individuals, but as part of the entire collective life that exists in our universe.

To read about the newest and most promising discoveries that may explain how life got started in the first place, read part two here.

Mind, Body, and the Soul: The Quest for an Immaterial Identity

There’s little if any doubt that the brain (the human brain in particular) is the most complex entity or system that we’ve ever encountered in the known universe, and thus it is not surprising that it has allowed humans to reach the top of the food chain and to manipulate our environment more than any other creature on Earth.  Not only has it provided humans with the necessary means for surviving countless environmental pressures, effectively evolving as a sort of anchor and catalyst for our continued natural selection over time (through learning, language, adaptive technology, etc.), but it has also allowed humans to become aware of themselves, aware of their own consciousness, and aware of their own brains in numerous other ways.  The brain appears to be the first evolved feature of an organism capable of mapping the entire organism (including its interaction with the external environment), and it may even be the case that consciousness later evolved as a result of the brain making maps of itself.  Beyond these capabilities, the human brain has also been able to map its own activity in terms of perceptually acquired patterns (i.e. when we study and learn about how our brains work).

It isn’t at all surprising that people marvel over the complexity, beauty, and even seemingly surreal qualities of the brain as it produces the qualia of our subjective experience, including all of our sensations, emotions, and the feelings that ensue.  Some of our human attributes seem so remarkable that many people have gone so far as to say that at least some of them are supernatural, supernaturally endowed, or forever exclusive to humans.  For example, some religious people claim that humans alone have some kind of immaterial soul that exists outside of our experiential reality.  Some also believe that humans alone possess free will, or are conscious in some way forever exclusive to humans (some have even argued that consciousness in general is an exclusively human trait), along with a host of other (perhaps anthropocentric) “human only” attributes.  In the interest of philosophical exploration, I’d like to consider and evaluate some of these claims about “exclusively human” attributes.  In particular, I’d like to focus on the non-falsifiable claim of having a soul, with the aid of reason and a couple of thought experiments, although these thought experiments may also shed some light on other purported “exclusively human” attributes (e.g. free will, consciousness, etc.).  For simplicity in these thought experiments, I may periodically refer to any or all purported “humanly exclusive” attributes as simply “H”.  Let’s begin by briefly examining some of the common conceptions of a soul and how it is purported to relate to the physical world.

What is a Soul?

It seems that most people would define a soul to be some incorporeal entity or essence that serves as an immortal aspect or representation of an otherwise mortal/living being.  Furthermore, many people think that souls are something possessed by human beings alone.  There are also people who ascribe souls to non-living entities (such as bodies of water, celestial bodies, wind, etc.), but regardless of these distinctions, for those that believe in souls, there seems to be something in common: souls appear to be non-physical entities correlated, linked, or somehow attached to a particular physical body or system, and are usually believed to give rise to consciousness, a “life force”, animism, or some power of agency.  Additionally, they are often believed to transcend material existence through their involvement in some form of an afterlife.  While it is true that souls and any claims about souls are unfalsifiable and thus are excluded from any kind of empirical investigation, let’s examine some commonly held assumptions and claims about souls and see how they hold up to a more critical examination.

Creation or Correlation of Souls

Many religious people now claim that a person’s life begins at conception (a specific stage of reproduction that science itself identified), and thus it would be reasonable to assume that if a person has a soul, that soul is effectively created at conception.  However, some also believe that all souls have co-existed for the same amount of time (perhaps since the dawn of our universe), and that souls are in some sense waiting to be linked to the physical person once they are conceived or come into existence.  Another way of expressing this latter idea is the belief that all souls have existed since some time long ago, but only after the reproductive conception of a person does that soul acquire a physical correlate or incarnation linked to it.  In any case, the presumed soul is believed to be correlated to a particular physical body (generally presumed to be a “living” body, if not a human body), and this living body has been defined by many to begin its life either at conception (i.e. fertilization), shortly thereafter as an embryo (i.e. once the fertilized egg/cell undergoes division at least once), or once it is considered a fetus (depending on the context for such a definition).  The easiest definition to use for the purposes of this discussion is to define life as beginning at conception (i.e. fertilization).

For one, regardless of the definition chosen, it seems difficult to define exactly when the particular developmental stage in question is reached.  Conception could be defined to take place once the spermatozoon’s DNA enters the egg, or perhaps not until some threshold has been reached in a particular step of the process afterward (e.g. some time after the parental genomes have combined to form the zygote’s full complement of DNA).  Either way, most proponents of the idea of a human soul seem to assume that a soul is created, or at least correlated (if created some time earlier), at the moment of, or not long after, fertilization.  At this point, the soul is believed to be correlated with or representative of the now “living” being (which is of course composed of physical materials).

At a most basic level, one could argue, if we knew exactly when a soul was created/correlated with a particular physical body (e.g. a fertilized egg), then by reversing the last step in the process that instigated the creation/correlation of the soul, we should be able to destroy/decorrelate the soul.  Also, if a soul was in fact correlated with an entire fertilized egg, then if we remove even one atom, molecule, etc., would that correlation change?  If not, then it would appear that the soul is not actually correlated with the entire fertilized egg, but rather it is correlated with some higher level aspect or property of it (whatever that may be).

Conservation & Identity of Souls

Assuming a soul is in fact created or correlated with a fertilized egg, what would happen in the case of chimerism, where two fertilized eggs fuse together in the early stages of embryonic development?  Would this developing individual have two souls?  By the definition or assumptions given earlier, if a soul is correlated with a fertilized egg in some way, and two fertilized eggs (each with their own soul) merge together, this would indicate one of a few possibilities.  Either two souls merged into one (or one was actually destroyed), which would demonstrate that the number of souls is not conserved (indicating that not all souls are eternal/immortal); or the two souls would co-exist within that one individual, which would imply that not all individuals have the same number of souls (some have one, some may have more) and thus souls don’t each have their own unique identity with a particular person; or, after the merging of fertilized eggs took place, one of the two souls would detach from or become decorrelated with its physical counterpart, and the remaining soul would get to keep the spoils of both fertilized eggs, so to speak.

In the case of identical twins, triplets, etc., a fertilized egg eventually splits, and we are left with the opposite conundrum.  It would seem that we would be starting with one soul that eventually splits into two or more, and thus there would be another violation of the conservation of the number of souls.  Alternatively, if the number of souls is indeed conserved, an additional previously existing soul (if this were the case) could become correlated with the second embryo produced by the split.  Yet another possibility would be to say that the “twins to be” (i.e. the fertilized egg prior to splitting) has two souls to start with, and when the egg splits, the souls are segregated and each pre-destined twin is given their own.

The only way to avoid these implications would be to modify the assumption given earlier regarding when a soul is created or correlated.  It would have to be defined such that a soul is created or correlated with a physical body some time after an egg is fertilized, once it is no longer possible for that egg to fuse with another fertilized egg and once it can no longer split into identical multiples (i.e. twins, triplets, etc.).  If this is true, then one could no longer say that a fertilized egg necessarily has a soul, for that wouldn’t technically be the case until some time afterward, when chimerism or monozygotic multiples were no longer possible.

If people believe in non-physical entities that can’t be seen or in any way extrospectively verified, it’s not much of a stretch to say that they can come up with a way to address these questions or reconcile these issues, with yet more unfalsifiable claims.  Some of these might not even be issues for various believers but I only mention these potential issues to point out the apparent arbitrariness or poorly defined aspects of many claims and assumptions regarding souls. Now let’s look at a few thought experiments to further analyze the concept of a soul and purported “exclusively human” attributes (i.e. “H”) as mentioned in the introduction of this post.

Conservation and Identity of “H”

Thought Experiment # 1: Replace a Neuron With a Non-Biological Analog

What if one neuron in a person’s brain is replaced with a non-biological/artificial version, that is, what if some kind of silicon-based (or other non-carbon-based) analog to a neuron was effectively used to replace a neuron?  We are assuming that this replacement with another version will accomplish the same vital function, that is, the same subjective experience and behavior.  This non-biologically-based neuronal analog may be powered by ATP (Adenosine Triphosphate) and also respond to neurotransmitters with electro-chemical sensors — although it wouldn’t necessarily have to be constrained by the same power or signal transmission media (or mechanisms) as long as it produced the same end result (i.e. the same subjective experience and behavior).  As long as the synthetic neuronal replacement accomplished the same ends, the attributes of the person (i.e. their identity, their beliefs, their actions, etc.) should be unaffected despite any of these changes to their hardware.

Regarding the soul, if souls do in fact exist and they are not physically connected to the body (although people claim that souls are somehow associated with a particular physical body), then it seems reasonable to assume that changing a part of the physical body should have no effect on an individual’s possession of that soul (or any “H” for that matter), especially if the important attributes of the individual, i.e., their beliefs, thoughts, memories, and subsequent actions, etc., were for all practical purposes (if not completely), the same as before.  Even if there were some changes in the important aspects of the individual, say, if there was a slight personality change after some level of brain surgery, could anyone reasonably argue that their presumed soul (or their “H”) was lost as a result?  If physical modifications of the body led to the loss of a soul (or of any elements of “H”), then there would be quite a large number of people (and an increasing number at that) who no longer have souls (or “H”) since many people indeed have had various prosthetic modifications used in or on their bodies (including brain and neural prosthetics) as well as other intervening mediation of body/brain processes (e.g. through medication, transplants, various levels of critical life support, etc.).

For those who think that changing the body’s hardware would somehow disconnect the presumed soul from that person’s body (or eliminate other elements of their “H”), consider that this assumption is strongly challenged by the fact that many of the atoms in the human body are replaced (some of them several times over) throughout one’s lifetime anyway.  Despite this drastic biological “hardware” change, in which our material selves are constantly being replaced with new atoms from the food we eat and the air we breathe (among other sources), we still manage to maintain our memories and our identity, simply because the functional arrangements of the brain cells (i.e. neurons and glial cells) composed of those atoms are roughly preserved over time, and thus the information contained in those arrangements and their resulting processes is preserved over time.  We can analogize this point with a computer that has had all of its hardware replaced in a way that matches or maintains its original physical configuration: because that configuration is preserved, it should also maintain its original memory, programs, and normal functional operation.  One could certainly argue that the computer in question is technically no longer the “same” computer because it no longer has any of the original hardware.  However, the information regarding the computer’s physical state, that is, the specific configuration and states of parts that allow it to function exactly as it did before the hardware replacement, is preserved.  Thus, for all practical purposes in terms of the identity of that computer, it remains the same regardless of the complete hardware change.
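As a loose illustration of this configuration-versus-substrate point (a toy sketch of my own, with entirely made-up state, not a formal argument), consider a minimal example:

```python
import copy

# Toy illustration: "identity" as preserved configuration rather than preserved hardware.

class Hardware:
    """A stand-in for a physical substrate; each instance is a distinct 'physical' object."""
    def __init__(self, state=None):
        self.state = copy.deepcopy(state) if state else {}

def replace_hardware(old: Hardware) -> Hardware:
    # Build entirely new hardware, copying only the configuration (the information).
    return Hardware(state=old.state)

original = Hardware({"memories": ["first bike ride"], "preferences": {"coffee": "black"}})
replacement = replace_hardware(original)

print(original is replacement)               # False: physically distinct objects
print(original.state == replacement.state)   # True: same configuration, same "identity"
```

The analogy is deliberately crude, but it captures the sense in which what persists is the arrangement of information rather than any particular collection of parts.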

This is an important point to consider for those who think that replacing the hardware of the brain (even if limited to a biologically sustained replacement) is either theoretically impossible, or that it would destroy one’s ability to be conscious, to maintain their identity, to maintain their presumed soul, or any presumed element of “H”.  The body naturally performs these hardware changes (through metabolism, respiration, excretion, etc.) all the time and thus the concept of changing hardware while maintaining the critical aspects of an individual is thoroughly demonstrated throughout one’s lifetime.  On top of this, the physical outer boundary that defines our bodies is also arbitrary in the sense that we exchange atoms between our outer surface and the environment around us (e.g. by shedding skin cells, or through friction, molecular desorption/adsorption/absorption, etc.).  The key idea to keep in mind is that these natural hardware changes imply that “we” are not defined specifically by our hardware or some physical boundary with a set number of atoms, but rather “we” are based on how our hardware is arranged/configured (allowing for some variation of configuration states within some finite acceptable range), and the subsequent processes and functions that result from such an arrangement as mediated by the laws of physics.

Is the type of hardware important?  It may be true that changing a human’s hardware to a non-biological version could never accomplish exactly the same subjective experience and behavior that was possible with the biological hardware; however, we simply don’t know whether this is the case.  It may be that both the type of hardware and its configuration are necessary for a body and brain to produce the same subjective experience and behavior.  However, the old adage “there’s more than one way to skin a cat” has applied to so many types of technologies and to the means used to accomplish any number of goals.  There are often several hardware types and configurations that can be used to accomplish a particular task, even if, after changing the hardware, the configuration must also be changed to accomplish a comparable result.  The question becomes: which parts or aspects of the neural processes in the brain produce subjective experience and behavior?  If this becomes known, we should be able to learn how biologically-based hardware and its configuration work together to accomplish a subjective experience and behavior, and then also learn whether non-biologically-based hardware (perhaps with its own particular configuration) can accomplish the same task.  For the purposes of this thought experiment, let’s assume that we can swap out the hardware with a different type, even if, in order to preserve the same subjective experience and behavior, the configuration must be significantly different than it was with the original biologically-based hardware.

So, if we assume that we can replace a neuron with an efficacious artificial version and still maintain our identity, our consciousness, any soul that might be present, or any element of “H” for that matter, then even if we replace two neurons with artificial versions, we should still have the same individual.  In fact, even if we replace every neuron, perhaps just one neuron at a time, eventually we would be replacing the entire brain with an artificial version and yet still have the same individual.  This person would now have a completely non-biologically based “brain”.  In theory, their identity would be the same, and they would subjectively experience reality and themselves as usual.  Having gone this far, let’s assume that we replace the rest of the body with an artificial version.  Replacing the rest of the body, one part at a time, should be a far less significant change than replacing the brain, for the rest of the body is far less complex.

It may be true that the body serves as an integral homeostatic frame of reference necessary for establishing some kind of self-object basis of consciousness (e.g. Damasio’s Theory of Consciousness), but as long as our synthetic brain is sending/receiving the appropriate equivalent of sensory/motor information (i.e. through an interoceptive feedback loop among other requirements) from the new artificial body, the model or map of the artificial body’s internal state provided by the synthetic brain should be equivalent.  It should also be noted that the range of conditions necessary for homeostasis in one human body versus another is far narrower and less individualized than the differences found between the brains of two different people.  This supports the idea that the brain is in fact the most important aspect of our individuality, and thus replacing the rest of the body should be significantly easier to accomplish and also less critical a change.  After replacing the rest of the body, we would now have a completely artificial non-biological substrate for our modified “human being”, or what many people would refer to as a “robot”, or a system of “artificial intelligence” with motor capabilities.  This thought experiment seems to suggest at least one of several implications:

  • Some types of robots can possess “H” (e.g. a soul, consciousness, free will, etc.), and thus “H” is neither uniquely human nor forever exclusive to humans.
  • Humans lose some or all of their “H” after some threshold of modification has taken place (most likely a modification of the brain).
  • “H”, as it is commonly defined at least, does not exist.

The first implication listed above would likely be roundly rejected by most people who believe in the existence of “H”, for several reasons: most people see robots as fundamentally different from living systems, they see “H” as applicable only to human beings, and they see a clear distinction between robots and human beings (although this thought experiment theoretically challenges precisely that distinction).  The second implication sounds quite implausible (even if we assume that “H” exists), as it would seem impossible to define when exactly any elements of “H” were lost based on exceeding some seemingly arbitrary threshold of modification.  For example, would the loss of some element of “H” occur only after the last neuron was replaced with an artificial version?  If the loss of “H” did occur after some specific number of neurons were removed (or after the number of neurons that remained fell below some critical minimum quantity), then what if the last neuron removed (the one that caused this critical threshold to be crossed) was biologically preserved and later re-installed, thus effectively reversing the last neuronal replacement procedure?  Would the previously lost “H” then return?

Thought Experiment # 2: Replace a Non-Biological Neuronal Analog With a Real Neuron

We could look at this thought experiment (in terms of the second implication) yet another way by simply reversing the order of the thought experiment.  For example, imagine that we made a robot from scratch that was identical to the robot eventually obtained from the aforementioned thought experiment, and then we began to replace its original non-biologically-based neuronal equivalent with actual biologically-based neurons, perhaps even neurons that were each taken from a separate human brain (say, from one or several cadavers) and preserved for such a task.  Even after this, consider that we proceed to replace the rest of the robot’s “body”, again piecewise (say, from one or several cadavers), until it was completely biologically-based to match the human being we began with in the initial thought experiment.  Would or could this robot acquire “H” at some point, or be considered human?  It seems that there would be no biological reason to claim otherwise.

Does “H” exist?  If So, What is “H”?

I’m well aware of how silly some of these hypothetical questions and considerations sound; however, I find it valuable to follow the reasoning all the way through in order to help illustrate the degree of plausibility of these particular implications, and the plausibility or validity of “H”.  In the case of the second implication given previously (that humans lose some or all of “H” after some threshold of modification), if there’s no way to define or know when “H” is lost (or gained), then nobody can ever claim with certainty that an individual has lost their “H”, and thus they would have to assume that no elements of “H” have ever been lost (if they want to err on the side of, what some may call, ethical or moral caution).  By that rationale, one would find oneself forced to accept the first implication (some types of robots can possess “H”, and thus “H” isn’t unique to humans).  If anyone denies the first two implications, it seems that they are only left with the third option.  The third implication seems to be the most likely (that “H” as previously defined does not exist), though it should be mentioned that even this third implication may be circumvented by realizing that it has an implicit loophole.  There is a possibility that some or all elements and/or aspects of “H” are not exactly what people assume them to be, and therefore “H” may exist in some other sense.  For example, what if we considered particular patterns themselves, i.e., the brain/neuronal configurations, patterns of brain waves, neuronal firing patterns, patterns of electro-chemical signals emanating throughout the body, etc., to be the “immaterial soul” of each individual?  We could look at these patterns as being immaterial if the physical substrate that employs them is irrelevant, or by simply noting that patterns of physical material states are not physical materials in themselves.

This is analogous to the concept that the information contained in a book can be represented on paper, electronically, in multiple languages, etc., and is not reliant on a specific physical medium.  This would mean that one could accept the first implication that robots or “mechanized humans” possess “H”, although it would also necessarily imply that any elements of “H” aren’t actually unique or exclusive to humans as they were initially assumed to be.  One could certainly accept this first implication by noting that the patterns of information (or patterns of something if we don’t want to call it information per se) that comprise the individual were conserved throughout the neuronal (or body) replacement in these thought experiments, and thus the essence or identity of the individual (whether “human” or “robot”) was preserved as well.

Pragmatic Considerations & Final Thoughts

I completely acknowledge that in order for this hypothetical neuronal replacement to be truly accurate in reproducing normal neuronal function (even for just one neuron), above and beyond the potential necessity of both a specific type of hardware and a specific configuration (as mentioned earlier), the non-biologically based version would presumably also have to replicate the neuronal plasticity that the brain normally possesses.  In terms of brain plasticity, there are basically four known factors involved in neuronal change, sometimes referred to as the four R’s: regeneration, reconnection, re-weighting, and rewiring.  So clearly, any synthetic neuronal version would likely involve some kind of malleable processing in order to accomplish at least some of these tasks (if not all of them to some degree), as well as possible nano-self-assembly processes if actual physical rewiring were needed.  The details of how this would be accomplished will become better known over time, as we learn more about the neuronal dynamic mechanisms involved (e.g. neural darwinism or other means of neuronal differential reproduction, connectionism, Hebbian learning, DNA instruction, etc.).
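As a very rough illustration of what “re-weighting” can look like in a model (a toy sketch of my own; real neuronal plasticity is far richer, and plain Hebbian updates need some form of normalization, such as Oja’s rule, to remain stable), consider:

```python
import numpy as np

# Minimal sketch of Hebbian "re-weighting": a synaptic weight is strengthened
# whenever pre- and post-synaptic activity coincide.  The network size, learning
# rate, and random activity below are arbitrary illustrative choices.

rng = np.random.default_rng(0)
n_inputs, n_outputs = 5, 3
weights = rng.normal(scale=0.01, size=(n_outputs, n_inputs))  # initial synaptic weights
learning_rate = 0.05

for _ in range(200):
    pre = rng.random(n_inputs)                       # pre-synaptic activity on this trial
    post = np.tanh(weights @ pre)                    # post-synaptic response (kept bounded)
    weights += learning_rate * np.outer(post, pre)   # "cells that fire together wire together"

print(weights.round(2))
```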

I think the most important thing to gain from these thought experiments is the realization of how difficult it is to take the idea of souls or “H” seriously, given the incompatibility between the traditional conception of a concrete soul or other “H” and the well-established fluid, continuous nature of the material substrates they are purportedly correlated with.  That is, all the “things” in this world, including any forms of life (human or not), are constantly undergoing physical transformation and change, and they possess seemingly arbitrary boundaries that are ultimately defined by our own categorical intuitions and subjective perception of reality.  In terms of any person’s quest for “H”, if what one is really looking for is some form of constancy, essence, or identity in any of the things around us (let alone in human beings), it seems that it is the patterns of information (or perhaps, to be more accurate, the patterns of energy), as well as the level of complexity or type of those patterns, that ultimately constitute that essence and identity.  Now, if it is reasonable to conclude that the patterns of information or energy that comprise any physical system aren’t equivalent to the physical constituent materials themselves, one could perhaps say that these patterns are a sort of “immaterial” attribute of a set of physical materials.  This seems to be as close to the concept of an immaterial “soul” as a physicalist or materialist could concede exists, since, at the very least, it involves a property of continuity and identity that somewhat transcends the physical materials themselves.

Neuroscience Arms Race & Our Changing World View

At least since the time of Hippocrates, people have recognized that the brain is the physical correlate of consciousness and thought.  Since then, psychology, neuroscience, and several inter-related fields have emerged.  There have been numerous advancements within neuroscience during the last decade or so, and in that same time frame there has also been an increased interest in the social, religious, philosophical, and moral implications that precipitate from such a far-reaching field.  Certainly the medical knowledge we’ve obtained from the neurosciences has been the primary benefit of such research efforts: we’ve learned quite a bit more about how the brain works and how it is structured, and about the neuropathologies underlying various mental illnesses, which has led to improvements in diagnosis and treatment.  However, it is the other side of neuroscience that I’d like to focus on in this post: the paradigm shift in how we are starting to see the world around us (including ourselves), and how this is affecting our goals as well as how we achieve them.

Paradigm Shift of Our World View

Aside from the medical knowledge we are obtaining from the neurosciences, we are also gaining new perspectives on what exactly the “mind” is.  We’ve come a long way in demonstrating that “mental” or “mind” states are correlated with physical brain states, and there is an ever-growing body of evidence which suggests that these mind states are in fact caused by brain states.  It should come as no surprise, then, that all of our thoughts and behaviors are also caused by these physical brain states.  It is because of this scientific realization that society is currently undergoing an important paradigm shift in its world view.

If all of our thoughts and behaviors are mediated by our physical brain states, then many everyday concepts such as thinking, learning, personality, and decision making can take on entirely new meanings.  To illustrate this point, I’d like to briefly mention the well known “nature vs. nurture” debate.  The current consensus among scientists is that people (i.e. their thoughts and behavior) are ultimately products of both their genes and their environment.

Genes & Environment

From a neuroscientific perspective, the genetic component is accounted for by noting that genes have been shown to play a very large role in directing the initial brain wiring schema of an individual during embryological development and through gestation.  During this time, the brain is developing very basic instinctual behavioral “programs”, which are physically constituted by vastly complex neural networks, and the body’s developing sensory organs and systems are also connected to particular groups of these neural networks.  These complex neural networks, which have presumably been naturally selected to benefit the survival of the individual, continue being constructed after gestation and throughout the individual’s entire ontogenetic development (albeit to lesser degrees over time).

As for the environmental component, this can be further split into two parts: the internal and the external environment.  The internal environment within the brain itself, including various chemical concentration gradients partly mediated by random Brownian motion, provides some gene expression constraints as well as some additional guidance to work with the genetic instructions to help guide neuronal growth, migration, and connectivity.  The external environment, consisting of various sensory stimuli, seems to modify this neural construction by providing a form of inputs which may cause the constituent neurons within these neural networks to change their signal strength, change their action potential threshold, and/or modify their connections with particular neurons (among other possible changes).

Causal Constraints

This combination of genetic instructions and environmental interaction and input produces a conscious, thinking, and behaving being through a large number of ongoing and highly complex hardware changes.  It isn’t difficult to imagine why these insights from neuroscience might modify our conventional views of concepts such as thinking, learning, personality, and decision making.  Prior to the developments of the last few decades, the brain was largely seen as a sort of “black box”, with its internal milieu and functional properties remaining mysterious and inaccessible.  For millennia before that, many people assumed that our thoughts and behaviors were self-caused, or causa sui.  That is, people believed that they themselves (i.e. some causally free “consciousness”, or “soul”, etc.) caused their own thoughts and behavior, as opposed to those thoughts and behaviors being ultimately caused by physical processes (e.g. neuronal activity, chemical reactions, etc.).

Neuroscience (as well as biochemistry and its underlying physics) has shed a lot of light on this long-held assumption and, as it stands, the evidence has shown the assumption to be false.  The brain is ultimately governed by the laws of physics: no chemical reaction or neural event that physically produces our thoughts, choices, and behaviors has ever been shown to be causally free of these physical constraints.  I will mention that quantum uncertainty or quantum “randomness” (if ontologically random) does provide some possible freedom from strict physical determinism.  However, these findings from quantum physics do not provide any support for self-caused thoughts or behaviors.  Rather, they merely show that those physically constrained thoughts and behaviors may never be completely predictable by physical laws, no matter how much data is obtained.  In other words, our thoughts and behaviors are produced by highly predictable (though not necessarily completely predictable) physical laws and constraints, along with some possible random causal factors.

As a result of these physical causal constraints, the conventional view of an individual having classical free will has been shattered.  Our traditional views of human attributes, including morality, choice, ideology, and even individualism, are continuing to change markedly.  Not surprisingly, many people are uncomfortable with these scientific discoveries, including members of various religious and ideological groups whose views are largely based upon, and thus depend on, the very presupposition of precepts such as classical free will and moral responsibility.  The evidence accumulating from the neurosciences is in fact showing that while people are causally responsible for their thoughts, choices, and behavior (i.e. an individual’s thoughts and subsequent behavior are constituents of a causal chain of events), they are not morally responsible in the sense of having been able to choose to think or behave differently than they did, for their thoughts and behavior are ultimately governed by physically constrained neural processes.

New World View

Now I’d like to return to what I mentioned earlier and consider how these insights from neuroscience may be drastically modifying how we look at concepts such as thinking, learning, personality, and decision making.  If our brain is operating via these neural network dynamics, then conscious thought appears to be produced by a particular subset of these neural network configurations and processes.  So as we continue to learn how to more directly control or alter these neural network arrangements and processes (above and beyond simply applying electrical potentials to certain neural regions in order to bring memories or other forms of imagery into consciousness, as we’ve done in the past), we should be able to control thought generation from a more “bottom-up” approach.  Neuroscience is definitely heading in this direction, although there is a lot of work to be done before we have any considerable knowledge of and control over such processes.

Likewise, learning seems to consist of a certain type of neural network modification (involving memory), leading to changes in causal pattern recognition (among other things) which results in our ability to more easily achieve our goals over time.  We’ve typically thought of learning as the successful input, retention, and recall of new information, and we have been achieving this “learning” process through the input of environmental stimuli via our sensory organs and systems.  In the future, it may be possible to once again, as with the aforementioned bottom-up thought generation, physically modify our neural networks to directly implant memories and causal pattern recognition information in order to “learn” without any actual sensory input, and/or we may be able to eventually “upload” information in a way that bypasses the typical sensory pathways thus potentially allowing us to catalyze the learning process in unprecedented ways.

If we are one day able to more directly control the neural configurations and processes that lead to specific thoughts as well as learned information, then there is no reason we won’t be able to modify our personalities, our decision-making abilities and “algorithms”, and so on.  In a nutshell, we may be able to modify any aspect of “who” we are in extraordinary ways (whether this is a “good” or “bad” thing is another issue entirely).  As we come to learn more about the genetic components of these neural processes, we may also be able to use various genetic engineering techniques to assist with the neural modifications required to achieve these goals.  The bottom line here is that people are products of their genes and environment, and by manipulating both of those causal constraints in more direct ways (e.g. through the use of neuroscientific techniques), we may be able to achieve previously unattainable abilities, perhaps in a relatively minuscule amount of time.  It goes without saying that these methods will also significantly affect our evolutionary course as a species, allowing us to enter new landscapes through a substantially enhanced ability to adapt.  This may be realized by finding ways to improve our intelligence, memory, or other cognitive faculties, effectively giving us the ability to engineer or re-engineer our brains as desired.

Neuroscience Arms Race

We can see that increasing our knowledge and capabilities within the neurosciences has the potential to produce drastic societal changes, some of which are already starting to be realized.  The impact that these fields will have on how we approach the problem of criminal, violent, or otherwise undesirable behavior cannot be overstated.  Trying to correct these issues by focusing our efforts on the neural or cognitive substrates that underlie them, as opposed to using the less direct and more external means (e.g. social engineering methods) that we’ve relied on thus far, may lead to much less expensive solutions, as well as solutions that can be realized much, much more quickly.

As with any scientific discovery or the technology produced from it, neuroscience has the power to bestow on us benefits as well as disadvantages.  I’m reminded of the ground-breaking efforts made within nuclear physics several decades ago, whereby physicists not only gained precious information about subatomic particles (and their binding energies) but also learned how to release enormous amounts of energy through nuclear fusion and fission reactions.  It wasn’t long after these breakthrough discoveries were made before they were used by others to create the first atomic bombs.  Likewise, while our increasing knowledge within neuroscience has the power to help society improve by optimizing our brain function and behavior, it can also be used by various entities to manipulate the populace for unethical reasons.

For example, despite the large number of free market proponents who claim that the economy need not be regulated by anything other than rational consumers and their choices of goods and services, corporations have clearly increased their use of marketing strategies that take advantage of many humans’ irrational tendencies (whether it is “buy one get one free” offers, “sales” on items that have artificially raised prices, etc.).  Politicians and other leaders have been using similar tactics by taking advantage of voters’ emotional vulnerabilities on certain controversial issues that serve as nothing more than an ideological distraction in order to reduce or eliminate any awareness or rational analysis of the more pressing issues.

There are already research and development efforts being made by these various entities in order to take advantage of these findings within neuroscience such that they can have greater influence over people’s decisions (whether it relates to consumers’ purchases, votes, etc.).  To give an example of some of these R&D efforts, it is believed that MRI (Magnetic Resonance Imaging) or fMRI (functional Magnetic Resonance Imaging) brain scans may eventually be able to show useful details about a person’s personality or their innate or conditioned tendencies (including compulsive or addictive tendencies, preferences for certain foods or behaviors, etc.).  This kind of capability (if realized) would allow marketers to maximize how many dollars they can squeeze out of each consumer by optimizing their choices of goods and services and how they are advertised. We have already seen how purchases made on the internet, if tracked, begin to personalize the advertisements that we see during our online experience (e.g. if you buy fishing gear online, you may subsequently notice more advertisements and pop-ups for fishing related goods and services).  If possible, the information found using these types of “brain probing” methods could be applied to other areas, including that of political decision making.

While these methods derived from the neurosciences may be beneficial in some cases, for instance by allowing the consumer more automated access to products that they may need or want (which will likely be a selling point used by these corporations to obtain consumer approval of such methods), they will also exacerbate unsustainable consumption and other personally or societally destructive tendencies, and they are likely to further reduce (or eliminate) whatever rational decision-making capabilities we still have left.

Final Thoughts

As we can see, neuroscience has the potential to (and is already starting to) completely change the way we look at the world.  Further advancements in these fields will likely redefine many of our goals as well as how to achieve them.  It may also allow us to solve many problems that we face as a species, far beyond simply curing mental illnesses or ailments.  The main question that comes to mind is:  Who will win the neuroscience arms race?  Will it be those humanitarians, scientists, and medical professionals that are striving to accumulate knowledge in order to help solve the problems of individuals and societies as well as to increase their quality of life?  Or will it be the entities that are trying to accumulate similar knowledge in order to take advantage of human weaknesses for the purposes of gaining wealth and power, thus exacerbating the problems we currently face?

The Fermi Paradox

For those less familiar with the term, this paradox is the apparent contradiction between our expectation of intelligent life throughout our universe and the lack of communication or contact from said intelligent life.  That is, if we are not the only intelligent life in the universe, then why haven’t we received distinguishable radio signals from these other intelligent beings?  In addressing this question, most of the ideas I will be discussing in this post are well known to those familiar with the topic, but I hope this post will spark new ideas in those who haven’t given the topic as much consideration.

Expectation of Extra-Terrestrial Intelligence

Many scientists estimate that our solar system may be but one of many billions in the galaxy, let alone the entire observable universe, let alone the entire unobservable universe.  The estimated number of stars in the observable universe is around 300 sextillion (3 × 10²³), which, as you may notice, is on the same order as Avogadro’s number.  So with all of those solar systems out there, it seems reasonable that a fraction of them may be suitable for supporting life.  The number of civilizations capable of communicating with us can be estimated by what is known as the Drake equation.  While we don’t know which numbers to plug into this equation, it at least gives us a mathematical view of the variables involved in such a calculation.
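For concreteness, here is what that calculation looks like; every parameter value below is a placeholder guess of my own, chosen only to show the arithmetic, not an estimate I’m endorsing:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# All parameter values below are placeholder guesses purely to illustrate the arithmetic.

R_star = 1.5     # average rate of star formation in our galaxy (stars per year)
f_p = 0.9        # fraction of stars with planetary systems
n_e = 0.5        # average number of potentially habitable planets per such system
f_l = 0.1        # fraction of those planets on which life actually arises
f_i = 0.01       # fraction of life-bearing planets that develop intelligent life
f_c = 0.1        # fraction of intelligent species that develop detectable technology
L = 10_000       # average lifetime (in years) of a detectable, communicating civilization

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilizations in the galaxy: {N:.3f}")
```

Swapping in different guesses changes the result by orders of magnitude, which is precisely why the equation is more useful as a way of organizing the unknowns than as a predictor.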

Another factor to look at is the age of our Sun relative to other stars in the universe.  The Sun is relatively young, having formed a mere 4.6 billion years ago, whereas most stars are between 1 and 10 billion years old, which puts the average star at around 5.5 billion years.  Either way, we can see that there are presumably a lot of stars out there that are older than the Sun, and this means that in many cases intelligent life could have had more time to evolve (potentially much more time than we’ve had on Earth).  So not only are there a large number of solar systems, but many of them are likely to be a billion or more years older than our own.  If any of those solar systems have supported intelligent life with radio technology, then we might expect to detect radio signals that have already been traveling toward us for a billion or more years.  This would mean that if a radio transmission had enough energy behind it (enormous amounts of energy at that), it may have already traveled billions of light-years, in which case we could detect those signals now, even if they are detected long after the originators have become extinct.  Such a detection would support the idea of intelligent life existing (or having once existed) elsewhere.

These reasons basically summarize our expectations for receiving communication from extra-terrestrial intelligent life.  Next, I will discuss several factors compatible with two ideas: intelligent life existing elsewhere in the universe, and the lack of evidence for receiving any communication from said intelligent life.  In other words, I will discuss several ways to resolve the Fermi paradox.

Resolving the Paradox

It seems to me that the paradox can be resolved by examining a number of factors, including: the statistically small concentration of intelligent life in the universe; self-destruction by technologically-advanced species; the anthropocentric assumption that extra-terrestrial intelligent life would behave in certain ways (including the assumption that it would desire contact with outsiders); humans’ relatively short time span with post-radio technology; problems with radio signal propagation; the Hubble constant; and various other reasons.

Small Concentration of Intelligent life in the Universe

Life requires a very narrow range of conditions, and so we’d only expect it to exist if there is a planet at the appropriate distance from a star, and if that planet contains the correct elemental starting materials (e.g. carbon, hydrogen, oxygen, nitrogen, phosphorus, etc.).  If these conditions are within some range of acceptability, Brownian motion, favorable chemical bonds, and an input of energy from a neighboring star could lead to amino acids, proto-cells, DNA, etc.  Basically, we need the particular planetary conditions necessary for abiogenesis (whatever that range of conditions may be).  What this means is that we have at least two variables with opposite effects on the outcome: a large number of prospective solar systems, and a narrow range of conditions suitable for life.  Thus, one would expect that a large number of solar systems are inhabited by life, but that this constitutes a tiny percentage of the entire population of solar systems.

On top of this, the conditions needed to support intelligent life are even scarcer, especially after we consider that out of the 3.7 billion years of life on this planet, only in the last several thousand years have we had a form of life capable of developing advanced technology (namely radio technology).  Several factors contributed to intelligent life evolving from simple living systems in order to produce advanced technology.  Among them are: a habitable terrestrial environment, a moon with consequential tidal forces that catalyzed ocean-life’s migration onto land, writing systems & language, opposable thumbs, etc.  If we lacked any of these factors, it’s easy to see how the evolution of intelligent life capable of manipulating its environment in order to develop advanced technologies would be extremely unlikely.

So it is difficult to say how likely it is to have conditions suitable for life, let alone intelligent life.  However, I think it is safe to say that the concentration of intelligent life would be rather small, that is, out of the minute concentration of life-supporting solar systems, we’d have an even smaller concentration of intelligent life-supporting solar systems.  In the next section, I’ll discuss why this minute concentration of intelligent life is also compounded with a limited window of time for the species potentially able to send radio communication (due to the eventual extinction of these intelligent species).

Self-Destruction by Technologically-Advanced Species

If there are indeed many regions in the universe that are presently supporting or have previously supported life, then we must ask ourselves another question: how long do we expect those species to thrive before they self-destruct?  In other words, if all or most technologically-advanced species inevitably reach a point in their evolution where they either exhaust all of their resources, succumb to nuclear or biological warfare, lose ecological sustainability (some may call this a Malthusian check or catastrophe), etc., is it very likely that they will be in existence long enough to design and transmit interstellar radio communication?  Or, to ask a follow-up question, if those species are alive long enough to design and transmit interstellar radio communication, how long do they have before they become extinct and their radio communication comes to a screeching halt?  The reason this matters is that intelligent life able to communicate with us may, in theory, only have a relatively small window of time before the species becomes extinct.  If this happens after just a few hundred or a few thousand years of possessing radio technology, then any radio transmission heading our way would only last for that same duration.  If we are not in existence to receive it (whether it reached Earth before we evolved into intelligent beings or whether it reaches us after we’ve become extinct), then we would have no record of it, even if radio transmission has occurred or will occur one day when we are long gone.  In other words, not only are extra-terrestrials with radio technology a necessary prerequisite for receiving a signal on Earth, but we also have to be alive and able to receive it during a particular, relatively narrow window of time.  If the average star is older than ours, then only those solar systems at the appropriate distance would have a window of radio communication capable of reaching us.

Time Span of Radio Technology

In 1879, David Edward Hughes discovered that sparks would generate a radio signal, which led to the spark-gap transmitter.  Shortly thereafter, in 1888, Heinrich Hertz, using a more rigorous scientific approach than Hughes, became the first person to demonstrate conclusively that radio waves could be transmitted (and detected) through free space.  This paved the way for Marconi, who, after conducting various experiments in the 1890s, finally produced a technically and commercially viable form of radio technology.

So we can see that radio technology only surfaced within the last 150 years or so.  Human civilization as we define it today (e.g. the utilization of agriculture, writing, weapons, and other advanced technologies) has been around for about 10,000 years.  This means that only in the latter 1.5% of our time as civilized human beings have we possessed radio technology.  Relative to the 50–100,000 years of Homo sapiens’ existence, this is only the latter 0.2% or so.  Relative still to the 3.7 billion years of having any form of life on this planet, it is a mere blink of an eye.
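For transparency, the time fractions quoted above come from a quick calculation like this (all figures are rough; I’ve used 75,000 years as a midpoint for Homo sapiens’ existence):

```python
# Quick check of the rough time fractions quoted above.

radio_years = 150
civilization_years = 10_000
homo_sapiens_years = 75_000      # midpoint of the 50,000-100,000 year range
life_on_earth_years = 3.7e9

print(f"Fraction of civilized history with radio:  {radio_years / civilization_years:.2%}")
print(f"Fraction of Homo sapiens' existence:       {radio_years / homo_sapiens_years:.2%}")
print(f"Fraction of life's history on Earth:       {radio_years / life_on_earth_years:.2e}")
```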

So in short, we can see that radio technology wasn’t available to us until very, very recently.  What does this say about the Fermi Paradox?  Well it tells me that once intelligent life has evolved, it can take many tens of thousands of years (or longer) for advanced communicative technologies to exist.  Moreover, it tells me that intelligent life may exist elsewhere in the universe, even if extra-terrestrials have yet to discover radio technology (e.g. electromagnetic wave transmission).  After all, intelligent life seems to require many special conditions in order to develop radio technology, including: writing systems, opposable thumbs (not necessary but it makes it much easier), a terrestrial environment (it’s difficult to fathom how our level of technological progress could be attained in an aqueous environment), etc.  It’s easy enough to see that humans may have never discovered radio technology, as many special conditions were needed in order to do so.  Only very recently were all of those conditions met.

Radio Wave Propagation in Space

Due to the inverse square law governing electromagnetic radiation passing through free space, radio signals tend to become indistinguishable from background noise after several light-years.  They may be clearly transmitted as far as a few hundred light-years if the signal is amplified and aimed in a specific direction, but even transmission over distances as short as several light-years requires extremely large amounts of power (on the order of gigawatts).  What this means is that many terawatts of power would most likely be needed to send a clear signal over the distances required to reach intelligent life elsewhere in the universe.  So not only must we assume that this extra-terrestrial intelligent life has discovered radio technology, but also that it has the energy resources available to propagate radio signals over vast interstellar distances of many thousands or even millions of light-years.  The key thing to note here is that the signal has to be strong enough to overcome the background noise that we’re already receiving constantly.
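To make the power problem more concrete, here is a small sketch of my own showing how quickly the flux from an isotropic transmitter falls off with distance; it ignores antenna gain, bandwidth, and receiver sensitivity, and the power and distance values are purely illustrative:

```python
import math

# Inverse-square law: the flux from an isotropic transmitter falls off as 1/d^2.

LIGHT_YEAR_M = 9.461e15  # metres in one light-year

def flux_at_distance(power_watts: float, distance_ly: float) -> float:
    """Power per unit area (W/m^2) at a given distance from an isotropic source."""
    d = distance_ly * LIGHT_YEAR_M
    return power_watts / (4 * math.pi * d ** 2)

for d_ly in (4.2, 100, 1000):
    print(f"1 GW transmitter at {d_ly:>6} ly: {flux_at_distance(1e9, d_ly):.3e} W/m^2")
```

Each factor of ten in distance costs a factor of one hundred in received flux, which is why directed, extremely high-power transmissions (or very sensitive receivers) are required over interstellar distances.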

Radio waves can also be reflected, refracted, diffracted, absorbed, polarized, scattered, etc., by various materials along their path to our planet.  So in addition to the energy needed to propagate these radio waves, we must also assume that the waves have a reasonably free path, so that little or no information is lost during their propagation to Earth.  Now, the mean free path in outer space, that is, the average distance a photon can travel without being affected by matter, is around 10 billion light-years, so I’m willing to admit that the idea of radio waves being absorbed or affected by matter in any significant way is unlikely.  However, any signal that does reach us arrives superimposed on all of the other electromagnetic radiation crossing our detectors at the same time.  If radio signals were being sent from all over the universe, with some heading in our direction, how many of them would remain distinguishable amidst radiation of every other frequency, amplitude, and origin?  The universe is full of radiation moving in just about every direction, so a weak transmission from far away could easily be buried in that background.  If that is the case, we may end up receiving signals that are practically indistinguishable from the background radiation we already detect, that is, signals that don’t appear to carry much (if any) recoverable information.  We may simply have trouble separating “noise” from potential “information”, and for all we know, some of the radio signals sent in our direction are effectively lost within that background.  If so, there is little or no hope of recovering any true information from them.

The Hubble Constant

As it turns out, cosmologists have determined that the observable universe is expanding, and not only is it expanding, the expansion is accelerating.  The present rate of expansion is characterized by the Hubble constant.  One important thing to realize is that space-time itself is expanding, not just the distances between galaxies moving within that space-time.  This matters because, according to Hubble’s law, recession velocity grows with distance, so sufficiently distant regions of space recede from us faster than the speed of light.  While this may seem to violate Einstein’s Special Theory of Relativity, it is actually completely compatible with it: Special Relativity only forbids anything from travelling faster than light through space-time, whereas here it is space-time itself that is expanding, and no information is carried faster than light.  Combined with the accelerating expansion, this places a horizon on what any observer can ever receive.  That is, beyond a certain distance, electromagnetic radiation transmitted toward an observer can never “outrun” the expansion of the intervening space, and thus it never actually reaches that observer.  What this ultimately means is that any radio waves transmitted from beyond this horizon will never be able to reach us.  It does not matter how much power is used for the transmission; it is physically impossible for that information to reach us.
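As a rough sketch of the scales involved (the value of the Hubble constant below is an approximate assumption of mine), the distance at which the Hubble-law recession velocity reaches the speed of light can be computed directly:

```python
# Sketch: the "Hubble radius", i.e. the distance at which the Hubble-law recession
# velocity v = H0 * d reaches the speed of light, using an approximate present-day H0.

c_km_s = 299_792.458     # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s per megaparsec (approximate value)
MPC_TO_LY = 3.262e6      # light-years per megaparsec

hubble_radius_mpc = c_km_s / H0
hubble_radius_gly = hubble_radius_mpc * MPC_TO_LY / 1e9
print(f"Hubble radius: ~{hubble_radius_gly:.1f} billion light-years")
```

Note that this is smaller than the roughly 46-billion-light-year comoving radius of the observable universe mentioned below, because the light we receive today was emitted when those regions were much closer to us.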

If we account for the Hubble constant, it turns out that our observable universe has a radius of approximately 46 billion light-years.  Past this point we can’t see anything (including radio signals).  If we take into account various theories of inflation, it is believed that the true universe (that which our “observable universe” is a constituent of) is many orders of magnitude larger.  So even if the universe had intelligent life distributed at a concentration of only one planet per “observable universe”-sized volume, we would still have billions upon billions of intelligent civilizations in existence, and yet there would be no possible way to know about their existence.
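
As a purely illustrative back-of-the-envelope calculation (the factor of 1,000 below is an arbitrary assumption chosen only to make the arithmetic concrete; inflationary estimates are typically far larger than this), the numbers scale like this:

```python
# Hypothetical arithmetic sketch: how many observable-universe-sized volumes
# fit inside a universe whose radius is some assumed multiple of ours?
r_observable = 46e9          # light-years, approximate radius of the observable universe
radius_factor = 1_000        # ASSUMPTION for illustration; inflation suggests far more

volume_ratio = radius_factor ** 3    # volume scales with the cube of the radius
print(volume_ratio)                  # 1,000,000,000 observable-universe volumes

# Even at just one intelligent civilization per observable-universe volume,
# this assumed factor alone already implies ~a billion civilizations,
# none of which could ever be detected from here.
print(volume_ratio * 1)
```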

Summary

So it appears that there are numerous factors that can account for and resolve the Fermi paradox.  Does this mean that there is undoubtedly intelligent life elsewhere in the universe?  Of course not.  It seems clear to me, however, that the vast size of the universe (especially the unobservable universe) and the degree of homogeneity we’ve observed thus far indicate that the probability of extra-terrestrial life is extremely high, regardless of whether or not we have received a radio transmission from them.  So in my opinion, the “Fermi paradox” doesn’t appear to be much of a paradox at all.

Consciousness and the Laws of Physics

When most people hear the word “consciousness”, they tend to think of what I refer to as “mental consciousness”, that is, the mental process of awareness, self-awareness, or experience in general.  However, I prefer to think of consciousness as a fundamental type of awareness (i.e. an ability to respond to stimuli).  On top of this, I believe this property of awareness applies to “non-living” systems as well.  One idea I’d like to discuss in this post is the idea that “consciousness”, or a “universal consciousness” exists as some driving force in the universe such that experience, awareness, response to stimuli (e.g. physical motion), etc., precipitate from it.  In a nutshell, I equate this universal consciousness with the laws of physics.

Traditional Consciousness

If we look at the traditional view of consciousness, it seems to be the “I” (a combination of an unconscious and conscious driver), or more appropriately the “me”, that is, some concept of “self” that subsequently experiences and/or drives all of the constituent processes that make up our experience.  If we look at consciousness from a physicalist perspective, we are led to the idea that consciousness is nothing more than particular physical processes produced and mediated by the brain.  What is important here is that the fundamental physical processes that produce and mediate consciousness are ultimately driven by the laws of physics.  That is, the motions of all the molecules, atoms, electrons, ions, etc., which are intricately interacting to produce this mental consciousness, are all governed by the laws of physics.

Mental Consciousness

Looking at mentally conscious beings such as ourselves, we have incoming sensory data/stimuli leading to perceptions, which eventually coalesce with our pattern-recognition systems such that cognitive processes (e.g. concepts/thought, language, problem solving, learning, memory, etc.) begin to drive our behavior.  That behavior is based on our brain’s response not only to this incoming information, but also to its relationship with any information that has been previously acquired.  We could summarize this by saying that we have conscious thoughts or motivations (as well as unconscious motivations) serving as complex stimuli which we physically respond to by behaving in various ways.  In other words, we started with elementary stimuli which led to more complex stimuli, finally driving our behavior as mentally conscious beings.

Consciousness of Fundamental Living Systems

If we then look at brainless organisms (e.g. bacteria, etc.), we see some similar properties of responding to stimuli thus driving micro-scale motion and any other aspects of behavior at that scale.  We seem to have lost the possibilities of perception and self-awareness with this type of organism, but the property of sensation and awareness, that is, the ability to respond to its environment (through electro-photo-chemical signals), is conserved.

Consciousness of Non-Living Systems

Finally, if we look at energy quanta (e.g. photons, gravitons, etc.) as well as the smaller-scale constituents of matter (e.g. atoms, electrons, subatomic particles, etc.), we see that they respond to the fundamental forces governed by the laws of physics.  If the magnitude and direction of those forces change, the response changes.  Once again, the property of awareness (i.e. an ability to respond to stimuli) is conserved.

An Evolving Consciousness

Looking at this in terms of the evolution of consciousness, we started with particles and energy quanta that were fundamentally “aware” of the fundamental forces.  When organized in a particular way (given a particular environment), this led to a higher level of awareness (e.g. electro-photo-chemical sensation) as seen in cellular organisms.  Then, upon further organization, an even higher level of awareness was reached (e.g. perception and thought), as is seen in the multi-cellular organisms that possess brains.  Eventually, this led to particular brain configurations which yielded the highest level of awareness we’ve observed thus far (e.g. self-awareness).  It is at this point (self-awareness) that a being’s mental consciousness includes the experience of realizing that it is a mentally conscious being.  One could perhaps describe this type of awareness as a profound way in which the universe has become aware of itself.

Final Thoughts and Questions

We could say that every “level” of consciousness or awareness that seems to exist is but one step in a series driven by the fundamental universal consciousness which increasingly approximates complete awareness of the universe (or at least some maximal level).  This leads me to several questions:

– Are there any other levels of awareness that we are not aware of?

– If there are other types of awareness in which we are merely constituents of some “higher” level, can we come to know those higher levels, or are there epistemological limitations in place that prevent this kind of knowledge (analogous to brain cells being unaware of the self-producing brain that they constitute)?

– If the universe has a finite amount of time before all “higher” levels of awareness are reduced to the fundamental form from which they came (due to the second law of thermodynamics leading to an inevitable heat death), what will the climax of awareness consist of, or at the very least, what are some plausible climaxes of awareness?

– If the universe is cyclical (e.g. Big Bang and Big Crunch ad infinitum), will every iteration consist of the same climax of awareness, even if the laws of physics change (e.g. physical constants), and despite the possibility of there being some level of ontological randomness?

Essay on Time – Part III: Time Travel and its Limitations

Time travel and the Laws of Physics

As the “Twin Paradox” and Einstein’s Theory of Relativity imply, time travel to the future is possible if enough energy is available.  As for time travel to the past, while it seems to be the most coveted hypothetical time-travel capability, it also seems to be the only one that is impossible (in my opinion).  I will discuss why I believe this to be the case, specifically how it pertains to certain physical laws and theories, including Einstein’s Theory of Relativity, the Law of Conservation of Mass and Energy, and the Law of Causality.

One method proposed by some for traveling back in time is to take Einstein’s time dilation “one step further”, that is, by traveling faster than the speed of light, the time dilation might theoretically reverse the arrow of time.  To better picture this, recall that traveling closer to the speed of light slows down the passage of time, and reaching the speed of light would appear to stop it.  If time dilation, or this deceleration of time, were to continue in the direction implied (slowing down to a stop), then continuing the trend by traveling faster than light would cause the arrow of time to reverse, thus making time travel to the past possible.  Unfortunately, this faster-than-light travel would violate Einstein’s Theory of Relativity, as one of the primary elements of the theory is the assertion that the speed of light is the fastest speed that can be attained by anything moving through space.  Furthermore, Relativity asserts mathematically that it would take an infinite amount of energy to accelerate a mass (e.g. a time traveler) to the speed of light, which implies that exceeding the speed of light would require even more than that.  Since you can’t have an infinite amount of energy, let alone more than an infinite amount, traveling at or faster than the speed of light is impossible.
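
The relevant relationship here is the relativistic energy of a moving mass, which blows up as the speed approaches that of light:

\[ E = \gamma m c^{2}, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \;\longrightarrow\; \infty \quad \text{as } v \to c. \]

The energy required grows without bound before the speed of light is ever reached, which is why no finite amount of energy suffices, let alone enough to go beyond it.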

Relativity aside, if we found some other way to travel back in time and were able to exist in a previous “version” of the universe, I think that we would violate the Law of Conservation of Mass and Energy, because we, as the time traveler, would be adding our own mass to a previous version of the universe which should have a fixed amount of mass and energy over time.  The only way around this would be to somehow sacrifice matter in the previous version of the universe that one travels to, that is, the time traveler’s body would have to be assembled out of matter already located in that past version of the universe.  If this occurred, however, we would no longer exist in a previous version of the universe and would by definition have failed to time travel to “the past”.  It would appear to closely match the past, but it would be a moment in time that had never existed, and the causal chain would be altered beyond what we can possibly comprehend.  So time travel to the past appears to be impossible even if this particular law of physics were upheld, as we would be forced to alter the past (in order to satisfy the law), thus preventing us from traveling to a real moment of the past.

Finally, if we were to find a way to travel back in time and somehow solve the aforementioned issues, we’d still have a problem with causality.  If a time traveler were to go back to the past and actually exist in that causal chain, the “Butterfly Effect” would immediately change the course of history such that the “present” from which the time traveler came would no longer exist.  If this were the case, then it seems unreasonable to assume that the time traveler would still have time traveled in the first place.  Let’s take a look at a simple causal diagram to appreciate this scenario.

From this diagram, we can see that time traveling to the past would create a new causal chain up to and including a new “present”.  This new causal chain would no longer be causally connected to the old “present”.  This would mean that the time from which the time traveler initially left (i.e. the old present) would no longer exist.  Wouldn’t this imply that the time traveler (their sense of self, the “I”, the “me”, etc.) as well as the trip itself never would have been?  I see no way around this dilemma.

So it appears that time travel to the past is physically impossible for a number of reasons.  At least time travel to the future has some promise, as it doesn’t appear to violate any of these physical laws and is implied as a possibility by the consequences of Relativity.  This type of time travel seems to be limited only by the energy required to accelerate the time-traveling matter to a high enough velocity for a long enough period of time, and then to return the time traveler to the previous frame of reference (e.g. Earth).  Or, if the time traveler utilized the effect of gravitational time dilation, their time travel would be limited by the gravitational field of the celestial body they chose to travel to, as well as the time it would take them to get back to the previous frame of reference (e.g. Earth).  Either way, time travel to the future is possible simply by moving through space in a particular way.
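
For the gravitational route, one standard expression (a sketch for a clock hovering at distance r from a non-rotating mass M, i.e. the Schwarzschild case) gives a sense of what a stronger gravitational field buys the traveler:

\[ d\tau = dt \sqrt{1 - \frac{2GM}{r c^{2}}}. \]

The closer \(2GM/(r c^{2})\) gets to 1 (the more compact the mass, or the closer one hovers to it), the less proper time elapses for the traveler relative to a distant observer, and the further into that observer’s future the traveler effectively jumps.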

Final thoughts

Regarding temporal experiences, it appears to me that memory is the most important of the mental requirements for having a mental frame of reference, that is, for making an experience of the past and present possible (as well as a concept of the future).  I think that how this memory is stored and retrieved in the brain, the amount or types of memory available, and any physiological changes to this memory (whether induced by psycho-pharmacological substances or otherwise) no doubt affect our temporal experiences in profound ways.  Memory also appears to transcend physical time by providing a means for experiencing an ever-changing temporal rate.

The Theory of Relativity suggests that physical time does not exist for entities moving at the speed of light, because such entities (whose speed is a constant in all frames of reference) have no physical frame of reference of their own and have an infinite time dilation between themselves and all inertial frames of reference.  All physical time for entities that are not moving at the speed of light is relative to one another, based on relative velocity, acceleration, and gravity.  Time also appears to pop in and out of existence due to Einstein’s mass-energy equivalence, as matter is converted into energy and vice versa.

Thus, both mental and physical frames of reference are needed in order for a temporal experience to exist.

As for time travel, it appears to be possible, but only when traveling into the future, if we are to uphold the Law of Conservation of Mass and Energy, Einstein’s Theory of Relativity, and the Law of Causality.

Essay on Time – Part II: Temporal Experience and Space-time

Physical Frame of Reference

Relativity

Memory may be one of several mental requirements for any experience of the temporal dimension, but how exactly is time objectively related to the physical universe (i.e. 3D-space)?  As I mentioned in the first post, I believe that we can consider time to be a dimension if we find a proper way to relate time to the existing dimensions, such that we have a foundation to work off of.  To begin, let’s consider an inductive definition of a dimension so we can at least see one foundation we have for defining our three spatial dimensions.

If we start out with an empty space and place one discrete point in it, we can refer to that point as a zero-dimensional object.  If we take this object and drag it in any direction, the path it takes can be collectively described as a one-dimensional object (e.g. a line, ray, or segment).  By dragging this new object in a direction it doesn’t already span, the path it takes can be collectively described as a two-dimensional object (e.g. a plane or planar region).  Finally, by dragging this object in yet another new direction, the path it takes can be collectively described as a three-dimensional object (e.g. a polyhedron, ellipsoid, etc.).  In general, we can drag an n-dimensional object in a new direction and collectively describe the path this object takes as an (n+1)-dimensional object.
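
One way to make this induction a bit more precise (a sketch, using the unit interval as the “drag”): if X is an n-dimensional object, dragging it along a new, independent direction sweeps out the product

\[ X \times [0,1], \qquad \dim\big(X \times [0,1]\big) = \dim(X) + 1, \]

so a point becomes a segment, a segment becomes a square, and a square becomes a cube: \([0,1]^{0} \to [0,1]^{1} \to [0,1]^{2} \to [0,1]^{3}\).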

So this is one foundation for our three spatial dimensions.  It is difficult to take this induction a step further in our imagination, as we have trouble even conceptualizing a four-dimensional (or higher-dimensional) object, so we can assume for now that our spatial dimensions are limited to a quantity of three.  This raises the question: how can we reconcile these spatial dimensions with time?  Isn’t time independent of space?  Not exactly.

I believe that time has been reconciled with the three spatial dimensions in at least one way, most notably within Einstein’s Theory of Relativity.  Within this theory, Einstein suggested that these four dimensions are unified, and they eventually came to be referred to as “space-time”.  Up until relativity was discovered, all physical motion and causality in space were seen to progress uni-directionally along an arrow of time (i.e. from past to present to future), which was presumed to elapse at a fixed rate throughout the entire universe.  The three extensions of space constituted our physical universe, and the rate of all motion within that space was measured by, or mediated by, time.

So classical physics (i.e. “pre-relativistic” physics) implied that there was indeed an “absolute time” or “absolute present”.  It was believed that if a person experienced one minute of time (and even confirmed it with an extremely precise atomic clock), then everyone else in the world (and indeed at any location throughout the universe) also experienced one minute of elapsed time.  To put it another way, it was believed that “clock time” or “proper time” was a 100% objective attribute that was also constant across all frames of reference.  Once relativity was discovered, the intuitive concept of an objective (and constant) time was replaced with the much less intuitive concept of a relative time (albeit still objective in some ways).  This relativity is demonstrated in several bizarre phenomena, including relative-velocity time dilation, gravitational (and other non-inertial) time dilation, and length contraction.  I plan to discuss the first two of these phenomena.

Relativity and the speed of light

It should be noted that one of the main reasons for this physical/temporal relativity and the resultant phenomena is the fact that the speed of light is the fastest speed at which anything can travel through space, and that it is measured to be the same constant speed in all frames of reference.  To illustrate the importance of this, consider the following example.

If a driver were in a race-car traveling at 200 m.p.h. and a bullet were traveling head-on toward the car, also moving at 200 m.p.h., a stationary bystander would measure the speed of the bullet to be 200 m.p.h., but the driver would measure the bullet to be traveling at 400 m.p.h.  This is because the race-car is moving toward the bullet, and thus the velocities (car and bullet) are additive from the driver’s inertial frame of reference.  The impact on the car would be the same (ignoring wind resistance) whether the car was stationary with the bullet moving toward it at 400 m.p.h., or the car and bullet were traveling toward each other, each at 200 m.p.h.  If we replace the bullet in this example with a pulse of light, this additive property of velocities disappears.  Both the race-car driver and the stationary bystander would measure the light pulse traveling at the speed of light (roughly 300,000 km/s), although the frequency of the light pulse would be measured to be higher by the driver in the race-car.  It is time dilation (together with length contraction) that compensates for this, that is, time appears to pass more slowly in any frame of reference in motion relative to the observer, such that the “additive velocity” paradox is resolved.  If both the driver and the stationary bystander were holding clocks that the other person could see, each would see the other person’s clock ticking more slowly than their own.  It makes no difference whether we say that the driver or the bystander is the frame of reference that is “moving”; the point is that there is motion relative to one another.  If we start the observations after the driver has reached a constant speed, we could just as easily assume that the race-car driver is “stationary” and that it is the bystander, race track, and Earth that are “moving” relative to the driver.
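
This non-additivity near the speed of light is captured by the relativistic velocity-addition formula; plugging in a light pulse (u = c) shows why both observers measure exactly c:

\[ u' = \frac{u + v}{1 + \dfrac{uv}{c^{2}}}, \qquad u = c \;\Rightarrow\; u' = \frac{c + v}{1 + v/c} = c. \]

At everyday speeds (like the 200 m.p.h. car and bullet), the denominator is so close to 1 that the ordinary additive rule is an excellent approximation.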

Motion is relative, and thus time is relative as well.  This temporal relativity is a concept that goes completely against all common sense and everyday experience, but it has been empirically verified many times over.  As opposed to the example with the race-car driver traveling at a constant speed of 200 m.p.h., the consequences of relativity become far more bizarre when any of the frames of reference under consideration are non-inertial, that is, when a frame of reference is accelerating relative to another.

Space-time and the “Twin Paradox”

The most bizarre example involving non-inertial frames of reference within the Theory of Relativity is that of the supposed “Twin Paradox” or “Traveling Twin”.  There are two basic versions of this story, so I’ll start with the most commonly used.

Let’s imagine that there are two 20-year-old identical twin sisters, Mary and Alice, where one twin (Mary) travels into outer space at near light speed and the other (Alice) remains on Earth.  It just so happens that the speed Mary travels at, in combination with her non-inertial motion (i.e. acceleration) when leaving and when returning to Earth, causes a permanent time dilation such that she has aged less when she finally returns to Earth (a faster travel speed creates a more noticeable effect).  Let’s say that at some point Mary stops her journey in outer space, turns around, and eventually makes it back to Earth after Alice has waited for 50 years.  We can also assume that Mary traveled at a speed such that she has only aged 1 year by the time she returns to Earth.  Alice is now 70 years old, but Mary steps off of the space shuttle having aged to only 21!  It is worth noting that both twins experienced their own time elapsing in a normal fashion (i.e. neither of them would feel time moving in slow motion); to both Mary and Alice, nothing strange seems to be going on.  Mary simply ages one year while Alice ages 50 years.  We can define the “time” that Mary experiences (or observes passing on her clock) during her space travel as the “proper time”, and the “time” that Alice experiences (or observes passing on her clock) on Earth as the “coordinate time” (the proper time within a defined stationary frame of reference).  It is the relative difference between these two times that is the measured time dilation.  This time dilation is one of the consequences of relativity, and it demonstrates a very clear relationship between space and time.  What I find most amazing is that the only requirement for accomplishing this “time travel” to the future was the ability to move through space at high enough speeds (and return to the stationary reference frame).  This would require large amounts of energy, but the point is that it is physically possible nevertheless.  It should be noted that this same result could have been accomplished if the traveling twin (Mary) had simply gone to a planet with a significantly stronger gravitational field than Earth’s.  Any non-inertial frame of reference, whether due to a changing velocity or due to gravity, produces this time dilation (and future time travel) phenomenon relative to any inertial frame of reference.
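
As a quick sanity check on the numbers in this story (a minimal sketch that assumes a constant cruising speed and ignores the acceleration and turnaround phases), 50 years of Alice’s time per 1 year of Mary’s time corresponds to a Lorentz factor of 50, which requires a speed of roughly 99.98% of the speed of light:

```python
import math

c = 299_792_458.0            # speed of light, m/s

# Ratio of coordinate time (Alice, 50 years) to proper time (Mary, 1 year)
# gives the required Lorentz factor for constant-speed travel.
gamma = 50.0
v = c * math.sqrt(1.0 - 1.0 / gamma**2)

print(f"required speed: {v:.0f} m/s  ({v / c:.5f} c)")   # about 0.99980 c
```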

Time is not independent of the entities in the universe

The “Traveling Twin” scenario illustrates several interesting things about our universe.  It shows that physical time, as well as temporal experiences, elapse at different rates across the entire universe as a consequence of Relativity.  It also suggests that rather than existing independently of us, time actually “travels with us” in a way, because it is unified with the space we are moving through (and with how we move through it), as well as with the curvature of that space due to gravity.  In my opinion, since the time dilation becomes infinite as the velocity of an entity approaches the speed of light, this suggests that no time exists for anything moving at light speed, that is, the proper time is zero even if the coordinate time (of a sub-light-speed frame of reference) is infinite.  This may imply that all massless particles (e.g. photons, gluons, and the hypothetical graviton) exist with no proper time, even though an infinite amount of time may have passed in inertial frames of reference.  So time does not appear to exist for all entities, only for matter (e.g. fermions, hadrons, etc.).
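
The claim about vanishing proper time follows directly from the special-relativistic time-dilation relation (for constant velocity):

\[ \Delta\tau = \Delta t \sqrt{1 - \frac{v^{2}}{c^{2}}} \;\longrightarrow\; 0 \quad \text{as } v \to c, \]

i.e. however much coordinate time \(\Delta t\) elapses in our frame, the proper time \(\Delta\tau\) along a light-speed trajectory is zero.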

Time creation and destruction

For those who are familiar with Einstein’s mass-energy equivalence, photons with sufficient energy can give rise to matter/anti-matter pair production.  For example, two gamma-ray photons can combine to form an electron-positron pair.  This means that particles traveling at the speed of light (e.g. photons) can combine to form matter, which travels much slower than light.  If this is true, then particles that exist with a proper time of zero can change into particles that exist with a non-zero proper time, that is, due to the pair production, the proper time becomes non-zero, and thus time is created (in the sense that a sub-light-speed reference frame becomes definable) as soon as photons are transformed into matter.  If proper time is limited to matter, then temporal experiences must also be limited to matter, as no process or experience can exist if no proper time elapses for that experience.
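
For a concrete sense of the energies involved (a textbook threshold condition, stated here for the simple case of two equal-energy photons colliding head-on), pair production requires that the photons supply at least the rest energy of the electron-positron pair:

\[ 2E_{\gamma} \ge 2 m_{e} c^{2} \approx 1.022\ \text{MeV}, \]

i.e. each gamma-ray photon needs at least about 511 keV, the rest energy of a single electron.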

There is a flip side to this coin, however.  Let’s say we take the aforementioned electron-positron pair that was produced and bring the two particles back together (within a certain range of momenta, etc.).  They will collide, annihilate each other, and two gamma-ray photons will be produced.  So, as matter is transformed into energy (e.g. photons), time is destroyed in a sense (due to the absence of a definable reference frame at light speed).  Thus, matter and photons are interchangeable, and that means that proper time (a requirement for temporal experiences) can “pop” in and out of existence for the entities considered.  Thus there appears to be no conservation of proper time (unlike mass and energy, which must be conserved in a closed system).  If we imagine all matter transforming into some form of light-speed energy, and thus all proper time disappearing, this lack of conservation of time becomes quite clear.

Here is the link for Part III: Time Travel and its Limitations