Non-separability vs. Reductionism: an Inference from Quantum Mechanics

Reductionism has been an extremely effective approach for learning about the world around us.  The information we’ve obtained through this method has allowed us to predict the course of nature more effectively, and it appears to have been invaluable in our quest to better understand the universe.  However, I believe that quantum mechanics has revealed several limitations of reductionism, demonstrating it to be inapplicable to the fundamental nature of the universe.  This inapplicability results from the “non-separability” property that appears to emerge from within quantum mechanical theory.  While this is just my inference from some of the quantum physical findings, the implications of this “non-separability” (i.e. “one-ness”) are vast, far-reaching, and incredibly significant in terms of the “lens” we use to look at the world around us.

I feel that I need to define what I mean by “reductionism”, as there are a few interpretations (some more ambiguous than others) as well as specific types of reductionism that people may refer to more than others.  Among these types are theoretical, methodological, and ontological reductionism; I’m mostly concerned with the latter two.  When I use the term “reductionism”, I am referring to the view that the nature of complex things can be reduced to the interaction of their parts.  An alternate definition that I’ll use in combination with the former is the view that the whole (a complex system) is simply the sum of its parts (some may see this as the opposite of “holism” or “synergy”, in which the whole is greater than the sum of its parts).  The use of the word “parts” is what I find most important in these definitions.  We can see an obvious problem if we attempt to unify the concept of “parts” (a required component of reductionism) with the concept of “no parts” (an implied component of non-separability).  My use of the word “non-separability” implies that any parts that appear to exist are illusory, because the concept of a “part” implies something separate from everything else (separate from “other parts”, etc.).  To simplify matters, one could equate the non-separability I’m referring to with the concept of “one-ness” (i.e. there are no “parts”, there is only the whole).

Reductionism is an idea that appears to have been around for centuries.  It has penetrated the realms of science and philosophy from the time of Thales of Miletus, who held that the world was, or was composed of, water at a fundamental level, through the scientific revolution and onward with the advent of Isaac Newton’s classical mechanics (or more appropriately “Newtonian” mechanics) and the equivalent Lagrangian and Hamiltonian formulations that ensued.  I think it would be safe to say that most scientists from that point forward were convinced that anything could be broken down into fundamental parts, providing us with a foundation for 100% predictability (determinism).  After all, if a system seemed too complex to predict, breaking it down into simpler components seemed to be the easiest route to success.  Even though scientists couldn’t predict anything with 100% certainty, many believed it would be possible if the instrumentation used were sufficiently precise.  They thought that perhaps there were fundamental constituent building blocks that followed a deterministic model.  This view would change drastically in the early part of the 20th century, when the foundations of quantum mechanics were first established by Bohr, de Broglie, Heisenberg, Planck, Schrödinger, and many others.  New discoveries such as Heisenberg’s famous “Uncertainty Principle”, as well as the concepts of wave-particle duality, quantum randomness, superposition, wave function collapse, entanglement, non-locality, and others, began to repudiate classical ideas of causality and determinism.  These discoveries also implied that certain classical concepts like “location”, “object”, and “separate” needed to be reconsidered.  This appeared to be the beginning of the end of reductionism in my view, or more specifically, it demonstrated the fundamental limitations of such an approach.  One thing I want to point out is that I completely acknowledge that reductionism was the approach that eventually led to these quantum discoveries.  After all, we wouldn’t have been able to discover the quantum realm at all had we not “scaled” down our research (i.e. investigated physically smaller and smaller regimes) using reductionism.  While it did in fact lead us to the underlying quantum realm, it is what we choose to do with this new information that will affect our overall view of the world around us.

One interesting thing about the concept of reductionism is its relationship to the scientific method.  To clarify: at the very least, we can say that in order to have separate objects and observers (as we require in the scientific method), we must dismiss the idea of non-separability, thus employing the idea of separateness (required for reductionism).  The effectiveness of the scientific method presumes that the observer is not affecting the observed (i.e. that they are isolated from each other).  Scientists are well aware of the “observer effect”, whereby minimizing this effect defines a better-controlled experiment and eliminating it altogether defines a perfectly controlled experiment.  Experimentation within quantum mechanics is no exception.  In fact, quantum mechanical measurements appear to have what I like to call a “100% observer effect”.  Allow me to clarify.  The superposition principle in quantum mechanics states that a physical system (e.g. an electron) exists in all its possible states simultaneously, and only upon measuring that system will it give a result that corresponds to one of the possible states or configurations.  This unique effect produced by measurement is referred to as the “collapse of the wave function”, at least according to the Copenhagen interpretation of quantum mechanics.  I think this “100% observer effect” is worth noting when considering the scientific method and the reductionist elements within it.  There appears to be an inescapable interconnectedness between the observer and the observed, which forces us to recognize the limitations of the scientific method within this realm.  At the very least, within the quantum realm, our classical idea of a “controlled” experiment no longer exists.
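To make this concrete, here is a minimal textbook sketch (my illustration, not part of the original argument) of the superposition principle for a two-state system such as an electron’s spin:

$$\lvert \psi \rangle = \alpha \lvert \uparrow \rangle + \beta \lvert \downarrow \rangle, \qquad \lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1$$

Before measurement, the electron is described by the full superposition $\lvert \psi \rangle$; upon measurement, the wave function “collapses” to $\lvert \uparrow \rangle$ with probability $\lvert \alpha \rvert^2$ or to $\lvert \downarrow \rangle$ with probability $\lvert \beta \rvert^2$ (the Born rule).  The outcome cannot be disentangled from the act of measuring, which is precisely the “100% observer effect” described above.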

Another interesting discovery within quantum mechanics was Heisenberg’s uncertainty principle, which states that we can’t know a pair of complementary properties of a “particle” simultaneously beyond some minimum degree of uncertainty.  One example of complementary properties is position and momentum: the more accurately we measure one property, the less accurately we can measure the other, and vice versa.  Here again within the quantum realm we see an interconnectedness, although this time between physical properties themselves.  Just as in the case of the observer and the observed, what we once thought of as completely separate and independent turns out to be unavoidably interconnected.
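For the position-momentum pair, this takes its standard quantitative form (where $\hbar$ is the reduced Planck constant):

$$\Delta x \, \Delta p \geq \frac{\hbar}{2}$$

so shrinking $\Delta x$ (the uncertainty in position) necessarily inflates $\Delta p$ (the uncertainty in momentum), no matter how precise our instrumentation becomes.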

The last concept I want to discuss within quantum theory is quantum entanglement.  If two particles come into physical contact, interact, or are created in a specific way, they can become entangled, or linked together in very specific ways (much like the aforementioned complementary properties, position and momentum).  In this case, the linkage between the two particles is a shared quantum state in which one particle can’t be fully described without considering the state of the other.  This shared quantum state “collapses” as soon as one of the particles’ states is measured.  For example, if we have two electrons that become entangled and we measure one of them to be spin up, the other will by default be spin down.  Prior to measurement, there is an equal probability that the first particle will be measured spin up or spin down.  Again, according to the Copenhagen interpretation of quantum mechanics, the state that this electron assumes upon measurement is completely random.  However, once the first electron’s spin has been measured, there is a 100% chance that the other electron in the entangled pair will have the anti-correlated spin, even though in isolation it should still be a 50% chance, as was the case for the first electron.  At the time this was discovered, many physicists (most notably Einstein) were convinced that there was some form of hidden-variables theory which could determine which state the electron would assume upon measurement.  Unfortunately for this view, Bell’s theorem (together with the experiments it inspired) showed that local hidden variables cannot reproduce the observed correlations.  This left only one alternative for an underlying determinism (if there was one): a theory of non-local hidden variables.  Non-locality, however, is so counter-intuitive due to its acausal nature that even if it were somehow deterministic, it would not make sense from a reductionist (i.e. locally interacting parts) perspective.  Thus, the reductionist approach, that is, the assumption that complex systems can be explained by breaking them down into smaller yet causally connected (locally interacting) parts, is inadequate for explaining the phenomena pervading the quantum realm.
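The two-electron example corresponds to what textbooks call the singlet state; writing it out (a standard result, not my own) makes the anti-correlation explicit:

$$\lvert \psi \rangle = \frac{1}{\sqrt{2}} \left( \lvert \uparrow \rangle_1 \lvert \downarrow \rangle_2 - \lvert \downarrow \rangle_1 \lvert \uparrow \rangle_2 \right)$$

Measuring electron 1 yields spin up or spin down with probability 1/2 each, but each term in the state pairs one electron’s “up” with the other’s “down”, so once electron 1’s result is known, electron 2’s is fixed with certainty, no matter how far apart the two are.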

The unique yet explicit case of quantum entanglement not only suggests that the idea of separate “parts” is flawed, but also that the interconnectedness exhibited is non-local, which I believe to be extremely significant not only from a scientific and philosophical viewpoint, but from a spiritual perspective as well.  If “something” is non-locally interconnected with “something” else, does it not imply that the two are actually “one”?  Perhaps this is an example of merely seeing two different faces of the same coin.  While they may appear to be separated in 3D space, it does not mean that they have to be separated in a dimension outside our physical reality or experience.  To put it another way, non-locality suggests the possibility of extra spatial dimensions if we are to negate the alternative, namely that the speed of light is being violated within 3D space by faster-than-light information transfer (e.g. one electron communicating to the other instantaneously such that the appropriate anti-correlation exists between their spin states).  If two particles that seem to be “separate” in our physical reality are in fact “one”, then we could surmise that other sets of particles, or even all particles for that matter (those we do not currently define to be “entangled”), are entangled in an unknown way.  There may be an infinite number of extra spatial dimensions to account for all unknown types of “entanglement”.  We could hypothesize in this case that every seemingly separate particle is actually interconnected as “one” entity.  While we may never know whether this is the case, it’s an interesting thought.  It would imply that the degrees of separation we observe are fallacious, simply a result of the connection being hidden.  It would be analogous to thinking that the continents are separate pieces of land when, in fact, deep in the ocean, we can clearly see the continuity between all of them.

If we consider the Big Bang theory, the “singularity” that supposedly existed prior to expansion would be a more intuitive way of looking at this homogeneous entity I describe as “one”.  Perhaps after expansion it began to take on an appearance of reducible separateness (within three dimensions of space, anyway), and my contention that this separateness is illusory appears to be demonstrated in quantum phenomena, despite the fact that the idea is entirely counter-intuitive to our everyday experience within the 3D-limited scope of our physical reality.  I feel that it’s important to recognize this illusion of separateness from a philosophical perspective and utilize this view to help realize that we are all “one” with each other and the entire universe.  While I may have no prescription for how to use this information (as I believe it will require self-discovery to have any real meaning to you), it’s just the way I feel on the matter.  Take it or leave it.  Life goes on.

Knowledge and the “Brain in a Vat” scenario

Many philosophers have considered the possibility of the “brain in a vat” (BIV) scenario: that our experience of reality could be the result of our brains being in a vat, connected to electrodes controlled by some mad scientist, thus creating the illusion of a physical body, a universe, etc.  Many variations of this idea have been presented, and they appear to be similar to, or based on, the “evil demon” proposed by Descartes in his Meditations on First Philosophy back in 1641.

There is no way to prove or disprove this notion, because all of our “knowledge” (if any exists at all, depending on how we define it) would be limited to the mad scientist’s program of sensory inputs.  What exactly is “knowledge”?  If we can decide on what qualifies as “knowledge”, then another question I’ve wondered about is:

How do we modify or shape the definition of “knowledge” to make it compatible with the BIV scenario?

Many might say that we have to have an agreed-upon definition of “knowledge” in the first place before we modify it for some crazy hypothetical scenario.  I understand that there is disagreement in the philosophical/epistemological community regarding how we define “knowledge”.  To simplify matters, let’s start with Plato’s well-known (albeit controversial) definition, which basically says:

Knowledge = Justified True Belief

It would help if Plato had spelled out an agreed-upon convention for what counts as “justified”, “true”, and “belief”.  It seems reasonable to assume that a “belief” is any proposition or premise an individual holds to be true.  We can also assume that “true” implies that the belief is factual, that is, that the belief has been confirmed to correspond with “reality”.  The term “justified” is a bit trickier, and concepts such as externalism vs. internalism, probability, evidentialism, etc., indeed arise when theories of justification are discussed.  It is the “true” and “justified” concepts that I think are of primary importance (as I think most people, if not everyone, can agree on what a “belief” is), and how they relate to the BIV scenario (or any similar variation of it) in our quest to properly define “knowledge”.
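To keep the three conditions straight in what follows, the Platonic definition can be written schematically (my own shorthand, not Plato’s):

$$K_S(p) \iff T(p) \wedge B_S(p) \wedge J_S(p)$$

where $S$ is a subject, $p$ is a proposition, $T(p)$ says that $p$ is true, $B_S(p)$ says that $S$ believes $p$, and $J_S(p)$ says that $S$ is justified in believing $p$.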

Is this definition creation/modification even something that can be accomplished?  Or will any modification necessary to make it compatible with the BIV scenario completely strip the word “knowledge” of all practical meaning?  If the modification does strip the word of most of its meaning, then does that say anything profound about what “knowledge” really is?  I think that, if anything, it may allow us to step back and look at the concept of “knowledge” in a different way.  My contention is that it will demonstrate that knowledge is really just another subjective concept which we can claim to possess based on nothing more than an understanding of and adherence to definitions, and that those definitions are based on hierarchical patterns observed in nature.  This is in contrast to the current assumption that knowledge is some form of objective truth we strive to obtain, with a deeper meaning that may (but need not) be metaphysical in some way.

How does the concept of “true” differ in the “real world” vs. the BIV’s “virtual world”?

Can any beliefs acquired within the BIV scenario be considered “true”?  If not, then we would have to consider the possibility that nothing “true” exists.  This isn’t necessarily the end of the world, although for most people it may be a pill that is a bit too hard to swallow.  Likewise, if we consider the possibility that at least one belief in our reality is true (I’m betting that most people believe this to be the case), then we must accept that “true beliefs” may exist within the BIV scenario.  The bottom line is that we can never know whether we are BIVs or not.  This means that if we believe true beliefs exist, we should modify the definition so that it applies within the BIV scenario.  Perhaps we can say that something is “true” only insofar as it corresponds to “our reality”.  If there are multiple levels of reality (e.g. our reality as well as that of the mad scientist who is simulating our reality), then perhaps there are multiple levels of truth.  If this is the case, then either some levels of truth are more or less “true” than others, or they all contain an equally valid truth.  The latter possibility seems most reasonable, as the former seems to negate the strict meaning of “true”: something is either true, false, or neither (if a third option exists), and there is no state that lies in between “true” and “false”.  By this rationale, our concept of what is true and what isn’t is as valid as the mad scientist’s.

Perhaps the mad scientist would disagree, but based on how we define true and false, our claim of “equally valid truth” seems reasonable.  This may illustrate (to some) that our idea of “true beliefs” does not hold any fundamentally significant value, as they are only “true beliefs” based on our definitions of particular words and concepts (again, ultimately based on hierarchical patterns observed in nature).  If I define something to be “true”, then it is true.  If I point to what looks like a tree (even if it’s a hologram and I don’t know this) and say “I’m defining that to be a ‘tree’”, then it is in fact a tree and I can refer to it as such.  Likewise, if I define its location to be called “there”, then it is in fact there.  Based on these definitions (among others), I think it would be a completely true statement (and thus a true belief) to say “I KNOW that there is a tree over there”.  However, if I failed to define (in detail) what a “tree” was, or if my definition of “tree” somehow excluded “a hologram of a tree” (because the definition was too specific), then I could easily make errors in terms of what I inferred to know or not know.  This may seem obvious, but I believe it is extremely important to establish these foundations before making any claims about knowledge.

Turning our attention to the concept of “justified”, one could argue that in this case I wouldn’t need any justification for the aforementioned belief to be true, as my belief is true by definition (i.e. there isn’t anything that could make this particular belief false other than changing the definitions of “there”, “tree”, etc.).  Or we could say that the justification is the fact that I’ve defined the terms such that I am correct.  In my opinion, this would imply 100% justification (for this belief), which I don’t think can be accomplished in any way other than through this definitional method.  In every other case we will be less than 100% justified, and there will only be an arbitrary way of grading or quantifying that justification (100% is not arbitrary at all, but to say that we are “50% justified” or “23% justified” or any value other than 100% would be arbitrary and thus meaningless).  I will add that it is possible for one belief to be MORE or LESS justified than another, but we can never quantify how much more or less justified it is, nor will we ALWAYS be able to say that one belief is more or less justified than another.  Perhaps most importantly, we must realize that our most common means of justification is empiricism, the scientific method, etc.  That is, our justification seems to be based inductively on the hierarchical patterns we observe in nature.  If certain patterns are observed and are repeatable, then the beliefs ascertained from them will not only be pragmatically useful, they will be the most justified.

When we undertake the task of defining knowledge, or more specifically the task of analyzing Plato’s definition, I believe it is useful to play devil’s advocate and recognize that scenarios can be devised whereby the requirements of “justified”, “true”, and “belief” have all been met and yet no “true knowledge” exists.  These hypothetical scenarios are represented by the well-known “Gettier Problem”.  Without going into too much detail, let’s examine a possible scenario to see where the problem arises.

Let’s imagine that I am in my living room and I see my wife sitting on the couch, but it just so happens that what I’m looking at is actually nothing more than a perfect hologram of my wife.  However, my wife IS actually in the room (she’s hiding behind the couch, even though I can’t see her hiding).  Now, if I were to say “My wife is in the living room”, my belief would technically be “justified” to some degree (I do see her on the couch, after all, and I don’t know that what I’m looking at is actually a hologram).  My belief would also be “true”, because she is actually in the living room (hiding behind the couch).  This could be taken as fulfilling Plato’s requirements for knowledge: belief, truth, and justification.  However, most people would argue that I don’t actually possess any knowledge in this example, because it is a COINCIDENCE that my belief is true; or, to be more specific, its “true-ness” is not related to my justification (i.e. that I see her on the couch).  Its “true-ness” is based on the coincidence of her actual presence behind the couch, which meets the requirement of “being in the living room” (albeit without my knowing it).

How do we get around this dilemma, and can we do so in a way that at least partially (if not completely) preserves Plato’s requirements?  One way would be to remove the element of COINCIDENCE entirely.  If we change our definition to:

Knowledge = Justified Non-Coincidentally True Belief

then we have solved the problem presented by Gettier.  Removing this element of coincidence in order to define knowledge properly (or at least more appropriately) has been accomplished in one way or another by various responses to the “Gettier Problem”, including Goldman’s “causal theory”, Lehrer and Paxson’s “defeasibility condition”, and many others which I won’t discuss in detail here.  Overall, I think non-coincidence is an important and necessary distinction to make in order to define knowledge properly.
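In the shorthand introduced earlier, the modification amounts to adding a single condition (again, my own notation):

$$K_S(p) \iff T(p) \wedge B_S(p) \wedge J_S(p) \wedge \neg C_S(p)$$

where $C_S(p)$ says that $p$’s truth is merely coincidental relative to $S$’s justification.  In the hologram example, $T$, $B$, and $J$ all hold, but so does $C$ (it is my wife’s hiding spot, not the hologram I saw, that makes the belief true), so the modified definition correctly denies me knowledge.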

Alternatively, another response to the “Gettier Problem” is to say that while it appears I didn’t actually possess knowledge in the example above (because of the coincidence), this could be because I never had sufficient justification to begin with, as I failed to consider the possibility that I was being deceived on any number of levels of varying significance (e.g. hologram, brain in a vat, etc.).  This lack of justification would mean that I’ve failed to satisfy the Platonic definition of knowledge, and thus no “Gettier Problem” would exist in that case.  However, I think it’s more pragmatic to trust our senses on a basic level and say that “seeing my wife in the living room” (regardless of whether it may actually have been a hologram I was seeing and thus wasn’t “real”) warrants a high level of justification for all practical purposes.  After all, Plato never seems to have specified how “justified” something needs to be before it meets the requirement necessary to call it knowledge.  Had Plato considered the possibility of the “brain in a vat” scenario (or a related scenario more befitting the century he lived in), would he have said that no justification exists and thus no knowledge exists?  I highly doubt it, since it seems like an exercise in futility to define “knowledge” at all if it were assumed to be non-existent or unattainable.  Lastly, if one’s justification is in fact based on hierarchical patterns observed in nature, then the more detailed or thoroughly defined those patterns are (i.e. the more scientific/empirical data, models, and explanatory power available from them), the more justified one’s beliefs will be and the more useful that knowledge will be.

In summary, I think that knowledge is nothing more than an attribute we possess based solely on an understanding of and adherence to definitions, and thus it is entirely compatible with the BIV scenario.  Definitions do not require a metaphysical explanation, nor do they need to apply in any way to anything outside of our reality (i.e. metaphysically).  They are nothing more than human constructs.  As long as we are careful in how we define terms, adhere to those definitions without contradiction, and understand the definitions we create (through the use of hierarchical patterns observed in nature), then we have the ability to possess knowledge, regardless of whether we are nothing more than BIVs.  The only knowledge we can claim must be “justified non-coincidentally true belief” (to use my modified version of the Platonic definition, thus accounting for the “Gettier Problem”).  The most significant realization here is that the only knowledge we can PROVE is based on, and limited by, the definitions we create and our use of them within our claims of knowledge.

This proven knowledge would include, but NOT be limited to, Descartes’ famous claim “cogito ergo sum” (“I think, therefore I am”), which much of Western philosophy takes to be the foundation for all knowledge.  I disagree that this is the foundation for all knowledge, as I believe the foundation to be the understanding and proper use of definitions and nothing more.  Descartes had to define (or use a commonly accepted definition of) every word in that claim, including what “I” is, what “think” is, what “am” is, etc.  Did his definition of “I” include the unconscious portion of his self, or only the conscious portion?  I’m aware that Freudian theory didn’t exist at the time, but my question still stands.  Personally, I would define “I” as the culmination of both our conscious and unconscious minds.  Yet the only “thinking” we’re aware of is conscious (which I would call the “me”), so based on my definitions of “I” and “think”, I would disagree with his claim, because my unconscious mind doesn’t “think” at all (and even if it did, the conscious mind doesn’t perceive it, so I wouldn’t say “I think” when referring to my unconscious mind).  When I am thinking, I am conscious of those thoughts.  So there are potential problems with Descartes’ assertion depending on the definitions he chooses for the words used in it.  This further illustrates the importance of definitions at a foundational level, and of the hierarchical patterns observed in nature that led to those definitions.

My views on knowledge may seem unacceptable or incomplete, but these are the conclusions I’ve arrived at in order to avoid the “Gettier Problem” as well as to remain compatible with the BIV scenario or any other metaphysical possibility that we have no way of falsifying.  While I’m aware that anything said about the metaphysical is seen by some philosophers as completely and utterly pointless, my goal in making the definition of knowledge compatible with the BIV scenario is merely to illustrate that if knowledge exists in both “worlds” (and our world is nothing but a simulation), then the only knowledge we can prove has to be based on definitions, which are human constructs based on hierarchical patterns observed in our reality.