Looking Beyond Good & Evil

Nietzsche’s Beyond Good and Evil serves as a thorough overview of his philosophy and is certainly one of his better works (although I think Thus Spoke Zarathustra is much more fun to read).  It’s worth reading (if you haven’t already) to put a fresh perspective on many widely held philosophical assumptions that are often taken for granted.  While I’m not interested in all of the claims he makes (and he makes many, in nearly 300 aphorisms spread over nine chapters), there are at least several ideas that I think are worth analyzing.

One theme presented throughout the book is Nietzsche’s disdain for classical conceptions of truth and any form of absolutism or dogmatism.  With regard to truth, or his study of the nature of truth (what we could call his alethiology), he subscribes to a view he called perspectivism.  For Nietzsche, there are no such things as absolute truths but rather only different perspectives about reality and our understanding of it.  So he insists that we shouldn’t get stuck in the mud of dogmatism, and should instead try to view what is or isn’t true with an open mind and from as many points of view as possible.

Nietzsche’s view here is in part fueled by his belief that the universe is in a state of constant change, as well as his belief in the fixity of language.  Since the universe is in a state of constant change, language generally fails to capture this dynamic essence.  And since philosophy is inherently connected to the use of language, false inferences and dogmatic conclusions will often manifest from it.  Wittgenstein belonged to a similar school of thought (likely building off of Nietzsche’s contributions) where he claimed that most of the problems in philosophy had to do with the limitations of language, and therefore, that those philosophical problems could only be solved (if at all) through an in-depth evaluation of the properties of language and how they relate to its use.

This school of thought certainly has merit, given that language has a relatively fixed syntactic structure yet a dynamic, probabilistic semantic structure.  We need language to communicate our thoughts to one another, and this requires some kind of consistency or stability in its syntactic structure.  But the meaning behind the words we use is established through use, through context, and ultimately through associations between probabilistic conceptual structures and relatively stable or fixed visual and audible symbols (written or spoken words).  Since the semantic structure of language has fuzzy boundaries, and yet is attached to relatively fixed words and grammar, it produces the illusion of a reality that is essentially unchanging.  And this results in the kinds of philosophical problems and dogmatic thinking that Nietzsche warns us of.

It’s hard to disagree with Nietzsche’s view that dogmatism is to be avoided at all costs, and that absolute truths are generally specious at best (let alone dangerous), and philosophy owes a lot to Nietzsche for pointing out the need to reject this kind of thinking.  Nietzsche’s rejection of absolutism and dogmatism is made especially clear in his views on the common conceptions of God and morality.  He points out how these concepts have changed a lot over the centuries (where, for example, the meaning of good has undergone a complete reversal throughout its history), despite the fact that throughout that time, the proponents of those particular views of God or morality believed that these concepts had never changed and would never change.

Nietzsche believes that all change is ultimately driven by a will to power, where this will is a sort of instinct for autonomy, and which also consists of a desire to impose one’s will onto others.  So the meaning of these concepts (such as God or morality) and countless others have only changed because they’ve been re-appropriated by different or conflicting wills to power.  As such, he thinks that the meaning and interpretation of a concept illustrate the attributes of the particular will making use of those concepts, rather than some absolute truth about reality.  I think this idea makes a lot of sense if we regard the will to power as not only encompassing the desires belonging to any individual (most especially their desire for autonomy), but also the inferences they’ve made about reality, its apparent causal relations, etc., which provide the content for those desires.  So any will to power is effectively colored by the world view held by the individual, and this world view or set of inferred causal relations includes one’s inferences pertaining to language and any meaning ascribed to the words and concepts that one uses.

Even more interesting to me is the distinction Nietzsche makes between what he calls an unrefined use or version of the will to power and one that is refined.  The unrefined form of a will to power takes the desire for autonomy and directs it outward (perhaps more instinctually) in order to dominate the will of others.  The refined version, on the other hand, takes this desire for autonomy and directs it inward toward oneself, manifesting as a kind of cruelty which causes a person to constantly struggle to make themselves stronger and more independent, and which provides them with a deeper character and perhaps even a deeper understanding of themselves and their view of the world.  Both manifestations of the will to power seem to aim at maximizing one’s power over as much as possible, but Nietzsche believed the latter, refined version to be superior and ultimately a more effective form of power.

We can better illustrate this view by considering a person who tries to dominate others to gain power in part because they lack the ability to gain power over their own autonomy, and then compare this to a person who gains control over their own desires and autonomy and therefore doesn’t need to compensate for any inadequacy by dominating others.  A person who feels a need to dominate others is in effect dependent on those subordinates (and dependence implies a certain lack of power), but a person who increases their power over themselves gains more independence and thus a form of freedom that is otherwise not possible.

I like this internal/external distinction that Nietzsche makes, but I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, but with the added caveat that this autonomy serves to fulfill a desire for maximal causal power by harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try and change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try and change ourselves and our view of the world (up to and including changing our desires if we find them to be misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and then we could conceivably regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that I think that our view of the world as well as our actions and desires can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could say that our will to power depends on our predictions of the world and its many apparent causal relations.  We could equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires). Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if they subjugate the other’s will to power by their own.  And one can also increase the success of their predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing our desires and actions as well.

Another thing to consider about a desire for autonomy is that it is better formulated if it includes whatever is required for its own sustainability.  Dominating other wills to power will often serve to promote a sustainable autonomy for the dominator, because those other wills are then less likely to become dominators themselves and reverse the direction of dominance, and this preserves the autonomy of the dominating will to power.  This shows how this particular external expression of a will to power could be naturally selected for (under certain circumstances at least), which Nietzsche himself even argued (though in an anti-Darwinian form, since genes are not the unit of selection here, but rather behaviors).  This type of behavioral selection would explain its prevalence in the animal kingdom, including in a number of primate species aside from human beings.  I do think however that we’ve found many ways of overcoming the need or impulse to dominate, and it has a lot to do with having adopted social contract theory, since in my view it provides a way of maximizing the average will to power for all parties involved.

Coming back to Nietzsche’s take on language, truth, and dogmatism, we can also see that an increasingly potent will to power is more easily achievable if it is able to formulate and test new tentative predictions about the world, rather than being locked into some set of predictions (which is dogmatism at its core).  Being able to adapt one’s predictions is equivalent to considering and adopting a new point of view, a capability which Nietzsche described as inherent in any free spirit.  It also amounts to being able to more easily free ourselves from the shackles of language, just as Nietzsche advocated, since new points of view affect the meaning that we ascribe to words and concepts.  I would add that new points of view can also increase our chances of making more successful predictions that constitute our understanding of the world (and ourselves), because we can test them against our previous world view and see if this leads to more or less error, better parsimony, and so on.

Nietzsche’s hope was that one day all philosophy would be flexible enough to overcome its dogmatic prejudices, its claims of absolute truths, including those revolving around morality and concepts like good and evil.  He sees these concepts as nothing more than superficial expressions of one particular will to power or another, and thus he wants philosophy to eventually move itself beyond good and evil.  Personally, I am a proponent of an egoistic goal theory of morality, which grounds all morality on what maximizes the satisfaction and life fulfillment of the individual (which includes cultivating virtues such as compassion, honesty, and reasonableness), and so I believe that good and evil, when properly defined, are more than simply superficial expressions.

But I agree with Nietzsche in part, because I think these concepts have been poorly defined and misused such that they appear to have no inherent meaning.  And I also agree with Nietzsche in that I reject moral absolutism and its associated dogma (as found in religion most especially), because I believe morality to be dynamic in various ways, contingent on the specific situations we find ourselves in and the ever-changing idiosyncrasies of our human psychology.  Human beings often find themselves in completely novel situations and cultural evolution is making this happen more and more frequently.  And our species is also changing because of biological evolution as well.  So even though I agree that moral facts exist (with an objective foundation), and that a subset of these facts are likely to be universal due to the overlap between our biology and psychology, I do not believe that any of these moral facts are absolute because there are far too many dynamic variables for an absolute morality to work even in principle.  Nietzsche was definitely on the right track here.

Putting this all together, I’d like to think that our will to power has an underlying impetus, namely a drive for maximal satisfaction and life fulfillment.  If this is true then our drive for maximal autonomy and control over our experience and understanding of the world serves to fulfill what we believe will maximize this overall satisfaction and fulfillment.  However, when people are irrational, dogmatic, and/or are not well-informed on the relevant facts (and this happens a lot!), this is more likely to lead to an unrefined will to power that is less conducive to achieving that goal, where dominating the wills of others and any number of other immoral behaviors overtakes the character of the individual.  Our best chance of finding a fulfilling path in life (while remaining intellectually honest) is going to require an open mind, a willingness to criticize our own beliefs and assumptions, and a concerted effort to try and overcome our own limitations.  Nietzsche’s philosophy (much of it at least) serves as a powerful reminder of this admirable goal.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part III)

In the first post in this series I re-introduced the basic concepts underlying the Predictive Processing (PP) theory of perception, in particular how the brain uses a form of active Bayesian inference to form predictive models that effectively account for perception as well as action.  I also mentioned how the folk psychological concepts of beliefs, desires and emotions can fit within the PP framework.  In the second post in this series I expanded the domain of PP a bit to show how it relates to language and ontology, and how we perceive the world to be structured as discrete objects, objects with “fuzzy boundaries”, and other more complex concepts all stemming from particular predicted causal relations.

It’s important to note that this PP framework differs from classical computational frameworks for brain function in a very big way, because the processing and learning steps are no longer considered separate stages within the PP framework, but rather work at the same time (learning is effectively occurring all the time).  Furthermore, classical computational frameworks for brain function treat the brain more or less like a computer which I think is very misguided, and the PP framework offers a much better alternative that is far more creative, economical, efficient, parsimonious and pragmatic.  The PP framework holds the brain to be more of a probabilistic system rather than a deterministic computational system as held in classical computationalist views.  Furthermore, the PP framework puts a far greater emphasis on the brain using feed-back loops whereas traditional computational approaches tend to suggest that the brain is primarily a feed-forward information processing system.

Rather than treating the brain like a generic information processing system that passively waits for new incoming data from the outside world, the PP framework holds that the brain stays one step ahead of the game, having already formed predictions about the incoming sensory data through a very active and creative learning process built up from past experiences via active Bayesian inference.  And rather than relying on some kind of serial processing scheme, this involves primarily parallel neuronal processing schemes, resulting in predictions that have a deeply embedded hierarchical structure and relationship with one another.  In this post, I’d like to explore how traditional and scientific notions of knowledge can fit within a PP framework as well.
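To make the feedback-loop idea above a little more concrete, here is a minimal sketch of error-driven prediction updating, the core move in predictive coding.  This is purely illustrative: the function name, the scalar representation of a prediction, and the simple learning-rate update rule are my own assumptions, not a model taken from the PP literature or from these posts.

```python
# Minimal sketch of prediction-error minimization (illustrative only).
# A "prediction" here is just a number; real predictive-coding models
# are hierarchical and operate over many dimensions at once.

def update_prediction(prediction, sensory_input, learning_rate=0.1):
    """Nudge a prediction toward incoming sensory data in proportion
    to the prediction error (a single error-driven update step)."""
    error = sensory_input - prediction           # prediction error signal
    return prediction + learning_rate * error    # error-driven correction

prediction = 0.0
for observation in [1.0, 1.0, 1.0, 1.0]:         # repeated sensory evidence
    prediction = update_prediction(prediction, observation)

# After several exposures the prediction has moved toward the input,
# partially "explaining away" the sensory signal.
```

The point of the sketch is just that learning and processing are the same operation here: every pass through the loop is simultaneously a perception (comparing prediction to input) and a learning step (reducing future error), which is the contrast with classical separate-stage architectures drawn above.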

Knowledge as a Subset of Predicted Causal Relations

It has already been mentioned that, within a PP lens, our ontology can be seen as basically composed of different sets of highly differentiated, probabilistic causal relations.  This allows us to discriminate one object or concept from another, as they are each “composed of” or understood by the brain as causes that can be “explained away” by different sets of predictions.  Beliefs are just another word for predictions, with higher-level beliefs (including those that pertain to very specific contexts) consisting of a conjunction of a number of different lower level predictions.  When we come to “know” something then, it really is just a matter of inferring some causal relation or set of causal relations about a particular aspect of our experience.

Knowledge is often described by philosophers using various forms of the Platonic definition: Knowledge = Justified True Belief.  I’ve talked about this to some degree in previous posts (here, here) and I came up with what I think is a much better working definition for knowledge which could be defined as such:

Knowledge consists of recognized patterns of causality that are stored into memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by empirical evidence (i.e. successful predictions made and/or goals accomplished through the use of said recalled patterns).

We should see right off the bat how this particular view of knowledge fits right in to the PP framework, where knowledge consists of predictions of causal relations, and the brain is in the very business of making predictions of causal relations.  Notice my little caveat however, where these causal relations should positively and consistently correlate with “reality” and be supported by empirical evidence.  This is because I wanted to distinguish all predicted causal relations (including those that stem from hallucinations or unreliable inferences) from the subset of predicted causal relations that we have a relatively high degree of certainty in.

In other words, if we are in the game of trying to establish a more reliable epistemology, we want to distinguish between all beliefs and the subset of beliefs that have a very high likelihood of being true.  This distinction however is only useful for organizing our thoughts and claims based on our level of confidence in their truth status.  And for all beliefs, regardless of the level of certainty, the “empirical evidence” requirement in my definition given above is still going to be met in some sense because the incoming sensory data is the empirical evidence (the causes) that support the brain’s predictions (of those causes).

Objective or Scientific Knowledge

Within a domain like science however, where we want to increase the reliability or objectivity of our predictions pertaining to any number of inferred causal relations in the world, we need to take this same internalized strategy of modifying the confidence levels of our predictions by comparing them to the predictions of others (third-party verification) and by testing these predictions with externally accessible instrumentation using some set of conventions or standards (including the scientific method).

Knowledge then, is largely dependent on or related to our confidence levels in our various beliefs.  And our confidence level in the truth status of a belief is just another way of saying how probable such a belief is to explain away a certain set of causal relations, which is equivalent to our brain’s Bayesian prior probabilities (our “priors”) that characterize any particular set of predictions.  This means that I would need a lot of strong sensory evidence to overcome a belief with high Bayesian priors, not least because it is likely to be associated with a large number of other beliefs.  This association between different predictions seems to me to be a crucial component not only for knowledge generally (or ontology as mentioned in the last post), but also for our reasoning processes.  In the next post of this series, I’m going to expand a bit on reasoning and how I view it through a PP framework.
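The claim above, that a belief with high Bayesian priors requires a lot of strong sensory evidence to overcome, can be illustrated with a single application of Bayes’ rule.  The numbers and function below are hypothetical, chosen only to show the asymmetry between strongly and weakly held beliefs; they aren’t drawn from any model in these posts.

```python
# Illustrative Bayes' rule update: posterior probability of a belief
# after observing one piece of evidence.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(belief | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# The same contrary evidence (4x more likely if the belief is false)
# barely budges a belief with a high prior, but substantially shifts
# a weakly held one.
strong = bayes_update(0.99, 0.2, 0.8)   # stays close to 0.99
weak = bayes_update(0.50, 0.2, 0.8)     # drops well below 0.5
```

This is one way to read the “web of beliefs” point as well: a belief entangled with many successful predictions effectively carries a high prior, so isolated contrary observations move it very little, while a freestanding belief can be revised by a single observation.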

Knowledge: An Expansion of the Platonic Definition

In the first post I ever wrote on this blog, titled: Knowledge and the “Brain in a Vat” scenario, I discussed some elements concerning the Platonic definition of knowledge, that is, that knowledge is ultimately defined as “justified true belief”.  I further refined the Platonic definition (in order to account for the well-known Gettier Problem) such that knowledge could be better described as “justified non-coincidentally-true belief”.  Beyond that, I also discussed how one’s conception of knowledge (or how it should be defined) should consider the possibility that our reality may be nothing more than the product of a mad scientist feeding us illusory sensations/perceptions with our brain in a vat, and thus, that how we define things and adhere to those definitions plays a crucial role in our conception and mutual understanding of any kind of knowledge.  My concluding remarks in that post were:

“While I’m aware that anything discussed about the metaphysical is seen by some philosophers to be completely and utterly pointless, my goal in making the definition of knowledge compatible with the BIV scenario is merely to illustrate that if knowledge exists in both “worlds” (and our world is nothing but a simulation), then the only knowledge we can prove has to be based on definitions — which is a human construct based on hierarchical patterns observed in our reality.”

While my views on what knowledge is or how it should be defined have changed somewhat in the past three years or so since I wrote that first blog post, in this post, I’d like to elaborate on this key sentence, specifically with regard to how knowledge is ultimately dependent on the recall and use of previously observed patterns in our reality as I believe that this is the most important aspect regarding how to define knowledge.  After making a few related comments on another blog (https://nwrickert.wordpress.com/2015/03/07/knowledge-vs-belief/), I decided to elaborate on some of those comments accordingly.

I’ve elsewhere mentioned how there is a plethora of evidence that suggests that intelligence is ultimately a product of pattern recognition (1, 2, 3).  That is, if we recognize patterns in nature and then commit them to memory, we can later use those remembered patterns to our advantage in order to accomplish goals effectively.  The more patterns that we can recognize and remember, specifically those that do in fact correlate with reality (as opposed to erroneously “recognized” patterns that are actually non-existent), the better our chances of predicting the consequences of our actions accurately, and thus the better chances we have at obtaining our goals.  In short, the more patterns that we can recognize and remember, the greater our intelligence.  It is therefore no coincidence that intelligence tests are primarily based on gauging one’s ability to recognize patterns (e.g. solving Raven’s Progressive Matrices, puzzles, etc.).

To emphasize the role of pattern recognition as it applies to knowledge, if we use my previously modified Platonic definition of knowledge, that is,  that knowledge is defined as “justified, non-coincidentally-true belief”, then I must break down the individual terms of this definition as follows, starting with “belief”:

  • Belief = Recognized patterns of causality that are stored into memory for later recall and use.
  • Non-Coincidentally-True = The belief positively and consistently correlates with reality, and thus not just through luck or chance.
  • Justified = Empirical evidence exists to support said belief.

So in summary, I have defined knowledge (more specifically) as:

“Recognized patterns of causality that are stored into memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by empirical evidence (e.g. successful predictions made and/or goals accomplished through the use of said recalled patterns)”.

This means that if we believe something to be true that is unfalsifiable (such as religious beliefs that rely on faith), since it has not met the justification criteria, it fails to be considered knowledge (even if it is still considered a “belief”).  Also, if we are able to make a successful prediction with the patterns we’ve recognized, yet are only able to do so once, due to the lack of consistency, we likely just got lucky and didn’t actually correctly identify a pattern that correlates with reality, and thus this would fail to count as knowledge.  Finally, one should also note that the patterns that are recognized were not specifically defined as “consciously” recognized/remembered, nor was it specified that the patterns couldn’t be innately acquired/stored into memory (through DNA coded or other pre-sensory neural developmental mechanisms).  Thus, even procedural knowledge like learning to ride a bike or other forms of “muscle memory” used to complete a task, or any innate form of knowledge (acquired before/without sensory input) would be an example of unconscious or implicit knowledge that still fulfills this definition I’ve given above.  In the case of unconscious/implicit knowledge, we would have to accept that “beliefs” can also be unconscious/implicit (in order to remain consistent with the definition I’ve chosen), and I don’t see this as being a problem at all.  One just has to keep in mind that when people use the term “belief”, they are likely going to be referring to only those that are in our consciousness, a subset of all beliefs that exist, and thus still correct and adherent to the definition laid out here.

This is how I prefer to define “knowledge”, and I think it is a robust definition that successfully solves many (though certainly not all) of the philosophical problems that one tends to encounter in epistemology.

An Evolved Consciousness Creating Conscious Evolution

Two Evolutionary Leaps That Changed It All

As I’ve mentioned in a previous post, human biological evolution has led to the emergence of not only consciousness but also a co-existing yet semi-independent cultural evolution (through the unique evolution of the human brain).  This evolutionary leap has allowed us to produce increasingly powerful technologies which in turn have provided a means for circumventing many natural selection pressures that our physical bodies would otherwise be unable to handle.

One of these technologies has been the selective breeding of plants and animals, a process often referred to as “artificial” selection (as opposed to “natural” selection) since human beings have served as an artificial selection pressure, rather than the natural selection pressures of the environment in general.  By choosing which plants and animals to cultivate and raise, we basically catalyzed the selection process by providing a selection pressure based on the traits we desired most.  As a result, rather than taking thousands or even millions of years to produce what we have today (in terms of domesticated plants and animals), the process took a minute fraction of that time, since it was mediated through a consciously guided or teleological process, unlike natural selection, which operates on randomly differentiating traits leading to differential reproductive success (and thus new genomes and species) over time.

This second evolutionary leap (artificial selection that is) has ultimately paved the way for civilization, as it has increased the landscape of our diet and thus our available options for food, and the resultant agriculture has allowed us to increase our population density such that human collaboration, complex distribution of labor, and ultimately the means for creating new and increasingly complex technologies, have been made possible.  It is largely because of this new evolutionary leap that we’ve been able to reach the current pinnacle of human evolution, the newest and perhaps our last evolutionary leap, or what I’ve previously referred to as “engineered selection”.

With artificial selection, we’ve been able to create new species of plants and animals with very unique and unprecedented traits, however we’ve been limited by the rate of mutations or other genomic differentiating mechanisms that must arise in order to create any new and desirable traits. With engineered selection, we can simply select or engineer the genomic sequences required to produce the desired traits, effectively allowing us to circumvent any genomic differentiation rate limitations and also allowing us instant access to every genomic possibility.

Genetic Engineering Progress & Applications

After a few decades of genetic engineering research, we’ve gained a number of capabilities including but not limited to: producing recombinant DNA, producing transgenic organisms, utilizing in vivo trans-species protein production, and even creating the world’s first synthetic life form (by adding a completely synthetic or human-constructed bacterial genome to a cell containing no DNA).  The plethora of potential applications for genetic engineering (as well as those applications currently in use) has continued to grow as scientists and other creative thinkers are further discovering the power and scope of areas such as mimetics, micro-organism domestication, nano-biomaterials, and many other inter-related niches.

Domestication of Genetically Engineered Micro and Macro-organisms

People have been genetically modifying plants and animals for the same reasons they’ve been artificially selecting them — in order to produce species with more desirable traits. Plants and animals have been genetically engineered to withstand harsher climates, resist harmful herbicides or pesticides (or produce their own pesticides), produce more food or calories per acre (or more nutritious food when all else is equal), etc.  Plants and animals have also been genetically modified for the purposes of “pharming”, where substances that aren’t normally produced by the plant or animal (e.g. pharmacological substances, vaccines, etc.) are expressed, extracted, and then purified.

One of the most compelling applications of genetic engineering within agriculture involves solving the “omnivore’s dilemma”, that is, the prospect of growing unconscious livestock by genetically inhibiting the development of certain parts of the brain so that the animal doesn’t experience any pain or suffering.  There have also been advancements made with in vitro meat, that is, producing cultured meat cells so that no actual animal is needed at all other than some starting cells taken painlessly from live animals (which are then placed into a culture media to grow into larger quantities of meat), however it should be noted that this latter technique doesn’t actually require any genetic modification, although genetic modification may have merit in improving these techniques.  The most important point here is that these methods should decrease the financial and environmental costs of eating meat, and will likely help to solve the ethical issues regarding the inhumane treatment of animals within agriculture.

We’ve now entered a new niche regarding the domestication of species.  As of a few decades ago, we began domesticating micro-organisms. Micro-organisms have been modified and utilized to produce insulin for diabetics as well as other forms of medicine such as vaccines, human growth hormone, etc.  There have also been certain forms of bacteria genetically modified in order to turn cellulose and other plant material directly into hydrocarbon fuels.  This year (2014), E. coli bacteria have been genetically modified in order to turn glucose into pinene (a high-energy hydrocarbon used as a rocket fuel).  In 2013, researchers at the University of California, Davis, genetically engineered cyanobacteria (a.k.a. blue-green algae) by adding particular DNA sequences to their genome which coded for specific enzymes, such that they can use sunlight and the process of photosynthesis to turn CO2 into 2,3-butanediol (a chemical that can be used as a fuel, or to make paint, solvents, and plastics), thus producing another means of turning our overabundant carbon emissions back into fuel.

On a related note, there are also efforts underway to improve the efficiency of certain hydrocarbon-eating bacteria such as A. borkumensis in order to clean up oil spills even more effectively.  Imagine one day having the ability to use genetically engineered bacteria to directly convert carbon emissions back into mass-produced fuel, and, if the fuel spills during transport, having the counterpart capability of cleaning it up efficiently with another form of genetically engineered bacteria.  These capabilities are being further developed and are only the tip of the iceberg.

In theory, we should also be able to genetically engineer bacteria to decompose many other materials or waste products that ordinarily decompose extremely slowly.  If any of these waste products are hazardous, bacteria could be genetically engineered to break down or transform them into safe and stable compounds.  With these types of solutions we can make many new materials and have a method in place for their proper disposal (if needed).  Additionally, by utilizing some techniques mentioned in the next section, we can also start making more novel materials that decompose through non-genetically-engineered mechanisms.

It is likely that genetically modified bacteria will continue to provide us with many new types of mass-produced chemicals and products.  For those processes that do not work effectively (if at all) in bacterial (i.e. prokaryotic) cells, eukaryotic cells such as yeast, insect cells, and mammalian cells can often serve as a viable option.  All of these genetically engineered, domesticated micro-organisms will likely be an invaluable complement to the increasing number of genetically modified plants and animals already being produced.

Mimetics

In the case of mimetics, scientists are discovering new ways of creating novel materials using a bottom-up approach at the nano-scale, by utilizing some of the self-assembly techniques that natural selection has near-perfected over millions of years.  For example, mollusks form seashells with incredibly strong structural and mechanical properties: their DNA codes for the synthesis of specific proteins, and those proteins bond the raw materials of calcium and carbonate into alternating layers until a fully formed shell is produced.  Clams produce pearls with similar techniques.  We could potentially use the same DNA sequence in combination with a scaffold of our choosing such that a similar product is formed with unique geometries, or, through genetic engineering techniques, we could modify the DNA sequence so that it performs the same self-assembly with completely different materials (e.g. silicon, platinum, titanium, polymers, etc.).

By combining the capabilities of scaffolding as well as the production of unique genomic sequences, one can further increase the number of possible nanomaterials or nanostructures, although I’m confident that most if not all scaffolding needs could eventually be accomplished by the DNA sequence alone (much like the production of bone, exoskeleton, and other types of structural tissues in animals).  The same principles can be applied by looking at how silk is produced by spiders, how the cochlear hair cells are produced in mammals, etc.  Many of these materials are stronger, lighter, and more defect-free than some of the best human products ever engineered.  By mimicking and modifying these DNA-induced self-assembly techniques, we can produce entirely new materials with unprecedented properties.

If we realize that even the largest plants and animals use these same nano-scale assembly processes to build themselves, it isn’t hard to imagine using these genetic engineering techniques to effectively grow complete macro-scale consumer products.  This may sound incredibly unrealistic with our current capabilities, but imagine one day being able to grow finished products such as clothing, hardware, tools, or even a house.  There are already people working on these capabilities to some degree (for example using 3D printed scaffolding or other scaffolding means and having plant or animal tissue grow around it to form an environmentally integrated bio-structure).  If this is indeed realizable, then perhaps we could find a genetic sequence to produce almost anything we want, even a functional computer or other device.  If nature can use DNA and natural selection to produce macro-scale organisms with brains capable of pattern recognition, consciousness, and computation (and eventually the learned capability of genetic engineering in the case of the human brain), then it seems entirely reasonable that we could eventually engineer DNA sequences to produce things with at least that much complexity, if not far higher complexity, and using a much larger selection of materials.

Other advantages of such an approach include the enormous energy savings gained by adopting the naturally selected, economically efficient process of self-assembly (including fewer conversions between forms of energy, and thus less loss) and a reduction in product-specific manufacturing infrastructure.  That is, whereas we’ve typically made industrial-scale machines individually tailored to produce specific components which are later assembled into a final product, by using DNA (and the proteins it codes for) to do the work for us, we will no longer require nearly as much manufacturing capital, for the genetic engineering capital needed to produce any genetic sequence is far more versatile.

Transcending the Human Species

Perhaps the most important application of genetic engineering will be the modification of our own species.  Many of the world’s problems are caused by sudden environmental changes (many of them anthropogenic), and if we can change ourselves and/or other species biologically in order to adapt to these unexpected and sudden environmental changes (or to help prevent them altogether), then the severity of those problems can be reduced or eliminated.  In a sense, we would be directing the selection of our own species as well as others by providing the proper genes to begin with, rather than relying on extremely slow genomic differentiation mechanisms and the greater rates of suffering and loss of life that natural selection normally entails.

Genetic Enhancement of Existing Features

With power over the genome, we may one day be able to genetically increase our life expectancy: for example, by modifying the DNA polymerase-γ enzyme in our mitochondria such that it makes fewer errors (i.e. mutations) during DNA replication, by genetically altering the telomeres in our nuclear DNA such that they can maintain their length and handle more mitotic divisions, or by finding other ways to preserve nuclear DNA.  If we also determine which genes lead to certain diseases (as well as any genes that help to prevent them), genetic engineering may be the key to extending the length of our lives, perhaps indefinitely.  It may also be the key to improving the quality of that extended life by replacing the techniques we currently use for health and wellness management (including pharmaceuticals) with perhaps the most efficacious form of preventative medicine imaginable.

If we can optimize our brain’s ability to perform neuronal regeneration, reconnection, rewiring, and/or re-weighting based on the genetic instructions that at least partially mediate these processes, this optimization should drastically improve our ability to learn: by improving the synaptic encoding and consolidation processes involved in memory, and by improving the combinatorial operations leading to higher conceptual complexity.  Along the same lines, by increasing the number of pattern recognition modules that develop in the neocortex, or by optimizing their configuration (perhaps by increasing the number of hierarchies), our general intelligence would increase as well and would be an excellent complement to an optimized memory.  It seems reasonable to assume that these types of cognitive changes would have dramatic effects on how we think and thus would likely affect our philosophical beliefs as well.  Religious beliefs are also likely to change, as the psychological comforts provided by certain beliefs may no longer be as effective (if those comforts continue to exist at all), especially as our species continues to phase out non-naturalistic explanations and beliefs as a result of seeing the world from a more objective perspective.

If we are able to manipulate our genetic code in order to improve the mechanisms that underlie learning, then we should also be able to alter our innate abilities through genetic engineering.  For example, what if infants could walk immediately after birth (much like a newborn calf)?  What if infants had adequate motor skills to produce (at least some) spoken language much more quickly?  Infants normally have language acquisition mechanisms which allow them to eventually learn language comprehension and production, but this typically takes a lot of practice and requires their motor skills to catch up before they can utter a single word that they do in fact understand.  Circumventing the learning requirement and the motor-skill developmental lag (at least to some degree) would be a phenomenal evolutionary advancement, and this type of innate enhancement could apply to a large number of different physical skills and abilities.

Since DNA ultimately controls the types of sensory receptors we have, we should eventually be able to optimize these as well.  For example, photoreceptors could be modified such that we would be able to see new frequencies of electromagnetic radiation (perhaps a more optimized range of frequencies, if not a larger range altogether).  Mechanoreceptors of all types could be modified, for example, to hear a different if not larger range of sound frequencies or to increase tactile sensitivity (i.e. touch).  Olfactory or gustatory receptors could also be modified in order to allow us to smell and taste previously undetectable chemicals.  Basically, all of our sensations could be genetically modified and, when combined with the aforementioned genetic modifications to the brain itself, this would allow us greater and more optimized dimensions of perception in our subjective experiences.

Genetic Enhancement of Novel Features

So far I’ve been discussing how we may be able to use genetic engineering to enhance features we already possess, but there’s no reason we can’t consider using the same techniques to add entirely new features to the human repertoire. For example, we could combine certain genes from other animals such that we can re-grow damaged limbs or organs, have gills to breathe underwater, have wings in order to fly, etc.  For that matter, we may even be able to combine certain genes from plants such that we can produce (at least some of) our own chemical energy from the sun, that is, create at least partially photosynthetic human beings.  It is certainly science fiction at the moment, but I wouldn’t discount the possibility of accomplishing this one day after considering all of the other hybrid and transgenic species we’ve created already, and after considering the possible precedent mentioned in the endosymbiotic theory (where an ancient organism may have “absorbed” another to produce energy for it, e.g. mitochondria and chloroplasts in eukaryotic cells).

Above and beyond these possibilities, we could also potentially create advanced cybernetic organisms.  What if we were able to integrate silicon-based electronic devices (or something more biologically compatible if needed) into our bodies such that the body grows or repairs some of these technologies using biological processes?  Perhaps if the body is given the proper diet (i.e. whatever materials are needed in the new technological “organ”) and has the proper genetic code to assimilate those materials, it could create entirely new “organs” with advanced technological features (e.g. wireless communication, or wireless access to an internet database activated by particular thoughts or another physiological command cue), and we may eventually be able to get rid of external interface hardware and peripherals altogether.  It is likely that electronic devices will first become integrated into our bodies through surgical implantation in order to work with our body’s current hardware (including the brain), but having the body actually grow and/or repair these devices using DNA instructions would be the next logical step of innovation, if it is eventually feasible.

Malleable Human Nature

When people discuss complex issues such as social engineering, sustainability, crime-reduction, etc., it is often mentioned that there is a fundamental barrier between our current societal state and where we want or need to be, and this barrier is none other than human nature itself.  Many people in power have tried to change human behavior with brute force while operating under the false assumption that human beings are analogous to some kind of blank slate that can simply learn or be conditioned to behave in any way without limits. This denial of human nature (whether implicit or explicit) has led to a lot of needless suffering and has also led to the de-synchronization of biological and cultural evolution.

Humans often think that we can adapt to any cultural change, but we often lose sight of the detrimental power that technology and other cultural inventions can have over our physiological and psychological well-being.  In a nutshell, the speed of cultural evolution can often make us feel like a fish out of water, perhaps better suited to an environment closer to that of our early human ancestors.  Whatever the case, we must embrace human nature and realize that our efforts to improve society (or ourselves) will only have long-term efficacy if we work with human nature rather than against it.  So what can we do if our biological evolution is out of sync with our cultural evolution?  And what can we do if we have no choice but to accept human nature, that is, our (often selfish) biologically driven motivations, tendencies, etc.?  Once again, genetic engineering may provide a solution to many of these previously insoluble problems.  To put it simply, if we can change our genome as desired, then we may be able not only to synchronize our biological and cultural evolution, but also to change human nature itself in the process.  This change could not only make us feel better adjusted to the modern cultural environment we’re living in, but also incline us to instinctually behave in ways that are more beneficial to each other and to the world as a whole.

It’s often said that we have selfish genes in some sense; that is, many if not all of our selfish behaviors (as well as instinctual behaviors in general) are a reflection of the strategy that genes implement in their vehicles (i.e. our bodies) in order to maintain themselves and reproduce.  That genes possess this kind of strategy does not require us to assume that they are conscious in any way or have actual goals per se, but rather that natural selection simply selects for genes that code for mechanisms which best maintain and spread those very genes.  Natural selection tends toward effective self-replicators, and that’s why “selfish” genes (in large part) cause many of our behaviors.  Improving reproductive fitness has been the primary result of this strategy, and many of the behaviors and motivations that were most advantageous for accomplishing it are no longer compatible with modern culture, including the long-term goals and greater good that humans often strive for.
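The logic of the paragraph above (that natural selection simply favors whatever replicates best, with no goals or foresight involved) can be illustrated with a toy simulation.  This is a deliberate caricature, not a biological model; the variant names, starting counts, and replication rates below are all invented for illustration:

```python
def select_replicators(counts, rates, generations, capacity=1000.0):
    """Toy natural selection: each gene variant multiplies by its
    replication rate, then the population is scaled back down to a
    fixed carrying capacity.  No variant 'wants' anything; the faster
    replicator simply ends up dominating."""
    counts = dict(counts)
    for _ in range(generations):
        # Each variant replicates in proportion to its rate...
        counts = {g: n * rates[g] for g, n in counts.items()}
        # ...then the carrying capacity culls the total back down.
        total = sum(counts.values())
        counts = {g: n * capacity / total for g, n in counts.items()}
    return counts

# A rare variant that replicates just 10% faster overwhelms a common one.
final = select_replicators({"fast": 10, "slow": 990},
                           {"fast": 1.10, "slow": 1.00},
                           generations=200)
print(final)  # "fast" now makes up nearly the entire population
```

The point of the sketch is that “selfishness” falls out of nothing more than differential replication under limited resources.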

Humans no longer exclusively live under the law of the jungle or “survival of the fittest”, because our humanistic drives and their cultural reinforcements have expanded our horizons beyond simple self-preservation or a Machiavellian mentality.  Many humans have tried to propagate principles such as honesty, democracy, egalitarianism, immaterialism, sustainability, and altruism around the world, but these efforts are often hijacked by our often short-sighted sexual and survival-based instinctual motivations to gain sexual mates, power, property, a higher social status, etc.  Changing particular genes should also allow us to change these (now) disadvantageous aspects of human nature, and as a result this would completely change how we look at every problem we face.  No longer would we have to say “that solution won’t work because it goes against human nature”, or “the unfortunate events in human history tend to recur in one way or another because humans will always…”; rather, we could ask ourselves how we want or need to be, and actually make it so by changing our human nature.  Indeed, if genetic engineering is used to accomplish this, history would no longer have to repeat itself in the ways that we abhor.  It goes without saying that a lot of our behavior can be changed for the better by an appropriate form of environmental conditioning, but for those behaviors that can’t be changed through conditioning, genetic engineering may be the key to success.

To Be or Not To Be?

It seems that we have been given a unique opportunity to combine our ever-increasing wealth of experiential data and knowledge with genetic engineering techniques to engineer a social organism that is by far the best adapted to its environment.  Additionally, if the barriers of human nature and the de-synchronization of biological and cultural evolution are overcome, we may one day find ourselves living in a true global utopia, and genetic engineering may be the only way of achieving such a goal.  One extremely important issue that I haven’t mentioned until now is the set of ethical concerns regarding the continued use and development of genetic engineering technology.  There are obviously concerns over whether or not we should even be experimenting with this technology.  There are many reasonable arguments both for and against using it, but I think that as a species, we have been driven to manipulate our environment in any way that we are capable of, and this curiosity is a part of human nature itself.  Without genetic engineering, we can’t change any of the negative aspects of human nature but can only let natural selection run its course to modify our species slowly over time (for better or for worse).

If we do accept this technology, there are other concerns, such as the fact that there are corporations and interested parties that want to use genetic engineering primarily if not exclusively for profit (often at the expense of actual universal benefits for our species), and which implement questionable practices like patenting plant and animal food sources in a potentially monopolized future agricultural market.  Perhaps an even graver concern is the potential to patent genes that become a part of the human genome, and the (at least short-term) inequality that would ensue from the wealthier members of society being the primary recipients of genetic human enhancement.  Some people may also use genetic engineering to create new bio-warfare weaponry or find other violent or malicious applications.  Some of these practices could threaten certain democratic or other moral principles, so we need to be extremely cautious with how we as a society choose to implement and regulate this technology.  There are also numerous issues regarding how these technologies will affect the environment and various ecosystems, whether caused by people with admirable intentions or not.  So it is definitely prudent that we proceed with caution and get the public heavily involved with this cultural change so that our society can move forward as responsibly as possible.

As for the feasibility of the theoretical applications mentioned earlier, it will likely be computer simulation and computing power that catalyze the knowledge base and capability needed to realize many of these goals (by decoding the incredibly complex interactions between genes and the environment), and thus they will likely be the primary limiting factor.  Genetic engineering may also involve expanding the DNA components we have to work with: for example, expanding our base-four system (i.e. four nucleotides to choose from) to a higher-base system through the use of other naturally occurring nucleotides or even UBPs (i.e. “Unnatural Base Pairs”).  If this can be done while still maintaining low rates of base-pair mismatching and adequate genetic information processing rates, we may be able to utilize previously inaccessible capabilities by increasing the genetic information density of DNA.  If we can overcome some of the chemical natural-selection barriers that were present during abiogenesis and the evolution of DNA (and RNA), and/or if we can change the very structure of DNA itself (as well as the proteins and enzymes that are required for its implementation), we may be able to produce an entirely new type of genetic information storage and processing system, potentially circumventing many of the limitations of DNA in general, and thus creating a vast array of new species (genetically coded by a different nucleic acid or other substance).  This type of “nucleic acid engineering”, if viable, may complement the genetic engineering we’re currently performing on DNA and help us to further accomplish some of the aforementioned goals and applications.
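The information-density claim above is easy to quantify: each position in an alphabet of N symbols can carry log2(N) bits, so moving from four nucleotides to six (i.e. one unnatural base pair adding two new letters) raises the capacity from 2 to about 2.58 bits per base.  A quick sketch of this arithmetic (which, as noted above, ignores mismatch rates and processing speed entirely):

```python
import math

def bits_per_base(alphabet_size):
    """Information capacity of one position in a genetic alphabet."""
    return math.log2(alphabet_size)

# Natural DNA uses 4 nucleotides (A, C, G, T); each unnatural base
# pair contributes 2 new letters to the alphabet.
for n in (4, 6, 8):
    print(f"{n}-letter alphabet: {bits_per_base(n):.3f} bits per base")
```

The gain per base is modest, but over a genome of billions of bases it compounds into a substantial increase in total information density.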

Lastly, while some of the theoretical applications of genetic engineering that I’ve presented in this post may not sound at all plausible to some, I think it’s extremely important and entirely reasonable (based on historical precedent) to avoid underestimating the capabilities of our species.  We may one day be able to transform ourselves into whatever species we desire, effectively taking us from transhumanism to some perpetual form of conscious evolution and speciation.  What I find most beautiful here is that the evolution of consciousness has actually led to a form of conscious evolution.  Hopefully our species will guide this evolution in ways that are most advantageous to us, and to the entire diversity of life on this planet.

Neuroscience Arms Race & Our Changing World View

Since at least the time of Hippocrates, people have recognized that the brain is the physical correlate of consciousness and thought.  Since then, the fields of psychology, neuroscience, and several inter-related fields have emerged.  Numerous advancements have been made within neuroscience during the last decade or so, and in that same time frame there has also been increased interest in the social, religious, philosophical, and moral implications precipitating from such a far-reaching field.  Certainly the medical knowledge we’ve obtained from the neurosciences has been the primary benefit of such research efforts: we’ve learned quite a bit more about how the brain works and how it is structured, and ongoing research into neuropathology has led to improvements in diagnosing and treating various mental illnesses.  However, it is the other side of neuroscience that I’d like to focus on in this post: the paradigm shift relating to how we are starting to see the world around us (including ourselves), and how this is affecting our goals as well as how we achieve them.

Paradigm Shift of Our World View

Aside from the medical knowledge we are obtaining from the neurosciences, we are also gaining new perspectives on what exactly the “mind” is.  We’ve come a long way in demonstrating that “mental” or “mind” states are correlated with physical brain states, and there is an ever-growing body of evidence which suggests that these mind states are in fact caused by brain states.  It should come as no surprise, then, that all of our thoughts and behaviors are also caused by these physical brain states.  It is because of this scientific realization that society is currently undergoing an important paradigm shift in its world view.

If all of our thoughts and behaviors are mediated by our physical brain states, then many everyday concepts such as thinking, learning, personality, and decision making can take on entirely new meanings.  To illustrate this point, I’d like to briefly mention the well known “nature vs. nurture” debate.  The current consensus among scientists is that people (i.e. their thoughts and behavior) are ultimately products of both their genes and their environment.

Genes & Environment

From a neuroscientific perspective, the genetic component is accounted for by noting that genes play a very large role in directing the initial brain-wiring schema of an individual during embryological development and gestation.  During this time, the brain develops very basic instinctual behavioral “programs” which are physically constituted by vastly complex neural networks, and the body’s developing sensory organs and systems are connected to particular groups of these neural networks.  These complex neural networks, which have presumably been naturally selected to benefit the survival of the individual, continue being constructed after gestation and throughout the entire ontogenetic development of the individual (albeit to lesser degrees over time).

As for the environmental component, this can be further split into two parts: the internal and the external environment.  The internal environment within the brain itself, including various chemical concentration gradients partly mediated by random Brownian motion, constrains gene expression and works with the genetic instructions to help guide neuronal growth, migration, and connectivity.  The external environment, consisting of various sensory stimuli, seems to modify this neural construction by providing inputs which may cause the constituent neurons within these networks to change their signal strength, change their action-potential threshold, and/or modify their connections with particular neurons (among other possible changes).
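The kind of input-driven change in connection strength described above is often caricatured by the Hebbian rule (“neurons that fire together wire together”).  The sketch below is a toy version of that rule, not a model of real synapses; the weights and activity values are arbitrary numbers chosen for illustration:

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Toy Hebbian rule: a connection strengthens only when its
    presynaptic and postsynaptic neurons are active at the same time."""
    return [w + lr * x * y for w, x, y in zip(weights, pre, post)]

weights = [0.5, 0.5, 0.5]   # initial connection strengths
pre     = [1.0, 0.0, 1.0]   # presynaptic activity (e.g. sensory input)
post    = [1.0, 1.0, 0.0]   # postsynaptic activity
print(hebbian_update(weights, pre, post))  # only the co-active pair strengthens
```

Real synaptic plasticity is vastly more complicated (thresholds, timing, neuromodulators), but the sketch captures the core idea that experience leaves a physical trace in connection weights.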

Causal Constraints

This combination of genetic instructions and environmental interaction produces a conscious, thinking, and behaving being through a large number of ongoing and highly complex hardware changes.  It isn’t difficult to imagine why these insights from neuroscience might modify our conventional views of concepts such as thinking, learning, personality, and decision making.  Prior to the developments of the last few decades, the brain was largely seen as a sort of “black box”, with its internal milieu and functional properties remaining mysterious and inaccessible.  For millennia before that, many people assumed that our thoughts and behaviors were self-caused, or causa sui.  That is, people believed that they themselves (i.e. some causally free “consciousness”, “soul”, etc.) caused their own thoughts and behavior, as opposed to those thoughts and behaviors being ultimately caused by physical processes (e.g. neuronal activity, chemical reactions, etc.).

Neuroscience (as well as biochemistry and its underlying physics) has shed a lot of light on this long-held assumption and, as it stands, the evidence has shown it to be false.  The brain is ultimately governed by the laws of physics, since no chemical reaction or neural event that physically produces our thoughts, choices, and behaviors has ever been shown to be causally free from these physical constraints.  I will mention that quantum uncertainty or quantum “randomness” (if ontologically random) does provide some possible freedom from physical determinism.  However, these findings from quantum physics do not provide any support for self-caused thoughts or behaviors.  Rather, they merely show that those physically constrained thoughts and behaviors may never be completely predictable by physical laws, no matter how much data is obtained.  In other words, our thoughts and behaviors are produced by highly predictable (although not necessarily completely predictable) physical laws and constraints, along with some possible random causal factors.

As a result of these physical causal constraints, the conventional perspective of an individual having classical free will has been shattered.  Our traditional views of human attributes including morality, choice, ideology, and even individualism continue to change markedly.  Not surprisingly, many people are uncomfortable with these scientific discoveries, including members of various religious and ideological groups that are largely based upon, and thus depend on, the very presupposition of precepts such as classical free will and moral responsibility.  The evidence accumulating from the neurosciences shows that while people are causally responsible for their thoughts, choices, and behavior (i.e. an individual’s thoughts and subsequent behavior are constituents of a causal chain of events), they are not morally responsible in the sense that they could have chosen to think or behave any differently than they did, for their thoughts and behavior are ultimately governed by physically constrained neural processes.

New World View

Now I’d like to return to what I mentioned earlier and consider how these insights from neuroscience may be drastically modifying how we look at concepts such as thinking, learning, personality, and decision making.  If our brain is operating via these neural network dynamics, then conscious thought appears to be produced by a particular subset of these neural network configurations and processes.  So as we continue to learn how to more directly control or alter these neural network arrangements and processes (above and beyond simply applying electrical potentials to certain neural regions in order to bring memories or other forms of imagery into consciousness, as we’ve done in the past), we should be able to control thought generation from a more “bottom-up” approach.  Neuroscience is definitely heading in this direction, although there is a lot of work to be done before we have any considerable knowledge of and control over such processes.

Likewise, learning seems to consist of a certain type of neural network modification (involving memory), leading to changes in causal pattern recognition (among other things), which results in our ability to more easily achieve our goals over time.  We’ve typically thought of learning as the successful input, retention, and recall of new information, achieved through the input of environmental stimuli via our sensory organs and systems.  In the future, as with the aforementioned bottom-up thought generation, it may be possible to physically modify our neural networks to directly implant memories and causal pattern-recognition information in order to “learn” without any actual sensory input, and/or we may eventually be able to “upload” information in a way that bypasses the typical sensory pathways, thus potentially allowing us to catalyze the learning process in unprecedented ways.

If we are one day able to more directly control the neural configurations and processes that lead to specific thoughts as well as learned information, then there is no reason we won’t be able to modify our personalities, our decision-making abilities and “algorithms”, etc.  In a nutshell, we may be able to modify any aspect of “who” we are in extraordinary ways (whether this is a “good” or “bad” thing is another issue entirely).  As we come to learn more about the genetic components of these neural processes, we may also be able to use various genetic engineering techniques to assist with the neural modifications required to achieve these goals.  The bottom line here is that people are products of their genes and environment, and by manipulating both of those causal constraints in more direct ways (e.g. through the use of neuroscientific techniques), we may be able to achieve previously unattainable abilities, perhaps in a relatively minuscule amount of time.  It goes without saying that these methods will also significantly affect our evolutionary course as a species, allowing us to enter new landscapes through our substantially enhanced ability to adapt.  This may be realized by finding ways to improve our intelligence, memory, or other cognitive faculties, effectively giving us the ability to engineer or re-engineer our brains as desired.

Neuroscience Arms Race

We can see that increasing our knowledge and capabilities within the neurosciences has the potential to bring drastic societal changes, some of which are already starting to be realized.  The impact that these fields will have on how we approach the problem of criminal, violent, or otherwise undesirable behavior cannot be overstated.  Trying to correct these issues by focusing our efforts on the neural or cognitive substrates that underlie them, as opposed to the less direct, external means (e.g. social engineering methods) we’ve used thus far, may lead to solutions that are far less expensive and realized much more quickly.

As with any scientific discovery and the technologies that follow from it, neuroscience has the power to bestow on us both benefits and harms.  I’m reminded of the ground-breaking efforts made within nuclear physics several decades ago, whereby physicists not only gained precious information about subatomic particles (and their binding energies) but also learned how to release enormous amounts of energy through nuclear fusion and fission reactions.  It wasn’t long before those breakthrough discoveries were used by others to create the first atomic bombs.  Likewise, while our increasing knowledge within neuroscience can help society by optimizing our brain function and behavior, it can also be used by various entities to manipulate the populace for unethical ends.

For example, despite the many free market proponents who claim that the economy need not be regulated by anything other than rational consumers and their choices of goods and services, corporations have clearly increased their use of marketing strategies that exploit many humans’ irrational tendencies (whether through “buy one get one free” offers, “sales” on items with artificially raised prices, etc.).  Politicians and other leaders use similar tactics, exploiting voters’ emotional vulnerabilities on controversial issues that serve as little more than an ideological distraction, in order to blunt any awareness or rational analysis of more pressing issues.

There are already research and development efforts underway by these various entities to exploit findings within neuroscience so as to gain greater influence over people’s decisions (whether consumers’ purchases, votes, etc.).  To give one example of such R&D, it is believed that MRI (magnetic resonance imaging) or fMRI (functional MRI) brain scans may eventually reveal useful details about a person’s personality or their innate or conditioned tendencies (including compulsive or addictive tendencies, preferences for certain foods or behaviors, etc.).  This kind of capability (if realized) would allow marketers to maximize how many dollars they can squeeze out of each consumer by optimizing their choices of goods and services and how those are advertised.  We have already seen how tracked internet purchases begin to personalize the advertisements we see online (e.g. if you buy fishing gear online, you may subsequently notice more advertisements and pop-ups for fishing-related goods and services).  If possible, the information gleaned through these kinds of “brain probing” methods could be applied to other areas, including political decision making.

While these neuroscience-derived methods may be beneficial in some cases, for instance by giving consumers more automated access to products they may need or want (likely the selling point corporations will use to obtain consumer approval of such methods), they will also exacerbate unsustainable consumption and other personally or societally destructive tendencies, and they are likely to further erode (or eliminate) whatever rational decision-making capabilities we have left.

Final Thoughts

As we can see, neuroscience has the potential to (and is already starting to) completely change the way we look at the world.  Further advancements in these fields will likely redefine many of our goals as well as how we achieve them.  They may also allow us to solve many problems we face as a species, far beyond simply curing mental illnesses or ailments.  The main question that comes to mind is:  Who will win the neuroscience arms race?  Will it be the humanitarians, scientists, and medical professionals striving to accumulate knowledge in order to solve the problems of individuals and societies and to increase their quality of life?  Or will it be the entities trying to accumulate similar knowledge in order to exploit human weaknesses for wealth and power, thus exacerbating the problems we currently face?

The Co-Evolution of Language and Complex Thought

Language appears to be the most profound feature to have arisen during the evolution of the human mind.  It has led to incredible thought complexity while also providing the foundation for the simplest thoughts imaginable.  Many of us may wonder how exactly language is related to thought, and how the evolution of language has affected the evolution of thought complexity.  In this post, I plan to discuss what I believe to be some evolutionary aspects of psycholinguistics.

Mental Languages

It is clear that humans think in some form of language, whether as an interior monologue in our native spoken language and/or some form of what many call “mentalese” (i.e. a means of thinking about concepts and propositions without the use of words).  Our thoughts are likely accomplished by a combination of these two “types” of language.  The fact that young infants and aphasics (for example) are able to think clearly implies that not all thought is accomplished through a spoken language.  It is also likely that this “mentalese” is an innate form of mental symbolic representation that is primary in some sense, a claim supported by the fact that it appears necessary for spoken language to develop or exist at all.  That words and sentences (at least non-iconic ones) have no intrinsic semantic content or value illustrates that “mentalese” is in fact a prerequisite for understanding or assigning the meaning of words and sentences.  Complex words can always be defined in terms of less complex words, but at some point a limit is reached whereby the simplest units of definition are composed of seemingly irreducible concepts and propositions.  Furthermore, those irreducible concepts and propositions do not require any words to have meaning (for if they did, we would have an infinite regress of words being defined by words being defined by words, ad infinitum).  The only time we theoretically require symbolic representation of semantic content using words is when the concepts are to be easily (if at all) communicated to others.
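The regress argument can be pictured with a toy model: treat a dictionary as a directed graph in which each word points to the words that define it, and note that every chain of definitions must bottom out in primitives whose meaning cannot come from further words. The miniature dictionary below is entirely hypothetical, just a sketch of the idea.

```python
# A toy rendering of the regress argument: a dictionary as a directed graph
# in which each word points to the words that define it. The entries below
# are invented purely for illustration.
toy_dictionary = {
    "bachelor": ["unmarried", "man"],
    "unmarried": ["not", "married"],
    "married": ["union"],
    # Primitives: defined by no further words. Their meaning has to come
    # from outside the word-to-word graph (i.e. from "mentalese").
    "man": [],
    "not": [],
    "union": [],
}

def unpack(word, seen=None):
    """Follow definitions until they bottom out in primitives (or loop)."""
    if seen is None:
        seen = set()
    if word in seen:
        return {word}  # the definition loops back on itself
    seen.add(word)
    parts = toy_dictionary.get(word, [])
    if not parts:
        return {word}  # a primitive: no further verbal definition
    result = set()
    for part in parts:
        result |= unpack(part, seen)
    return result

# Every chain of definitions ends in wordless primitives:
print(unpack("bachelor"))  # {'not', 'union', 'man'} in some order
```

Either the graph terminates in primitives (whose meaning must be supplied non-verbally) or it cycles, which is just the infinite regress in finite clothing.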

While some form of mentalese is likely the foundation or even ultimate form of thought, it is my contention that communicable language has likely had a considerable impact on the evolution of the human mind — not only in the most trivial or obvious way whereby communicated words affect our thoughts (e.g. through inspiration, imagination, and/or reflection of new knowledge or perspectives), but also by serving as a secondary multidimensional medium for symbolic representation. That is, spoken language (as well as its subsequent allotrope, written language) has provided a form of combinatorial leverage somewhat independent of (although working in harmony with) the mental or cognitive faculties that innately exist for thought.

To be sure, spoken language has likely co-evolved with our mentalese, as the two seem to affect one another in various ways.  As new types or combinations of propositions and concepts are discovered, spoken language has to adapt in order to make them communicable to others.  What interests me more, however, is how communicable language (spoken or written) has affected the evolution of thought complexity itself.

Communicable Language and Thought Complexity

Words and sentences, which developed primarily to communicate instances of our mental language to others, have also undoubtedly provided a secondary multidimensional medium for symbolic representation.  For example, when we use words, we are able to compress a large amount of information (i.e. many concepts and propositions) into small tokens with varying densities.  This kind of compression has provided a way to maximize our use of short-term and long-term memory, allowing more complex thoughts and mental capabilities to develop (whether that increase in complexity is defined as longer strings of concepts or propositions, or otherwise).
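A minimal sketch of that compression idea: one token stands in for a bundle of concepts, so working memory holds one item instead of several. The concept bundle for “dog” below is invented purely for illustration.

```python
# One token ("dog") compresses a bundle of concepts into a single item.
concepts = ["four-legged", "domesticated", "barks", "loyal", "furry"]

# Without a compressing token, each concept is a separate item to track.
uncompressed_load = len(concepts)  # 5 items

# With a token standing for the whole bundle, one item suffices, and the
# bundle can still be unpacked on demand.
lexicon = {"dog": concepts}
compressed_load = len(lexicon)     # 1 item

print(uncompressed_load, compressed_load)  # prints: 5 1
assert lexicon["dog"] == concepts  # the token unpacks losslessly
```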

When we think of a sentence to ourselves, we end up utilizing a phonological/auditory loop, whereby we can better handle and organize information at any given moment by internally “hearing” it.  We can also visualize the words in multiple ways, including what the mouth movements of people speaking those words would look like (and we can use our tactile and/or motor memory to mentally simulate how our mouth feels when those words are spoken).  If a written form of the communicable language exists, we can visualize the words as they would appear in writing (along with the tactile/motor memory of how it feels to write them).  On top of this, we can often visualize each glyph in multiple formats (i.e. different sizes, shapes, fonts, etc.).  This provides a multidimensional memory tool, because it represents the semantic information in ways our brain can perceive and associate with multiple senses (in this case through our auditory, visual, and somatosensory cortices).  When a particular written language uses iconic glyphs (as opposed to arbitrary symbols), the user can also visualize the concept represented by the symbol in an idealized fashion.  Associating information with multiple cognitive faculties or perceptual systems means that more of the brain’s neural network patterns are involved in the attainment, retention, and recall of that information.  Those of us who have successfully used various mnemonic devices and other memory-enhancing “tricks” can clearly see the efficacy and importance of communicable language and its relationship to how we think about and combine various concepts and propositions.

By enhancing our memory, communicable language has served as an epistemic catalyst, allowing us to build upon our previous knowledge in ways that would likely have been impossible otherwise.  Once written language developed, we were no longer limited by our own short-term and long-term memory, for we had a way of recording as much information as we pleased; this also allowed us to better formulate new ideas and consider thoughts that would otherwise have been too complex to mentally visualize or keep track of.  Mathematics, for example, exponentially increased in complexity once we were able to represent the relationships between variables in written form.  Where previously we were limited by our own memory, written language eventually allowed us to formulate incredibly long (sometimes painfully long) mathematical expressions.  Once written language was converted further into an electro-mechanical form (i.e. through the use of computers), our “writing” mediums, information acquisition mechanisms, and pattern recognition capabilities were aided and enhanced exponentially, providing yet another platform for increased epistemic or cognitive “breathing space”.  If our brains underwent particular mutations after communicable language evolved, this may have provided a way to ratchet our way into entirely new cognitive niches or capabilities.  That is, by providing us with new strings of concepts and propositions, communicable language may have created an unprecedented natural selection pressure (and opportunity, if an advantageous brain mutation accompanied this new cognitive input) for our brains to acquire entirely new and possibly more complex fundamental concepts or ways of thinking.

Summary

It seems evident to me that communicable language, once it had developed, served as an extremely important epistemic catalyst and multidimensional cognitive tool that likely had a great influence on the evolution of the human brain.  While some form of mentalese was likely a prerequisite and precursor to any subsequent forms of communicable language, the cognitive “breathing space” that communicable language provided seems to have had a marked impact on the evolution of human thought complexity, and on the amount of knowledge that we’ve been able to obtain from the world around us.  I have no doubt that the current external linguistic tools we use (i.e. written and electronic forms of handling information) will continue to significantly alter the ongoing evolution of the human mind.  Our biological forms of memory will likely adapt in order to be economically optimized and to work better with those external media.  Likewise, our increasing access to new types of information may have provided (and may continue to provide) a natural selective pressure or opportunity for our brains to evolve to think about entirely new and potentially more complex concepts, thereby periodically increasing the lexicon or conceptual database of our “mentalese” (assuming that those new concepts provide a survival/reproductive advantage).

Dreams, Dialogue, and the Unconscious

It has long been believed that our mental structure consists of both a conscious and an unconscious element.  While the conscious element has been studied exhaustively, relatively little is known about the unconscious.  We can certainly infer that it exists, as every part of the self that we can’t control or aren’t aware of must necessarily be mediated by the unconscious.  To be sure, the fields of neuroscience and psychology (among others) have provided a plethora of evidence related to the unconscious, in terms of neuronal structures and activity and of the influence it has on our behavior, respectively.  However, actually accessing the unconscious mind has proven quite difficult.  How can one hope to access this hidden yet incredibly powerful portion of one’s self?  In this post, I plan to discuss what I believe to be two effective ways in which we can learn more about ourselves and access that which seems to elude us day in and day out.

Concept of Self

It is clear that we have an idea of who we are as individuals.  We consciously know what many of our interests are, what our philosophical and/or religious beliefs are, and we also have a subjective view of what we believe to be our personality traits.  I prefer to define this aspect of the self as the “Me”.  In short, the “Me” is the conscious subjective view one holds about themselves.

Another aspect of the self is the “You”, or the way others see you from their own subjective perspective.  It goes without saying that others view us very differently than we view ourselves.  People see things about us that we just don’t notice or that we deny to be true, whether particular personality traits or various behavioral tendencies.  Because most people put on a social mask when they interact with others, the “You” ends up including not only some real albeit unknown aspects of the self, but also how you want to be seen by others and how they want to see you.  So I believe that the “You” is the social self — that which is implied by the individual and that which is inferred by another person.  I believe that the implied self and the inferred self each involve both a conscious and an unconscious element, and thus the implication and the inference will generally differ quite a bit, regardless of any limitations of language.

Finally, we have the aspect of the self that is typically unreachable and seems to operate in the background.  I believe that this portion of the self ultimately drives us to think and behave the way we do, and accounts for what we might describe as a form of “auto-pilot”.  This, of course, is the unconscious portion of the self, and I would call it the “I”.  In my opinion, it is the “I” that represents who we really are as a person (independent of subjective perspectives), as I believe everything conscious about the self is ultimately derived from this “I”.  The “I” includes the beliefs, interests, disinterests, etc., that we are not aware of, yet which likely exist based on some of our behaviors that conflict with our conscious intentions.  This aspect in particular is what I would describe as the objective self, and consequently it is that which we can never fully access or know about with any certainty.

Using the “You” to Access the “I”

I believe that the “You” is in fact a portal for accessing the “I”, for the portion of the “You” that is not derived from one’s artificial social mask will certainly contain at least some truths about one’s self that are either not consciously evident or not believed by the “Me” to be true, even if they are.  Thus, in my opinion, it is inter-subjective communication with others that allows us to learn more about our unconscious self than any other method or action, and I believe this accounts for much of the efficacy of mental health counseling.  That is, by having a discourse with someone else, we get another subjective perspective of the self that is not tainted by our own predispositions.  Even if the conversation isn’t specifically about you, by simply sharing their subjective perspective about anything at all, another person provides you with novel ways of looking at things; and if those perspectives weren’t evident in your conscious repertoire, they may in fact probe the unconscious (by providing recognition cues for unconscious concepts or beliefs).

The key lies in analyzing those external perspectives with an open mind, so that denial and the fear of knowing ourselves do not dominate and hinder this access.  Let’s face it, people often hear what they want to hear (whether about themselves or anything else for that matter), and we often unknowingly ignore the rest in order to feel comfortable and secure.  This sought-out comfort severely inhibits one’s personal growth and thus, at least periodically, we need to be able to depart from our comfort zone so that we can be true to others and be true to ourselves.

It is also important for us to strive to really listen to what others have to say rather than just waiting for our turn to speak.  In doing so, we will gain the most knowledge and get the most out of the human experience.  In particular, by critically listening to others we will learn the most about our “self” including the unconscious aspect.  While I certainly believe that inter-subjective communication is an effective way for us to access the “I”, it is generally only effective if those whom we’re speaking with are open and honest as well.  If they are only attempting to tell you what you want to hear, then even if you embrace their perspective with an open mind, it will not have much of any substance nor be nearly as useful.  There needs to be a mutual understanding that being open and honest is absolutely crucial for a productive discourse to transpire.  All parties involved will benefit from this mutual effort, as everyone will have a chance to gain access to their unconscious.

Another way that inter-subjective communication can help in accessing the unconscious is through mutual projection.  As I mentioned earlier, the “You” is often distorted by others hearing what they want to hear and by your social mask giving others a false impression of who you are.  However, others also tend to project their own insecurities into the “You”.  That is, if a person talking with you says specific things about you, those statements may in fact result from that person unknowingly projecting their own attributes onto you.  If they are uncomfortable with some aspect of themselves, they may accuse you of possessing that aspect, using projection as a defense mechanism.  Thus, if we pay attention to how we talk about others, we may learn more about our own unconscious projections.  Fortunately, if the person you’re speaking with knows you quite well and senses that you are projecting, they may point it out to you, and vice versa.

Dream Analysis

Another potentially useful method for accessing the unconscious is the analysis of one’s dreams.  Freud, Jung, and other well-known psychologists have endorsed this method as an effective psychoanalytic tool.  When we are dreaming, our brain is in a reduced-conscious if not unconscious state (although the brain is highly active during the dream-associated REM phase).  I believe that, due to the decreased sensory input and stimulation during sleep, the brain has more opportunities to “fill in the blanks” and construct an alternate conceptualization of reality.  This may provide a platform for unconscious expression.  When our brain constructs dream content, it seems to utilize a mixture of memories, the current sensory stimuli constituting the sleeper’s environment (albeit a minimal amount, and perhaps necessarily so), and elements from the unconscious.  By analyzing our dreams, we have a chance to interpret symbolic representations likely stemming from the unconscious.  While I don’t believe we can ever know for sure which elements came from the unconscious, by asking ourselves questions related to the dream content and making a concerted effort at analysis, we will likely discover at least some elements of our unconscious, even if we have no way of confirming the origin or significance of each dream component.

Again, just as we must be open-minded and willing to face previously unknown aspects of ourselves during the aforementioned inter-subjective experience, we must be willing to do the same during any dream analysis.  You must be willing to identify personal weaknesses, insecurities, and potentially repressed emotions.  Surely there can be aspects of our unconscious that we would like and appreciate if discovered, but there will likely be a tendency to repress that which we find repulsive about ourselves.  For this reason, I believe that the unconscious contains more negative things about the self than positive things (as implied by Jung’s “Shadow” archetype).

How might one begin such an analysis?  Obviously we must first obtain some data by recording the details of our dreams.  As soon as you wake up from a dream, take advantage of the opportunity to record as many details as you can, so as to be more confident in the analysis.  The longer you wait, the more likely the information will become distorted or lost altogether (as we’ve all experienced at one time or another).  As you record these details, try to include different elements of the dream so that you aren’t only recording your perceptions but also how the setting or events made you feel emotionally.  Note any ambiguities, no matter how trivial, mundane, or irrelevant they may seem.  For example, if you happen to notice groups of people or objects in your dreams, try to note how many there are, as that number may be significant.  If the dream seems to be set in the past, try to infer the approximate date.  Various details may be subtle indicators of unconscious material.

Dreams are often not easy to describe because they tend to deviate from reality and have a largely irrational and/or emotional structure.  All we can do is try our best to describe what we remember, even if it seems nonsensical or is difficult to articulate.

As for the analysis of the dream content, I ask myself specific questions within the context of the dream.  The primary questions include:

  • What might this person, place, or thing symbolize, if they aren’t taken at face value?  That is, what kinds of emotions, qualities, or properties do I associate with these dream contents?
  • If I think my associations for the dream contents are atypical, then what associations might be more common?  In other words, what would I expect the average person to associate the dream content with?  (Collective or personal opinions may present themselves in dreams)

Once these primary questions are addressed, I ask myself questions that may or may not seem to relate to my dream, in order to probe the psyche.  For example:

  • Are there currently any conflicts in my life? (whether involving others or not)
  • If there are conflicts with others, do I desire some form of reconciliation or closure?
  • Have I been feeling guilty about anything lately?
  • Do I have any long term goals set for myself, and if so, are they being realized?
  • What do I like about myself, and why?
  • What do I dislike about myself, and why?  Or perhaps, what would I like to change about myself?
  • Do certain personality traits I feel I possess remind me of anyone else I know?  If so, what is my overall view of that person?
  • Am I envious of anyone else’s life, and if so, what aspects of their life are envied?
  • Are there any childhood experiences I repeatedly think about (good or bad)?
  • Are there any recurring dreams or recurring elements within different dreams?  If so, why might they be significant?
  • Are there any accomplishments that I’m especially proud of?
  • What elements of my past do I regret?
  • How would I describe the relationships with my family and friends?
  • Do I have anyone in my life that I would consider an enemy?  If so, why do I consider them an enemy?
  • How would I describe my sexuality, and my sex life?
  • Am I happy with my current job or career?
  • Do I feel that my life has purpose or that I am well fulfilled?
  • What types of things about myself would I be least comfortable sharing with others?
  • Do I have undesired behaviors that I feel are out of my control?
  • Do I feel the need to escape myself or the world around me?  If so, what might I be doing in order to escape? (e.g. abusing drugs, abusing television or other virtual-reality media, anti-social seclusion, etc.)
  • Might I be suffering from some form of cognitive dissonance as a result of me having conflicting values or beliefs?  Are there any beliefs which I’ve become deeply invested in that I may now doubt to be true, or that may be incompatible with my other beliefs?  If the answer is “no”, then I would ask:  Are there any beliefs that I’ve become deeply invested in, and if so, in what ways could they be threatened?

These questions are intended to probe one’s self beneath the surface.  By asking ourselves specific questions like this, particularly in relation to our dream contents, I believe that we can gain access to the unconscious simply by addressing concepts and potential issues that are often left out-of-sight and out-of-mind.  How we answer these questions isn’t as important as asking them in the first place.  We may deny that we have problems or personal weaknesses as we answer these questions, but asking them will continue to bring our attention to these subjects and elements of ourselves that we often take for granted or prefer not to think about.  In doing so, I believe one will at least have a better chance at accessing the unconscious than if they hadn’t made an attempt at all.

In terms of answering the various questions listed above, the analysis will likely be more useful if you go over the questions a second time, and reverse or change your previous instinctual answer while trying to justify the reversal or change.  This exercise will force you to think about yourself in new ways that might improve access to the unconscious, since you are effectively minimizing the barriers brought on through rationalization and denial.
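The procedure described above (a pass through the questions for the instinctive answers, then a second pass arguing the opposite) could be sketched as a simple self-interview script. The abridged question list and the helper names below are my own invention, purely for illustration:

```python
# A hypothetical sketch of the two-pass self-interview: instinctive answers
# first, then a reversed pass to work around rationalization and denial.
questions = [
    "Are there currently any conflicts in my life?",
    "Have I been feeling guilty about anything lately?",
    "What do I dislike about myself, and why?",
    "Are there recurring elements across different dreams?",
]

def interview(answer_fn):
    """Collect (question, first answer, reversed answer) triples."""
    record = []
    for q in questions:
        first = answer_fn(q)
        # Second pass: reverse or change the instinctive answer and try to
        # justify the reversal.
        reversed_take = answer_fn("Now argue the opposite: " + q)
        record.append((q, first, reversed_take))
    return record

# With interactive use, answer_fn could simply be input(); here a canned
# responder keeps the sketch self-contained.
log = interview(lambda q: "(my answer to: " + q + ")")
print(len(log))  # prints 4: one triple per question
```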

Final Thoughts

So as we can see, while the unconscious mind may seem inaccessible, there appear to be at least two ways in which we can gain some access.  Inter-subjective communication allows us to access the “I” via the “You”, and mutual projection gives both speaker and listener access to their own unconscious.  Dreams, and their analysis, appear to be yet another method for accessing the unconscious.  Since the dreaming brain appears to be in a semi-conscious state, it may be capable of cognitive processes that aren’t saturated by sensory input from the outside world.  This reduction in sensory input may give the brain more opportunities to “fill in the blanks” (so to speak), providing a platform for unconscious expression.  In short, there appear to be at least a few effective methods for accessing the unconscious self.  The bigger question is:  Are we willing to face this hidden side of ourselves?

Knowledge and the “Brain in a Vat” scenario

Many philosophers have stumbled across the possibility of the BIV (brain in a vat) scenario — that is, that our experience of reality could be the result of our brains sitting in a vat, connected to electrodes controlled by some mad scientist, thus creating the illusion of a physical body, universe, etc.  Many variations of this idea have been presented, most appearing similar to or based on the “Evil Demon” proposed by Descartes in his Meditations on First Philosophy back in 1641.

There is no way to prove or disprove this notion, because all of our “knowledge” (if any exists at all, depending on how we define it) would be limited to the mad scientist’s program of sensory inputs.  What exactly is “knowledge”?  If we can decide on what qualifies as “knowledge”, then another question I’ve wondered about is:

How do we modify or shape the definition of “knowledge” to make it compatible in the BIV scenario? 

Many might say that we need an agreed-upon definition of “knowledge” in the first place before we modify it for some crazy hypothetical scenario.  I understand that there is disagreement within the philosophical/epistemological community about how to define “knowledge”.  To simplify matters, let’s start with Plato’s well-known (albeit controversial) definition, which basically says:

Knowledge = Justified True Belief

It would help if Plato had spelled out an agreed-upon convention for what counts as “justified”, “true”, and “belief”.  It seems reasonable to assume that a “belief” is any proposition or premise an individual holds to be true.  We can also assume that “true” implies that the belief is factual — that is, that the “belief” has been confirmed to correspond with “reality”.  The term “justified” is a bit trickier: concepts such as externalism vs. internalism, probability, and evidentialism arise whenever theories of justification are discussed.  It is the concepts of “true” and “justified” that I think are of primary importance (as I think most people, if not everyone, can agree on what a “belief” is) — and how they relate to the BIV scenario (or any similar variation of it) in our quest to properly define “knowledge”.
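Under those working assumptions, Plato’s analysis amounts to a simple conjunction of three conditions. The “facts” set standing in for correspondence with reality, and the agent’s beliefs and justifications below, are invented placeholders rather than any real theory of truth or justification:

```python
# A minimal sketch of the justified-true-belief analysis as a predicate.
facts = {"water boils at 100 C at sea level"}  # stand-in for "reality"

agent_beliefs = {
    # proposition -> is the agent's belief justified (on whatever theory
    # of justification one adopts)?
    "water boils at 100 C at sea level": True,
    "the moon is made of cheese": True,  # justified (somehow) but false
}

def knows(beliefs, facts, proposition):
    believed = proposition in beliefs            # belief
    justified = beliefs.get(proposition, False)  # justification
    true = proposition in facts                  # truth (correspondence)
    return believed and justified and true

print(knows(agent_beliefs, facts, "water boils at 100 C at sea level"))  # True
print(knows(agent_beliefs, facts, "the moon is made of cheese"))         # False
```

Note that the entire weight of the analysis rests on how the `facts` set is fixed, which is exactly what the BIV scenario calls into question.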

Is this definition creation/modification even something that can be accomplished?  Or will any modification necessary to make it compatible in the BIV scenario completely strip the word “knowledge” of all practical meaning?  If the modification does strip the word of most of its meaning, then does that say anything profound about what “knowledge” really is?  I think that, if anything, it may allow us to step back and look at the concept of “knowledge” in a different way.  My contention is that it will demonstrate that knowledge is really just another subjective concept which we can claim to possess based on nothing more than an understanding of and adherence to definitions, and the definitions are based on hierarchical patterns observed in nature.  This is in contrast to the current assumption that knowledge is some form of objective truth that we strive to obtain with a deeper meaning that may in fact (but not necessarily) be metaphysical in some way.

How does the concept of “true” differ in the “real world” vs. the BIV’s “virtual world”?

Can any beliefs acquired within the BIV scenario be considered “true”?  If not, then we would have to consider the possibility that nothing “true” exists.  This isn’t necessarily the end of the world, although for most people it may be a pill that is a bit too hard to swallow.  Likewise, if we consider the possibility that at least one belief in our reality is true (I’m betting that most people believe this to be the case), then we must accept that “true beliefs” may exist within the BIV scenario.  The bottom line is that we can never know whether we are BIVs or not.  So if we believe that true beliefs exist, we should modify the definition to apply within the BIV scenario.  Perhaps we can say that something is “true” only insofar as it corresponds to “our reality”.  If there are multiple levels of reality (e.g. our reality as well as that of the mad scientist who is simulating our reality), then perhaps there are multiple levels of truth.  If so, then either some levels of truth are more or less “true” than others, or they all contain an equally valid truth.  The latter possibility seems most reasonable, as the former seems to negate the strict meaning of “true”: something is either true, false, or neither (if a third option exists), and there is no state lying in between “true” and “false”.  By this rationale, our concept of what is true and what isn’t is as valid as the mad scientist’s.  Perhaps the mad scientist would disagree, but based on how we define true and false, our claim of “equally valid truth” seems reasonable.  This may illustrate (to some) that our “true beliefs” do not hold any fundamentally significant value, as they are only “true beliefs” based on our definitions of particular words and concepts (again, ultimately based on hierarchical patterns observed in nature).  If I define something to be “true”, then it is true.
If I point to what looks like a tree (even if it’s a hologram and I don’t know this) and say “I’m defining that to be a ‘tree’”, then it is in fact a tree and I can refer to it as such.  Likewise, if I define its location to be called “there”, then it is in fact there.  Based on these definitions (among others), I think it would be a completely true statement (and thus a true belief) to say “I KNOW that there is a tree over there”.  However, if I failed to define (in detail) what a “tree” was, or if my definition of “tree” somehow excluded “a hologram of a tree” (because the definition was too specific), then I could easily make errors in terms of what I inferred to know or not know.  This may seem obvious, but I believe it is extremely important to establish these foundations before making any claims about knowledge.

Turning our attention to the concept of “justified”, one could argue that in this case I wouldn’t need any justification for the aforementioned belief to be true, as my belief is true by definition (i.e. there isn’t anything that could make this particular belief false other than changing the definitions of “there”, “tree”, etc.).  Or we could say that the justification is the fact that I’ve defined the terms such that I am correct.  In my opinion this would imply that I have 100% justification for this belief, which I don’t think can be achieved in any way other than through this definitional method.  In every other case we will be less than 100% justified, and there will only be an arbitrary way of grading or quantifying that justification (100% is not arbitrary at all, but to say that we are “50% justified” or “23% justified”, or any value other than 100%, would be arbitrary and thus meaningless).  I will add, however, that it is possible for one belief to be MORE or LESS justified than another, even though we can never quantify how much more or less justified it is, nor will we ALWAYS be able to say that one belief is more or less justified than another.  Perhaps most importantly for the role of “justification” in knowledge, we must recognize that our most common means of justification is empiricism, the scientific method, etc.  That is, our justification seems to be based inductively on the hierarchical patterns we observe in nature.  If certain patterns are observed and are repeatable, then the beliefs ascertained from them will not only be pragmatically useful, but also the most justified.

When we undertake the task of defining knowledge, or more specifically of analyzing Plato’s definition, I believe it is useful to play devil’s advocate and recognize that scenarios can be devised whereby the requirements of “justified”, “true”, and “belief” have all been met and yet no “true knowledge” exists.  These hypothetical scenarios are represented by the well-known “Gettier Problem”.  Without going into too much detail, let’s examine a possible scenario to see where the problem arises.

Let’s imagine that I am in my living room and I see my wife sitting on the couch, but it just so happens that what I’m looking at is actually nothing more than a perfect hologram of my wife.  However, my wife IS actually in the room (she’s hiding behind the couch, even though I can’t see her).  Now if I were to say “My wife is in the living room”, technically my belief would be “justified” to some degree (I do see her on the couch, after all, and I don’t know that what I’m looking at is actually a hologram).  My belief would also be “true”, because she is actually in the living room (hiding behind the couch).  This could be taken as fulfilling Plato’s requirements for knowledge: belief, truth, and justification.  However, most people would argue that I don’t actually possess any knowledge in this example, because it is a COINCIDENCE that my belief is true.  To be more specific, its “true-ness” is not related to my justification (i.e. that I see her on the couch); its “true-ness” is based on the coincidence of her actual presence behind the couch, which meets the requirement of “being in the living room” (albeit without my knowing it).

How do we get around this dilemma, and can we do so while at least partially (if not completely) preserving Plato’s requirements?  One way to solve it would be to remove the element of COINCIDENCE entirely.  If we change our definition to:

Knowledge = Justified Non-Coincidentally True Belief

then we have solved the problem presented by Gettier.  Removing this element of coincidence in order to define knowledge properly (or at least more appropriately) has been accomplished in one way or another by various responses to the “Gettier Problem”, including Goldman’s “causal theory”, Lehrer and Paxson’s “defeasibility condition”, and many others which I won’t discuss in detail here, though the links provided give a basic summary of some of those responses.  Overall, I think non-coincidence is an important and necessary distinction to make in order to properly define knowledge.
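The difference between Plato’s definition and the modified one can be sketched as a toy boolean model.  This is purely illustrative: the function names, and the decision to encode the hologram scenario as hand-set true/false predicates (including treating “coincidental” as a given rather than something we can detect), are my own assumptions, not part of any formal epistemology.

```python
# Toy model of Plato's definition vs. the modified one.
# The hologram/living-room scenario is encoded by hand as boolean
# predicates; all names here are illustrative assumptions.

def justified_true_belief(belief, true, justified):
    """Plato: knowledge = justified true belief."""
    return belief and true and justified

def knowledge(belief, true, justified, coincidental):
    """Modified: knowledge = justified NON-coincidentally true belief."""
    return belief and true and justified and not coincidental

# Gettier-style case: I believe my wife is in the living room (belief),
# she really is there (true), and I seem to see her on the couch
# (justified) -- but what I see is a hologram, so the truth of the
# belief is coincidental.
case = {"belief": True, "true": True, "justified": True}

print(justified_true_belief(**case))         # True  (Plato: knowledge)
print(knowledge(**case, coincidental=True))  # False (modified: not knowledge)
```

The point of the extra parameter is simply that the same three inputs which satisfy Plato’s definition fail the modified one once the coincidence is flagged; nothing in the model explains how we would ever know a truth was coincidental, which is exactly what the various post-Gettier theories try to supply.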

Alternatively, another response to the “Gettier Problem” is to say that while it appears I didn’t actually possess true knowledge in the example above (because of the coincidence), it could be because I never actually had sufficient justification to begin with, as I failed to consider the possibility that I was being deceived on any number of levels of varying significance (e.g. hologram, brain in a vat, etc.).  This lack of justification would mean that I’ve failed to satisfy the Platonic definition of knowledge, and thus no “Gettier Problem” would exist in that case.  However, I think it’s more pragmatic to trust our senses on a basic level and say that “seeing my wife in the living room” (regardless of whether what I was seeing was actually a hologram and thus wasn’t “real”) warrants a high level of justification for all practical purposes.  After all, Plato never seems to have specified how “justified” something needs to be before it meets the requirement necessary to call it knowledge.  Had Plato considered the possibility of the “brain in a vat” scenario (or a related scenario better befitting the century he lived in), would he have said that no justification exists and thus no knowledge exists?  I highly doubt it, since it seems like an exercise in futility to define “knowledge” at all if it were assumed to be non-existent or unattainable.  Lastly, if one’s justification for a belief is in fact based on hierarchical patterns observed in nature, then the more detailed or thoroughly defined those patterns are (i.e. the more scientific/empirical data, models, and explanatory power available from them), the more justified my beliefs will be and the more useful that knowledge will be.

In summary, I think that knowledge is nothing more than an attribute we possess solely based on an understanding of and adherence to definitions, and thus is entirely compatible with the BIV scenario.  Definitions do not require a metaphysical explanation, nor do they need to apply in any way to anything outside of our reality (i.e. metaphysically).  They are nothing more than human constructs.  As long as we are careful in how we define terms, adhere to those definitions without contradiction, and understand the definitions we create (through the use of hierarchical patterns observed in nature), then we have the ability to possess knowledge, regardless of whether we are nothing more than BIVs.  The only knowledge we can claim to know must be “justified non-coincidentally true belief” (to use my modified version of the Platonic definition, thus accounting for the “Gettier Problem”).  The most significant realization here is that the only knowledge we can PROVE to know is based on, and limited by, the definitions we create and our use of them within our claims of knowledge.  This proven knowledge would include, but NOT be limited to, Descartes’ famous claim “cogito ergo sum” (“I think, therefore I am”), which much of Western philosophy takes to be the foundation for all knowledge.  I disagree that this is the foundation for all knowledge, as I believe the foundation to be the understanding and proper use of definitions and nothing more.  Descartes had to define (or use a commonly accepted definition of) every word in that claim, including what “I” is, what “think” is, what “am” is, etc.  Did his definition of “I” include the unconscious portion of his self, or only the conscious portion?  I’m aware that Freudian theory didn’t exist at the time, but my question still stands.  Personally, I would define “I” as the culmination of both our conscious and unconscious minds.
Yet the only “thinking” we’re aware of is conscious (which I would call the “me”), so based on my definitions of “I” and “think”, I would disagree with his claim: my unconscious mind doesn’t “think” at all (and even if it did, the conscious mind doesn’t perceive it, so I wouldn’t say “I think” when referring to my unconscious mind).  When I am thinking, I am conscious of those thoughts.  So there are potential problems with Descartes’ assertion depending on the definitions he chose for the words used in it.  This further illustrates the importance of definitions on a foundational level, along with the hierarchical patterns observed in nature that led to those definitions.

My views on knowledge may seem unacceptable or incomplete, but these are the conclusions I’ve arrived at in order to avoid the “Gettier Problem” as well as to remain compatible with the BIV scenario (or any other metaphysical possibility that we have no way of falsifying).  While I’m aware that any discussion of the metaphysical is seen by some philosophers as completely and utterly pointless, my goal in making the definition of knowledge compatible with the BIV scenario is merely to illustrate that if knowledge exists in both “worlds” (and our world is nothing but a simulation), then the only knowledge we can prove has to be based on definitions, which are human constructs based on hierarchical patterns observed in our reality.