Technology, Mass-Culture, and the Prospects of Human Liberation

Cultural evolution is arguably just as fascinating as biological evolution (if not more so), with new ideas and behaviors stemming from the same kinds of natural selective pressures that lead to new species along with their novel morphologies and capacities.  And just as biological evolution, in a sense, takes off on its own, unbeknownst to the organisms it produces and independent of whatever intentions they may have (our species being the notable exception, given our awareness of evolutionary history and our ever-growing control over genetics), so too does cultural evolution take off on its own, with cultural changes brought about by a host of causal influences that we’re largely unaware of, despite our having some conscious influence over this vastly transformative process.

Alongside these cultural changes, human civilizations have striven to find new means of manipulating nature and to better predict the causal structure that makes up our reality.  One unfortunate consequence, as history has shown us, is that within any particular culture’s time and place, people have a decidedly biased overconfidence in the perceived truth of, or justification for, the status quo and their present world view (both on an individual and collective level).  Undoubtedly, the “group-think” or “herd mentality” that precipitates from our simply belonging to social groups often reinforces this overconfidence, and it does so in spite of the fact that what actually influences a mass of people to believe certain things or to behave as they do is highly contingent, unstable, and amenable to irrational forms of persuasion, including emotive, sensationalist propaganda that preys on our cognitive biases.

While we as a society have an unprecedented amount of control over the world around us, this type of control is perhaps best described as a system of bureaucratic organization and automated information processing that grants less and less individual autonomy, liberty, and basic freedom as it further expands its reach.  How much control do we as individuals really have over the information we have access to, and over the implied picture of reality that accompanies that information in the way it’s presented to us?  How much control do we have over the number of life trajectories and occupations made available to us, or over the educational and socioeconomic resources we have access to, given the particular family, culture, and geographical location we’re born and raised in?

As more layers of control have been added to our way of life and as certain criteria for organizational efficiency are continually implemented, our lives have become externally defined by increasing layers of abstraction, and our modes of existence are further separated cognitively and emotionally from an aesthetically and otherwise psychologically valuable sense of meaning and purpose.

While the Enlightenment slowly dragged our species, kicking and screaming, out of the theocratic, anti-intellectual epistemologies of the medieval period, the same forces that unearthed a long overdue appreciation for (and development of) rationality and technological progress unknowingly engendered a vulnerability to our misusing this newfound power.  There was an overcompensation in our use of rationality when it was deployed (justifiably) in response to the authoritarian dogmatism of Christianity and to the demonstrably unreliable nature of superstitious beliefs and of many of our intuitions.

This overcompensatory effect was in many ways accounted for, or anticipated, within the dialectical theory of historical development delineated by the German philosopher Georg Wilhelm Friedrich Hegel, and within some relevant reformulations of this dialectical process theorized by Karl Marx (among others).  Throughout history, we’ve had an endless clash of ideas whereby the prevailing worldviews are shown to be inadequate in some way, failing to account for some notable aspect of our perceived reality, or shown to be insufficient for meeting our basic psychological or socioeconomic needs.  With respect to any problem we’ve encountered, we search for a solution (or wait for one to present itself to us), and then we become overconfident in the efficacy of that solution.  Eventually we end up overgeneralizing its applicability, the pendulum swings too far the other way, and new problems are created in need of their own solutions, with this process seemingly repeating itself ad infinitum.

Despite the various woes of modernity explicated by the modern existentialist movement, it does seem that history, at least from a long-term perspective, has been moving in the right direction, not only with respect to our heightened capacity for improving our standard of living, but also in terms of the evolution of our social contracts and our conceptions of basic and universal human rights.  And we can plausibly reconcile this generally positive historical trend with the Hegelian view of historical development, and with the conflicts that arise in human history, by noting that we often seem to take one step backward followed by two steps forward in terms of our moral and epistemological progress.

Regardless of the progress we’ve made, we seem to be at a crucial point in our history where the same freedom-limiting authoritarian reach that plagued humanity (especially during the Middle Ages) has undergone a kind of morphogenesis, having been reinstantiated in a different form.  The elements of authoritarianism have been built into the very structure of mass-culture, with an anti-individualistic corporatocracy largely mediating the flow of information throughout this mass-culture, and also mediating its evolution over time as it becomes more globalized, interconnected, and cybernetically integrated into our day-to-day lives.

Coming back to the parallels in biology that I opened with, we can see human autonomy and our culture (ideas and behaviors) as having evolved in ways that are strikingly similar to the biological jump that life made long ago, when single-celled organisms eventually joined forces with one another to become multi-cellular.  This biological jump is analogous to the jump we made during the early onset of civilization, when we adopted an increasingly complex division of labor and occupational specialization, allowing us to survive many more environmental hurdles than ever before.  Once civilization began, the spread of culture became much more effective for transmitting ideas, both laterally within a culture and longitudinally from generation to generation, and this process was heavily enhanced by our adopting various forms of written language, allowing us to store and transmit information in much more robust ways, similar to genetic information storage and transfer via DNA, RNA, and proteins.

Although a single-celled bacterium or amoeba (for example) may be thought of as having more “autonomy” than a cell that is forcefully interconnected within a multi-cellular organism, the range of capacities available to single cells was far more limited before they made the symbiotic jump, just as humans living before the onset of civilization had more “freedom” (at least of a certain type) and yet had a minuscule number of possible life trajectories and experiences compared to a human living after the onset of culture.  But once multi-cellular organisms began to form a nervous system and eventually a brain, the entire collection of cells making up an organism became ultimately subservient to a centralized form of executive power, just as humans have become subservient to the executive authority of the state or government (along with various social pressures of conformity).

And just as the fate of each cell in a multi-cellular organism became predetermined and predictable by its particular set of available resources and the specific information it received from neighboring cells, so our own lives are becoming increasingly predetermined and predictable by the socioeconomic resources made available to us and the information we’re given, which constitutes our mass-culture.  We are slowly morphing from individual brains into something akin to individual neurons within a global brain of mass-consciousness and mass-culture, having our critical thinking skills and creative aspirations exchanged for rehearsed responses and docile expectations that maintain the status quo and continually transfer our autonomy to an oligarchic power structure.

We might wonder whether this shift has been inevitable, perhaps yet another example of a “fractal pattern” recapitulated in sociological form out of the very same free-floating rationales that biological evolution has been making use of for eons.  In any case, it’s critically important that we become aware of this change, so we can actively achieve and effectively maintain the liberties and the level of individual autonomy that we so highly cherish.  We ought to be thinking about how we can remain cognizant of, and critical toward, our culture and its products; how we can reconcile technological rationality and progress with a future world comprised of truly liberated individuals; and how we can transform our corporatocratic capitalist society into one based on a mixed economy, with a social safety net that even the wealthiest citizens would be content to live under, so as to maximize the actual creative freedom people have once their basic existential needs have been met.

Will unchecked capitalism, social media, mass media, and the false needs and epistemological bubbles they’re forming lead to our undoing and destruction?  Or will we find a way to rise above this technologically induced setback and take advantage of the opportunities it has afforded us, to make the world and our technology truly compatible with our human psychology?  Whatever the future holds for us, it will undoubtedly depend on how many of us begin to think critically about how we can seriously restructure our educational system and the way we disseminate information, how we can re-prioritize and better reflect on what our personal goals ought to be, and how we ought to identify ourselves as free and unique individuals.

It’s Time For Some Philosophical Investigations

Ludwig Wittgenstein’s Philosophical Investigations is a nice piece of work in which he attempts to explain his views on language and the consequences of those views for various subjects like logic, semantics, cognition, and psychology.  I’ve mentioned some of his views very briefly in a couple of earlier posts, but I wanted to delve into his work in a little more depth here and comment on what strikes me as most interesting.  Lately, I’ve been looking back at some of the books I’ve read from various philosophers and have been wanting to revisit them so I can explore them in more detail and share how they connect to some of my own thoughts.  Alright…let’s begin.

Language, Meaning, & Their Probabilistic Attributes

He opens his Philosophical Investigations with a quote from St. Augustine’s Confessions that describes how a person learns a language.  St. Augustine believed that this process involved simply learning the names of objects (for example, by someone else pointing to the objects so named) and then stringing them together into sentences, and Wittgenstein points out that this is true to some trivial degree but that it overlooks a much more fundamental relationship between language and the world.  For Wittgenstein, the meaning of a word cannot simply be attached to an object the way a name can.  The meaning of a word or concept has much more of a fuzzy boundary, as it depends on a breadth of contexts and associations with other concepts.  He analogizes this plurality in the meanings of words to the resemblance between members of a family.  While there may be some resemblance between different uses of a word or concept, we aren’t able to formulate a strict definition that fully describes this resemblance.

One problem then, especially within philosophy, is that many people assume that the meaning of a word or concept is fixed with sharp boundaries (just like the fixed structure of the words themselves).  Wittgenstein wants to disabuse people of this false notion (much as Nietzsche tried to do before him) so that they can stop misusing language, as he believed that this misuse was the cause of many (if not all) of the major problems that had cropped up in philosophy over the centuries, particularly in metaphysics.  Since meaning is actually somewhat fluid and can’t be accounted for by any fixed structure, Wittgenstein thinks that whatever meaning we can attach to these words is ultimately going to be determined by how those words are used.  Since he ties meaning to use, and since this use is something occurring within our social forms of life, it has an inextricably external character.  Thus, the only way to determine whether someone else has a particular understanding of a word or concept is through their behavior, in response to or in association with the use of the word(s) in question.  This is especially important in the case of ambiguous sentences, which Wittgenstein explores to some degree.

Probabilistic Shared Understanding

Some of what Wittgenstein is trying to point out here is what I like to refer to as the inherently probabilistic attributes of language.  And language seems to me to be probabilistic for a few different reasons, beyond what Wittgenstein himself addresses.  First, there is no guarantee that what one person means by a word or concept exactly matches the meaning from another person’s point of view, but at the very least there is almost always going to be some amount of semantic overlap (possibly 100% in some cases) between the two individuals’ intended meanings, and so there is going to be some probability that the speaker and the listener do in fact share a complete understanding.  It seems reasonable to argue that simpler concepts will have a higher probability of complete semantic overlap, whereas more complex concepts are more likely to leave a gap in that shared understanding.  And I think this is true even if we can’t actually calculate what any of these probabilities are.
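
To make that overlap idea a little more concrete, here’s a toy sketch of my own (nothing from Wittgenstein, and with made-up “features” standing in for whatever actually constitutes a person’s understanding of a word) that treats each speaker’s concept as a set of associated features and uses the overlap between the two sets as a crude stand-in for the probability of a shared understanding:

```python
def semantic_overlap(features_a, features_b):
    """Jaccard overlap between two speakers' feature sets for the same word --
    a crude stand-in for how likely they are to share an understanding."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b)

# A simple concept: few associated features, so complete overlap comes easily.
print(semantic_overlap({"quantity", "one more than three"},
                       {"quantity", "one more than three"}))                # 1.0

# A complex concept: many features, so gaps in shared understanding are likely.
print(semantic_overlap({"emotion", "attachment", "commitment", "desire"},
                       {"emotion", "attachment", "sacrifice", "family"}))   # ~0.33
```

Of course, the real constituents of a concept aren’t discrete or enumerable like this, which is exactly why these probabilities can’t actually be calculated; the sketch only illustrates why simpler concepts leave less room for mismatch than complex ones do.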

Now my use of the word meaning here differs from Wittgenstein’s because I am referring to something that is not exclusively shared by all parties involved and I am pointing to something that is internal (a subjective understanding of a word) rather than external (the shared use of a word).  But I think this move is necessary if we are to capture all of the attributes that people explicitly or implicitly refer to with a concept like meaning.  It seems better to compromise with Wittgenstein’s thinking and refer to the meaning of a word as a form of understanding that is intimately connected with its use, but which involves elements that are not exclusively external.

We can justify this version of meaning through an example.  If I help teach you how to ride a bike and explain that this activity is called biking or to bike, then we can use Wittgenstein’s conception of meaning and it will likely account for our shared understanding, and so I have no qualms about that, and I’d agree with Wittgenstein that this is perhaps the most important function of language.  But it may be the case that you have an understanding that biking is an activity that can only happen on Tuesdays, because that happened to be the day that I helped teach you how to ride a bike.  Though I never intended for you to understand biking in this way, there was no immediate way for me to infer that you had this misunderstanding on the day I was teaching you.  I could only learn of this fact if you explicitly explained your understanding of the term to me in enough detail, or if I asked you additional questions, like whether or not you’d like to bike on a Wednesday (for example), with you answering “I don’t know how to do that, as that doesn’t make any sense to me.”  Wittgenstein doesn’t account for this gap in understanding in his conception of meaning, and I think this makes for a far less useful conception.

Now I think that Wittgenstein is still right in the sense that the only way to determine someone else’s understanding (or lack thereof) of a word is through their behavior, but due to chance as well as where our attention is directed at any given moment, we may never see the right kinds of behavior to rule out any number of possible misunderstandings, and so we’re apt to just assume that these misunderstandings don’t exist because language hides them to varying degrees.  But they can and in some cases do exist, and this is why I prefer a conception of meaning that takes these misunderstandings into account.  So I think it’s useful to see language as probabilistic in the sense that there is some probability of a complete shared understanding underlying the use of a word, and thus there is a converse probability of some degree of misunderstanding.

Language & Meaning as Probabilistic Associations Between Causal Relations

A second way in which language is probabilistic stems from the fact that the unique meanings associated with any particular use of a word or concept, as understood by an individual, are derived from probabilistic associations between various inferred causal relations.  And I believe that this is the underlying cause of most of the problems that Wittgenstein was trying to address in this book.  He may not have been thinking about the problem in this way, but it can account for the fuzzy boundary problem associated with trying to define the meaning of words, since this probabilistic structure underlies our experiences, our understanding of the world, and our use of language in such a way that it can’t be represented by static, sharp definitions.  When a person is learning a concept, like redness, they experience a number of observations and infer what is common to all of those experiences, and then they extract a particular subset of what is common and associate it with a word like red or redness (as opposed to another commonality like objectness, or roundness, or what-have-you).  But in most cases, separate experiences of redness are going to be different instantiations of redness with different hues, textures, shapes, etc., which means that redness gets associated with a large range of different qualia.

If you come across a new quale that seems to more closely match previous experiences associated with redness rather than orangeness (for example), I would argue that this is because the brain has assigned a higher probability to that quale being an instance of redness as opposed to, say, orangeness.  And the brain may very well test the hypothesis of the quale matching the concept of redness against the concept of orangeness, and depending on your previous experiences of both, and the present context of the experience, your brain may assign a higher probability to orangeness instead.  Perhaps if a red-orange colored object is mixed in with a bunch of unambiguously orange-colored objects, it will be perceived as a shade of orange (to match it with the rest of the set), but if the case were reversed and it were mixed in with a bunch of unambiguously red-colored objects, it would be perceived as a shade of red instead.
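
A minimal sketch of this kind of hypothesis testing might look like the following (a toy model of my own, with made-up hue values and Gaussian likelihoods standing in for whatever the brain actually does); the point is simply that the very same ambiguous input gets classified differently depending on the prior supplied by the surrounding context:

```python
import math

def gaussian(x, mean, std):
    """Likelihood of observing hue x under a given color hypothesis."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def posterior_red(hue, prior_red):
    """P(red | hue) in a toy model where 'red' and 'orange' are the only hypotheses."""
    like_red = gaussian(hue, mean=10.0, std=8.0)     # hue around 10 = typical red
    like_orange = gaussian(hue, mean=30.0, std=8.0)  # hue around 30 = typical orange
    evidence = like_red * prior_red + like_orange * (1 - prior_red)
    return like_red * prior_red / evidence

ambiguous_hue = 20.0  # a borderline red-orange hue, equidistant from both prototypes

# Context sets the prior: surrounded by red objects vs. surrounded by orange objects.
print(posterior_red(ambiguous_hue, prior_red=0.8))  # ~0.8: perceived as red
print(posterior_red(ambiguous_hue, prior_red=0.2))  # ~0.2: perceived as orange
```

With an unambiguous hue (say 8 or 32), the likelihood term dominates and the prior barely matters, which is the analogue of a clearly red object being seen as red no matter what surrounds it.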

Since our perception of the world depends on context, the meanings we assign to words or concepts also depend on context, not only in the sense of choosing a different use of a word (as Wittgenstein argues) in some language game or other, but also in the sense of perceiving the same incoming sensory information as conceptually different qualia (as in the aforementioned case of a red-orange colored object).  In that case, we weren’t intentionally using red or orange in a different way but rather were assigning one word or the other to the exact same sensory information (with respect to the red-orange object), depending on what else was happening in the scene surrounding that subset of sensory information.  To me, this highlights how meaning can be fluid in multiple ways, with some of that fluidity stemming from our conscious intentions and some of it from unintentional forces at play involving our prior expectations within some context, which directly modify our perceived experience.

This can also be seen through Wittgenstein’s example of what he calls a duckrabbit, an ambiguous image that can be perceived as a duck or a rabbit.  I’ve taken the liberty of inserting this image here along with a copy of it which has been rotated in order to more strongly invoke the perception of a rabbit.  The first image no doubt looks more like a duck and the second image, more like a rabbit.

Now Wittgenstein says that when one is looking at the duckrabbit and sees a rabbit, they aren’t interpreting the picture as a rabbit but are simply reporting what they see.  But in the case where a person sees a duck first and then later sees a rabbit, Wittgenstein isn’t sure what to make of this.  However, he claims to be sure that whatever it is, it can’t be the case that the external world stays the same while an internal cognitive change takes place.  Wittgenstein was incorrect on this point, because the external world doesn’t change (in any relevant sense) whether we’re seeing the duck or seeing the rabbit.  Furthermore, he never demonstrates why two different perceptions would require a change in the external world.  The fact of the matter is, you can stare at this static picture and ask yourself to see a duck or to see a rabbit and it will affect your perception accordingly.  This is partially accomplished by mentally rotating the image in your imagination and seeing whether that changes how well it matches one conception or the other, and since it matches both conceptions to a high degree, you can easily perceive it one way or the other.  Your brain is simply processing competing hypotheses to account for the incoming sensory information, and the top-down predictions of rabbitness or duckness (which you’ve acquired over past experiences) actually change the way you perceive it, with no change required in the external world (despite Wittgenstein’s assertion to the contrary).

To give yet another illustration of the probabilistic nature of language, just imagine the head of a bald man and ask yourself: if you were to add one hair at a time to this bald man’s head, at what point does he lose the property of baldness?  If hairs were slowly added at random, and you could simply say “Stop!  Now he’s no longer bald!” at some particular time, there’s no doubt in my mind that if this procedure were repeated (even if the hairs were added non-randomly), you would say “Stop!  Now he’s no longer bald!” at a different point in the transition.  Similarly, if you were looking at a light that was changing color from red to orange, and were asked to say when the color had changed to orange, you would pick a point in the transition that falls within some margin of error, but it wouldn’t be an exact, repeatable point in the transition.  We could do this thought experiment with all sorts of concepts that are attached to words, like cat and dog: for example, use a computer graphics program to seamlessly morph a picture of a cat into a picture of a dog and ask at what point the cat “turned into” a dog.  The answer is going to be based on a probability of coincident features that you detect, which can vary over time.  Here’s a series of pictures showing a chimpanzee morphing into Bill Clinton to better illustrate this point:

At what point do we stop seeing a picture of a chimpanzee and start seeing a picture of something else?  When do we first see Bill Clinton?  What if I expanded this series of 15 images into a series of 1000 images so that this transition happened even more gradually?  You would be highly unlikely to pick the exact same point in the transition two times in a row if the images weren’t numbered or arranged in a grid.  We can analogize this phenomenon with an ongoing problem in science, known as the species problem.  This problem can be described as the inherent difficulty of defining exactly what a species is, which is necessary if one wants to determine if and when one species evolves into another.  The problem arises because the evolutionary processes giving rise to new species are relatively slow and continuous, whereas sorting those organisms into sharply defined categories involves eliminating that generational continuity and replacing it with discrete steps.

And we can see this effect in the series of images above, where each picture could represent some large number of generations in an evolutionary timeline, with each picture/organism looking roughly like the “parent” or “child” of the picture/organism adjacent to it.  Despite this continuity, if we look at the first picture and the last one, they look like pictures of distinct species.  So if we want to categorize the first and last pictures as distinct species, then we create a problem when trying to account for every picture/organism that lies in between that transition.  Similarly, words take on the appearance of strict categorization (of meaning) when in actuality any underlying meaning attached to them is probabilistic and dynamic.  And as Wittgenstein pointed out, this makes it more appropriate to consider meaning as use, so that the probabilistic and dynamic attributes of meaning aren’t lost.
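
Incidentally, the non-repeatability of the “say when it turns orange” judgment mentioned a couple of paragraphs back is easy to mimic with a toy simulation (again, my own illustration, with an arbitrary Gaussian spread standing in for the observer’s wavering internal boundary); each run of the experiment tends to report a slightly different switch point even though the underlying transition never changes:

```python
import random

def switch_point(steps=100, mean_threshold=50, spread=6.0, seed=None):
    """One run of the 'say when the light turns orange' experiment: the
    observer's internal red/orange boundary isn't fixed, so each run draws a
    slightly different threshold and reports the first step that crosses it."""
    rng = random.Random(seed)
    threshold = rng.gauss(mean_threshold, spread)
    for step in range(steps):
        if step >= threshold:
            return step
    return steps

# Ten repetitions: the reported switch points cluster near step 50,
# but there is no single exact, repeatable point in the transition.
print([switch_point(seed=s) for s in range(10)])
```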

Now you may think you can get around this problem of fluidity or fuzzy boundaries with concepts that are simpler and more abstract, like the concept of a particular quantity (say, a quantity of four objects) or other concepts in mathematics.  But in order to learn these concepts in the first place, like quantity, and then associate particular instances of it with a word, like four, one had to be presented with a number of experiences and infer what was common to all of those experiences (as was the case with redness mentioned earlier).  And this inference (I would argue) involves a probabilistic process as well; it’s just that the resulting probability of our inferring particular experiences as an instance of four objects is incredibly high and therefore repeatable and relatively unambiguous.  That kind of inference is thus likely to be sustained no matter what the context, and it is likely to be shared by two individuals with 100% semantic overlap (i.e. it’s almost certain that what I mean by four is exactly what you mean by four, even though this is almost certainly not the case for a concept like love or consciousness).  This makes mathematical concepts qualitatively different from other concepts (especially those that are more complex or that more closely map onto reality), but it doesn’t negate their having a probabilistic attribute or foundation.

Looking at the Big Picture

Though this discussion of language and meaning is not an exhaustive analysis of Wittgenstein’s Philosophical Investigations, it represents an analysis of the main theme present throughout.  His main point was to shed light on the disparity between how we often think of language and how we actually use it.  When we stray from the way language is actually used in our everyday lives, in one form of social life or another, and instead misuse it, as often happens in philosophy, we create all sorts of problems and unwarranted conclusions.  He also wants his readers to realize that the ultimate goal of philosophy should not be to construct metaphysical theories and deep explanations underlying everyday phenomena, since these are often born out of unwarranted generalizations and other assumptions stemming from how our grammar is structured.  Instead we ought to subdue these temptations to generalize and to be dogmatic, and use philosophy as a kind of therapeutic tool to keep our abstract thinking in check and to better understand ourselves and the world we live in.

Although I disagree with some of Wittgenstein’s claims about cognition (in terms of how intimately it is connected to the external world) and take some issue with his arguably less useful conception of meaning, he makes a lot of sense overall.  Wittgenstein was clearly hitting upon a real difference between the way actual causal relations in our experience are structured and how those relations are represented in language.  Personally, I think that work within philosophy is moving in the right direction if the contributions made therein lead us to make more successful predictions about the causal structure of the world.  And I believe this to be so even if this progress includes generalizations that may not be exactly right.  As long as we can structure our thinking to make more successful predictions, then we’re moving forward as far as I’m concerned.  In any case, I liked the book overall and thought that the interlocutory style gave the reader a nice break from the typical form seen in philosophical argumentation.  I highly recommend it!

Some Thoughts on “Fear & Trembling”

I’ve been meaning to write this post for quite some time, but haven’t had the opportunity until now, so here it goes.  I want to explore some of Kierkegaard’s philosophical claims or themes in his book Fear and Trembling.  Kierkegaard regarded himself as a Christian and so there are a lot of literary themes revolving around faith and the religious life, but he also centers a lot of his philosophy around the subjective individual, all of which I’d like to look at in more detail.

In Fear and Trembling, we hear about the story found in Genesis (Ch. 22) where Abraham attempts to sacrifice his own beloved son Isaac after hearing God command him to do so.  For those unfamiliar with this biblical story: after they journey out to Mount Moriah, Abraham binds Isaac to an altar, and as he draws his knife to slit the throat of his beloved son, an angel appears just in time and tells him to stop, “for now I know that you fear God”.  Then Abraham sees a ram nearby and sacrifices it instead of his son.  Kierkegaard uses this story in various ways, and considers four alternative versions of it (and their consequences), to explicate the concept of faith as admirable though fundamentally incomprehensible and unintelligible.

He begins his book by telling us about a man who has deeply admired this story of Abraham ever since he first heard it as a child, with this admiration for it growing stronger over time while understanding the story less and less.  The man considers four alternative versions of the story to try and better understand Abraham and how he did what he did, but never manages to obtain this understanding.

I’d like to point out here that increased confusion would be expected if the man has undergone moral and intellectual growth during his journey from childhood to adulthood.  We tend to be more impulsive, irrational, and passionate as children, with less regard for any ethical framework to live by.  And sure enough, Kierkegaard even mentions the importance of passion in making a leap of faith.  Nevertheless, as we continue to mature and accumulate life experience, we tend to develop some control over our passions and emotions, we build up our intellect and rationality, and we further develop an ethic, with many ethical behaviors becoming habituated if cultivated over time.  If a person cultivates moral virtues like compassion, honesty, and reasonableness, then it would be expected that they’d find Abraham’s intended act of murder (let alone filicide) repugnant.  But regardless of the reasons for the man’s lack of understanding, he admires the story more and more, likely because it reveres Abraham as the father of faith and portrays faith itself as a most honorable virtue.

Kierkegaard’s main point in Fear and Trembling is that one has to suspend their relation to the ethical (contrary to Kant and Hegel), in order to make any leap of faith, and that there’s no rational decision making process involved.  And so it seems clear that Kierkegaard knows that what Abraham did in this story was entirely unethical (attempting to kill an innocent child) in at least one sense of the word ethical, but he believes nevertheless that this doesn’t matter.

To see where he’s coming from, we need to understand Kierkegaard’s idea that there are basically three ways or stages of living, namely the aesthetic, the ethical, and the religious.  The aesthetic life is that of sensuous or felt experience, infinite potentiality through imagination, hiddenness or privacy, and an overarching egotism focused on the individual.  The ethical life supersedes or transcends this aesthetic way of life by relating one to “the universal”, that is, to the common good of all people, to social contracts, and to the betterment of others over oneself.  The ethical life, according to Kierkegaard, also consists of public disclosure or transparency.  Finally, the religious life supersedes the ethical (and thus also supersedes the aesthetic) but shares some characteristics of both the aesthetic and the ethical.

The religious, like the aesthetic, operates on the level of the individual, but with the added component of the individual having a direct relation to God.  And just like the ethical, the religious appeals to a conception of good and evil behavior, but God is the arbiter in this way of life rather than human beings or their nature.  Thus the sphere of ethics that Abraham might normally commit himself to in other cases is thought to be superseded by the religious sphere, the sphere of faith.  Within this sphere of faith, Abraham assumes that anything that God commands is Abraham’s absolute duty to uphold, and he also has faith that this will lead to the best ends.  This, of course, is a form of divine command theory, which is actually an ethical and meta-ethical theory.  Although Kierkegaard claims that the religious is somehow above the ethical, it is for the most part just another way of living that involves another ethical principle.  In this case, the ethical principle is to do whatever God commands (even if these commands are inconsistent or morally repugnant from a human perspective), rather than to abide by our moral conscience or some other set of moral rules, social mores, or any standards based on human judgment, human nature, etc.

It appears that the primary distinction between the ethical and the religious is the leap of faith that is made in the latter stage of living which involves an act performed “in virtue of the absurd”.  For example, Abraham’s faith in God was really a faith that God wouldn’t actually make him kill his son Isaac.  Had Abraham been lacking in this particular faith, Kierkegaard seems to argue that Abraham’s conscience and moral perspective (which includes “the universal”) would never have allowed him to do what he did.  Thus, Abraham’s faith, according to Kierkegaard, allowed him to (at least temporarily) suspend the ethical in virtue of the absurd notion that somehow the ethical would be maintained in the end.  In other words, Abraham thought that he could obey God’s command, even if this command was prima facie immoral, because he had faith that God wouldn’t actually make Abraham perform an unethical act.

I find it interesting that this particular function or instantiation of faith, as outlined by Kierkegaard, makes for an unusual interpretation of divine command theory.  If divine command theory attempts to define good or moral behavior as that which God commands, and if a leap of faith (such as that which Abraham took) can involve a belief that the end result of an unconscionable commandment is actually its negation or retraction, then a leap of faith such as that taken by Abraham would serve to contradict divine command theory to at least some degree.  It would seem that Kierkegaard wants to believe in the basic premise of divine command theory and therefore have an absolute duty to obey whatever God commands, and yet he also wants to believe that if this command goes against a human moral system or the human conscience, it will not end up doing so when one goes to carry out what has actually been commanded of them.  This seems to me to be an unusual pair of beliefs for one to hold simultaneously, for divine command theory allows for Abraham to have actually carried out the murder of his son (with no angel stopping him at the last second), and this heinous act would have been considered a moral one under such an awful theory.  And yet, Abraham had faith that this divine command would somehow be nullified and therefore reconciled with his own conscience and relation to the universal.

Kierkegaard has something to say about beliefs, and how they differ from faith-driven dispositions, and it’s worth noting this since most of us use the term “belief” as including that which one has faith in.  For Kierkegaard, belief implies that one is assured of its truth in some way, whereas faith requires one to accept the possibility that what they have faith in could be proven wrong.  Thus, it wasn’t enough for Abraham to believe in an absolute duty to obey whatever God commanded of him, because that would have simply been a case of obedience, and not faith.  Instead, Abraham also had to have faith that God would let Abraham spare his son Isaac, while accepting the possibility that he may be proven wrong and end up having to kill his son after all.  As such, Kierkegaard wouldn’t accept the way the term “faith” is often used in modern religious parlance.  Religious practitioners often say that they have faith in something and yet “know it to be true”, “know it for certain”, “know it will happen”, etc.  But if Abraham truly believed (let alone knew for certain) that God wouldn’t make him kill Isaac, then God’s command wouldn’t have served as any true test of faith.  So while Abraham may have believed that he had to kill his son, he also had faith that his son wouldn’t die, hence making a leap of faith in virtue of the absurd.

This distinction between belief and faith also seems to highlight Kierkegaard’s belief in some kind of prophetic consequentialist ethical framework.  Whereas most Christians tend to side with a Kantian deontological ethical system, Kierkegaard points out that ethical systems have rules which are meant to promote the well-being of large groups of people.  And since humans lack the ability to see far into the future, it’s possible that some rules made under this kind of ignorance may actually lead to an end that harms twenty people and only helps one.  Kierkegaard believes that faith in God can answer this uncertainty and circumvent the need to predict the outcome of our moral rules by guaranteeing a better end given the vastly superior knowledge that God has access to.  And any ethical system that appeals to the ends as justifying the means is a form of consequentialism (utilitarianism is perhaps the most common type of ethical consequentialism).

Although I disagree with Kierkegaard on a lot of points, such as his endorsement of divine command theory and his appeal to an epistemologically bankrupt behavior like taking a leap of faith, I actually agree with him on his teleological ethical reasoning.  He’s right to appeal to the ends in order to justify the means, and he’s right to want maximal knowledge involved in determining how best to achieve those ends.  It seems clear to me that all moral systems ultimately break down to a form of consequentialism anyway (a set of hypothetical imperatives), and any disagreement between moral systems is really nothing more than a disagreement about what is factual or about which consequences should be taken into account (e.g. happiness of the majority, happiness of the least well off, self-contentment for the individual, how we see ourselves as a person, etc.).

It also seems clear that if you are appealing to some set of consequences in determining what is and is not moral behavior, then having maximal knowledge is your best chance of achieving those ends.  But we can only determine the reliability of the knowledge by seeing how well it predicts the future (through inferred causal relations), and that means we can only establish the veracity of any claimed knowledge through empirical means.  Since nobody has yet been able to establish that a God (or gods) exists through any empirical means, it goes without saying that nobody has been able to establish the veracity of any God-knowledge.

Lacking the ability to test this, one would also need to have faith in God’s knowledge, which means they’ve merely replaced one form of uncertainty (the predicted versus actual ends of human moral systems) with another form of uncertainty (the predicted versus actual knowledge of God).  Since the predicted versus actual ends of our moral systems can actually be tested, while the knowledge of God cannot, then we have a greater uncertainty in God’s knowledge than in the efficacy and accuracy of our own moral systems.  This is a problem for Kierkegaard, because his position seems to be that the leap of faith taken by Abraham was essentially grounded on the assumption that God had superior knowledge to achieve the best telos, and thus his position is entirely unsupportable.

Aside from the problems inherent in Kierkegaard’s beliefs about faith and God, I do like his intense focus on the priority of the individual.  As mentioned already, both the aesthetic and religious ways of life that have been described operate on this individual level.  However, one criticism I have to make about Kierkegaard’s life-stage trichotomy is that morality/ethics actually does operate on the individual level even if it also indirectly involves the community or society at large.  And although it is not egotistic like the aesthetic life is said to be, it is egoistic because rational self-interest is in fact at the heart of all moral systems that are consistent and sufficiently motivating to follow.

If you maximize your personal satisfaction and life fulfillment by committing what you believe to be a moral act over some alternative that you believe will make you less fulfilled and thus less overall satisfied (such as not obeying God), then you are acting for your own self-interest (by obeying God), even if you are not acting in an explicitly selfish way.  A person can certainly be wrong about what will actually make them most satisfied and fulfilled, but this doesn’t negate one’s intention to do so.  Acting for the betterment of others over oneself (i.e. living by or for “the universal”) involves behaviors that lead you to a more fulfilling life, in part based on how those actions affect your view of yourself and your character.  If one believes in gods or a God, then their perspective on their belief of how God sees them will also affect their view of themselves.  In short, a properly formulated ethics is centered around the individual even if it seems otherwise.

Given the fact that Kierkegaard seems to have believed that the ethical life revolved around the universal rather than the individual, perhaps it’s no wonder that he would choose to elevate some kind of individualistic stage of life, namely the religious life, over that of the ethical.  It would be interesting to see how his stages of life may have looked had he believed in a more individualistic theory of ethics.  I find that an egoistic ethical framework actually fits quite nicely with the rest of Kierkegaard’s overtly individualistic philosophy.

He ends this book by pointing out that passion is required in order to have faith, and passion isn’t something that somebody can teach us, unlike the epistemic fruits of rational reflection.  Instead, passion has to be experienced firsthand in order for us to understand it at all.  He contrasts this passion with the disinterested intellectualization involved in reflection, which was the means used in Hegel’s approach to try and understand faith.

Kierkegaard doesn’t think that Hegel’s method will suffice, since it isn’t built upon a fundamentally subjective, experiential foundation and instead tries to understand faith and systematize it through an objective analysis based on logic and rational reflection.  Although I see logic and rational reflection as most important for best achieving our overall happiness and life fulfillment, I can still appreciate the significant role of passion and felt experience within the human condition, our attraction to it, and its role in religious belief.  I can also appreciate how our overall satisfaction and life fulfillment are themselves instantiated and evaluated as subjective felt experiences, and entirely individualistic ones at that.  And so I can’t help but agree with Kierkegaard in recognizing that there is no substitute for subjective experience, and no way to adequately account for the essence of those experiences through entirely non-subjective (objective) means.

The individual subject and their conscious experience is of primary importance (it’s the only thing we can be certain exists), and the human need to find meaning in an apparently meaningless world is perhaps the most important facet of that ongoing conscious experience.  Even though I disagree with a lot of what Kierkegaard believed, it wasn’t all bull$#!+.  I think he captured and expressed some very important points about the individual and some of the psychological forces that color the view of our personal identity and our own existence.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part III)

In the first post in this series I re-introduced the basic concepts underlying the Predictive Processing (PP) theory of perception, in particular how the brain uses a form of active Bayesian inference to form predictive models that effectively account for perception as well as action.  I also mentioned how the folk psychological concepts of beliefs, desires, and emotions can fit within the PP framework.  In the second post in this series I expanded the domain of PP a bit to show how it relates to language and ontology, and how we perceive the world to be structured as discrete objects, objects with “fuzzy boundaries”, and other more complex concepts, all stemming from particular predicted causal relations.

It’s important to note that the PP framework differs from classical computational frameworks for brain function in a very big way, because the processing and learning steps are no longer considered separate stages but rather operate at the same time (learning is effectively occurring all the time).  Furthermore, classical computational frameworks treat the brain more or less like a computer, which I think is very misguided, and the PP framework offers a much better alternative that is far more creative, economical, efficient, parsimonious, and pragmatic.  The PP framework holds the brain to be a probabilistic system rather than the deterministic computational system assumed in classical computationalist views.  It also puts a far greater emphasis on the brain’s use of feedback loops, whereas traditional computational approaches tend to suggest that the brain is primarily a feed-forward information processing system.

Rather than treating the brain like a generic information processing system that passively waits for new incoming data from the outside world, the PP framework holds that the brain stays one step ahead of the game, having formed predictions about the incoming sensory data through a very active and creative learning process built up from past experiences via a form of active Bayesian inference.  And rather than relying on some kind of serial processing scheme, this involves primarily parallel neuronal processing schemes, resulting in predictions that have a deeply embedded hierarchical structure and relationship with one another.  In this post, I’d like to explore how traditional and scientific notions of knowledge can fit within a PP framework as well.
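
Before moving on, it may help to see just how simple the core updating step can be made to look.  Here’s a deliberately stripped-down sketch of my own (a single-level toy, nothing like the hierarchical, parallel machinery the theory actually describes) in which a brain-side estimate predicts the sensory input, and the resulting prediction error, weighted by the relative precision of the prior versus the senses, nudges that estimate:

```python
def update_belief(belief, sensory_input, prior_precision, sensory_precision):
    """One step of precision-weighted prediction-error minimization.

    The belief directly predicts the sensory input (an identity generative
    model), and the update moves the belief toward whichever source -- the
    prior or the senses -- carries more precision (inverse variance)."""
    prediction_error = sensory_input - belief
    gain = sensory_precision / (sensory_precision + prior_precision)
    return belief + gain * prediction_error

belief = 0.0  # prior expectation about some hidden cause (e.g. an object's position)
for observation in [2.0, 2.1, 1.9, 2.0]:
    belief = update_belief(belief, observation,
                           prior_precision=4.0, sensory_precision=1.0)
    print(round(belief, 3))  # creeps toward ~2.0 as prediction errors accumulate
```

Raising sensory_precision (trusting the senses more) makes the estimate snap to the observations quickly, while raising prior_precision makes it stubborn, which is the toy analogue of the highly-weighted priors discussed below.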

Knowledge as a Subset of Predicted Causal Relations

It has already been mentioned that, within a PP lens, our ontology can be seen as basically composed of different sets of highly differentiated, probabilistic causal relations.  This allows us to discriminate one object or concept from another, as they are each “composed of” or understood by the brain as causes that can be “explained away” by different sets of predictions.  Beliefs are just another word for predictions, with higher-level beliefs (including those that pertain to very specific contexts) consisting of a conjunction of a number of different lower level predictions.  When we come to “know” something then, it really is just a matter of inferring some causal relation or set of causal relations about a particular aspect of our experience.

Knowledge is often described by philosophers using various forms of the Platonic definition: Knowledge = Justified True Belief.  I’ve talked about this to some degree in previous posts (here, here), and I came up with what I think is a much better working definition for knowledge, which could be stated as follows:

Knowledge consists of recognized patterns of causality that are stored in memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by empirical evidence (i.e. successful predictions made and/or goals accomplished through the use of said recalled patterns).

We should see right off the bat how this particular view of knowledge fits right into the PP framework, where knowledge consists of predictions of causal relations, and the brain is in the very business of making predictions of causal relations.  Notice my little caveat, however, that these causal relations should positively and consistently correlate with “reality” and be supported by empirical evidence.  This is because I wanted to distinguish all predicted causal relations (including those that stem from hallucinations or unreliable inferences) from the subset of predicted causal relations that we have a relatively high degree of certainty in.

In other words, if we are in the game of trying to establish a more reliable epistemology, we want to distinguish between all beliefs and the subset of beliefs that have a very high likelihood of being true.  This distinction, however, is only useful for organizing our thoughts and claims based on our level of confidence in their truth status.  And for all beliefs, regardless of the level of certainty, the “empirical evidence” requirement in my definition given above is still going to be met in some sense, because the incoming sensory data is the empirical evidence (the causes) that supports the brain’s predictions (of those causes).

Objective or Scientific Knowledge

Within a domain like science, however, where we want to increase the reliability or objectivity of our predictions pertaining to any number of inferred causal relations in the world, we need to extend this same internalized strategy of modifying the confidence levels of our predictions: by comparing them to the predictions of others (third-party verification) and by testing them with externally accessible instrumentation under some set of conventions or standards (including the scientific method).

Knowledge, then, is largely dependent on or related to our confidence levels in our various beliefs.  And our confidence level in the truth status of a belief is just another way of saying how probable it is that such a belief will explain away a certain set of causal relations, which is equivalent to our brain’s Bayesian prior probabilities (our “priors”) that characterize any particular set of predictions.  This means that I would need a lot of strong sensory evidence to overcome a belief with high Bayesian priors, not least because it is likely to be associated with a large number of other beliefs.  This association between different predictions seems to me to be a crucial component not only of knowledge generally (or ontology, as mentioned in the last post), but also of our reasoning processes.  In the next post of this series, I’m going to expand a bit on reasoning and how I view it through a PP framework.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)

I’ve been away from writing for a while because I’ve had some health problems relating to my neck.  A few weeks ago I had double-cervical-disc replacement surgery, and so I’ve been unable to write and respond to comments and so forth for a little while.  I’m in the second week following my surgery now and have finally been able to get back to writing, which feels very good given that I’m unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing, beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I wrote one post on this topic a little over a year ago (which can be found here), as I’ve been extremely interested in this topic for the last several years.

The Predictive Processing (PP) theory of perception shows a lot of promise in terms of finding an overarching schema that can account for everything that the brain seems to do.  While its technical application is to account for the acts of perception and active inference in particular, I think it can be used more broadly to account for other descriptions of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships, as viewed through a PP lens, in more detail, because I think PP is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let’s begin this post-series by first looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can be best described as simply the set of particular predictions that the brain employs which encompass perception, desires, action, emotion, etc., and which are ultimately mediated and updated in order to reduce prediction errors based on incoming sensory evidence (a process that approximates a Bayesian form of inference).  Perception, then, which is constituted by a subset of all our beliefs (with many of them being implicit or unconscious beliefs), is more or less a form of controlled hallucination in the sense that what we consciously perceive is not the actual sensory evidence itself (not even after processing it), but rather our brain’s “best guess” of what the causes of the incoming sensory evidence are.

Desires can be best described as another subset of one’s beliefs, and a set of beliefs which has the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states and which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically just modeling some kind of causal relationship (or its negation) which is able to manifest itself in a number of highly-weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies but with a lower weight or precision on the predictions, corresponding directly to the degree of belief or disbelief (and so new sensory evidence will more easily sway such a belief).  And just as with our perceptions, which are mediated by a number of low- and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in either updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, secreting hormones in your body, etc.).

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, with these priors changing over time in response to prediction errors arising from new incoming sensory evidence that fails to be “explained away” by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly-weighted predictions) and therefore need much more countervailing sensory evidence to be overcome or modified than weak beliefs, which have relatively low priors (low-weighted predictions).
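
As a rough numerical illustration of that last point (a toy Bayesian update of my own, with an arbitrary likelihood ratio standing in for how strongly each observation tells against the belief), here is how many equally-weighted disconfirming observations it takes to drag a belief below 50% confidence, starting from a weak versus a strong prior:

```python
def observations_to_abandon(prior, likelihood_ratio=0.5, threshold=0.5):
    """Count the disconfirming observations needed for a belief to drop below
    the threshold.  Each observation is assumed to be half as likely if the
    belief is true as if it is false (likelihood ratio of 0.5)."""
    p, count = prior, 0
    while p >= threshold:
        # Bayes' rule for a single binary hypothesis.
        p = (p * likelihood_ratio) / (p * likelihood_ratio + (1 - p))
        count += 1
    return count

print(observations_to_abandon(prior=0.60))  # 1 -- a weakly held belief gives way quickly
print(observations_to_abandon(prior=0.99))  # 7 -- a strongly held belief takes much more
```

The strongly held belief doesn’t resist the evidence by some different mechanism; it simply starts with odds that take several more halvings to fall below even.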

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of highly-weighted predictions, such as the lower level prediction that eating a piece of what we call an “apple” will most likely result in qualia that accompany the perception of a particular satisfying taste, the lower level prediction that doing so will also cause our perception of hunger to change, and the higher level prediction that it will “give us energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Another prediction (or set of predictions) is that these expectations will apply not just to one apple but to a number of apples (different instances of one type of apple, or different types of apples altogether), and a host of other predictions could be listed as well.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple which, under a PP lens could be described as the highly weighted prediction of what it would feel like to find an apple, to reach for an apple, to grab it, to bite off a piece of it, to chew it, and to swallow it.  If I merely imagine doing such things, then the resulting predictions will necessarily carry such a small weight that they won’t be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally at least) to trick me into thinking that they are actual perceptions (by “actual”, I simply mean perceptions that correspond to incoming sensory data).  This low-weighting attribute of imagined perceptual predictions thus provides a viable way for us to have an imagination and to distinguish it from perceptions corresponding to incoming sensory data, and to distinguish it from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine that is directly influencing, and being influenced by, the world that its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian evidence).  In this particular example, the highly-weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, and it produces some degree of prediction error until the proper actions have taken place to reduce that error.  There are only two ways of reducing this prediction error: change (or eliminate) the desire so that it no longer involves eating an apple, and/or perform bodily actions that result in actually eating an apple.
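
As a rough, purely illustrative sketch of that last point (again my own toy example, with a one-dimensional “state” and an arbitrary 0.5 precision cutoff standing in for a much more complicated weighting process), the same error-reduction loop can either change the world to match the prediction or change the prediction to match the world, depending on how heavily the desire is weighted:

def reduce_error(desired_state: float, actual_state: float,
                 desire_precision: float, steps: int = 10) -> float:
    """Iteratively shrink the gap between a desired (predicted) state and the actual state."""
    for _ in range(steps):
        error = desired_state - actual_state
        if abs(error) < 0.01:
            break
        if desire_precision > 0.5:
            # Highly weighted desire: keep the prediction fixed and act on the world,
            # moving the actual state toward the predicted (desired) one.
            actual_state += 0.3 * error
        else:
            # Weakly weighted desire: revise the prediction instead, letting the
            # desire drift toward what is actually the case (i.e. give it up).
            desired_state -= 0.3 * error
    return actual_state

# A strongly weighted desire gets acted upon (the actual state approaches 1.0),
# while a weakly weighted one is revised away (the actual state stays at 0.0).
print(reduce_error(desired_state=1.0, actual_state=0.0, desire_precision=0.9))
print(reduce_error(desired_state=1.0, actual_state=0.0, desire_precision=0.1))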

Perhaps if I realize that I don’t have any apples in my house but do have bananas, then my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically; thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, my predictions may motivate me to avoid eating the banana and instead to predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  Furthermore, it may motivate me to predict actions that lead me to earn the money needed to purchase the apple (if I don’t already have it).  To do this, I would be employing a number of predictions having to do with performing actions that lead to my obtaining money, using money to purchase goods, and so on.

This is but a taste of what PP has to offer, and of how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework) are a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and to language as well, and I’d like to explore this relationship in my next post.

Knowledge: An Expansion of the Platonic Definition

In the first post I ever wrote on this blog, titled: Knowledge and the “Brain in a Vat” scenario, I discussed some elements concerning the Platonic definition of knowledge, that is, that knowledge is ultimately defined as “justified true belief”.  I further refined the Platonic definition (in order to account for the well-known Gettier Problem) such that knowledge could be better described as “justified non-coincidentally-true belief”.  Beyond that, I also discussed how one’s conception of knowledge (or how it should be defined) should consider the possibility that our reality may be nothing more than the product of a mad scientist feeding us illusory sensations/perceptions with our brain in a vat, and thus, that how we define things and adhere to those definitions plays a crucial role in our conception and mutual understanding of any kind of knowledge.  My concluding remarks in that post were:

“While I’m aware that anything discussed about the metaphysical is seen by some philosophers to be completely and utterly pointless, my goal in making the definition of knowledge compatible with the BIV scenario is merely to illustrate that if knowledge exists in both “worlds” (and our world is nothing but a simulation), then the only knowledge we can prove has to be based on definitions — which is a human construct based on hierarchical patterns observed in our reality.”

While my views on what knowledge is, or how it should be defined, have changed somewhat in the three years or so since I wrote that first blog post, in this post I’d like to elaborate on this key sentence, specifically with regard to how knowledge is ultimately dependent on the recall and use of previously observed patterns in our reality, as I believe this is the most important aspect of how knowledge should be defined.  After making a few related comments on another blog (https://nwrickert.wordpress.com/2015/03/07/knowledge-vs-belief/), I decided to elaborate on some of those comments accordingly.

I’ve elsewhere mentioned how there is a plethora of evidence that suggests that intelligence is ultimately a product of pattern recognition (1, 2, 3).  That is, if we recognize patterns in nature and then commit them to memory, we can later use those remembered patterns to our advantage in order to accomplish goals effectively.  The more patterns that we can recognize and remember, specifically those that do in fact correlate with reality (as opposed to erroneously “recognized” patterns that are actually non-existent), the better our chances of predicting the consequences of our actions accurately, and thus the better chances we have at obtaining our goals.  In short, the more patterns that we can recognize and remember, the greater our intelligence.  It is therefore no coincidence that intelligence tests are primarily based on gauging one’s ability to recognize patterns (e.g. solving Raven’s Progressive Matrices, puzzles, etc.).
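
As a crude illustration of this point (my own sketch, not something taken from the references above), even a trivial memory of observed “situation leads to outcome” patterns is enough to turn past observations into predictions about the consequences of present situations:

from typing import Optional

pattern_memory = {}  # maps an observed situation to the outcome that followed it

def observe(situation: str, outcome: str) -> None:
    """Commit an observed 'situation -> outcome' pattern to memory."""
    pattern_memory[situation] = outcome

def predict(situation: str) -> Optional[str]:
    """Recall the remembered pattern, if any, to predict the likely outcome."""
    return pattern_memory.get(situation)

observe("dark clouds", "rain")
observe("touch flame", "pain")

print(predict("dark clouds"))  # "rain": a remembered pattern guides the prediction
print(predict("full moon"))    # None: no pattern recognized, so no prediction is made

The more patterns such a memory contains, and the more reliably those patterns track reality, the more often its predictions will succeed, which is the sense of “intelligence” being used here.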

To emphasize the role of pattern recognition as it applies to knowledge, if we use my previously modified Platonic definition of knowledge, that is,  that knowledge is defined as “justified, non-coincidentally-true belief”, then I must break down the individual terms of this definition as follows, starting with “belief”:

  • Belief = Recognized patterns of causality that are stored into memory for later recall and use.
  • Non-Coincidentally-True = The belief positively and consistently correlates with reality, rather than doing so merely through luck or chance.
  • Justified = Empirical evidence exists to support said belief.

So in summary, I have defined knowledge (more specifically) as:

“Recognized patterns of causality that are stored into memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by empirical evidence (e.g. successful predictions made and/or goals accomplished through the use of said recalled patterns)”.

This means that if we believe something to be true that is unfalsifiable (such as religious beliefs that rely on faith), then since it has not met the justification criterion, it fails to be considered knowledge (even if it is still considered a “belief”).  Also, if we are able to make a successful prediction with the patterns we’ve recognized, yet are only able to do so once, then due to the lack of consistency we likely just got lucky and didn’t actually identify a pattern that correlates with reality, and thus this would fail to count as knowledge.  Finally, one should note that the recognized patterns were not specifically defined as “consciously” recognized or remembered, nor was it specified that the patterns couldn’t be innately acquired and stored into memory (through DNA-coded or other pre-sensory neural developmental mechanisms).  Thus, even procedural knowledge like learning to ride a bike or other forms of “muscle memory” used to complete a task, or any innate form of knowledge (acquired before or without sensory input), would be an example of unconscious or implicit knowledge that still fulfills the definition I’ve given above.  In the case of unconscious/implicit knowledge, we would have to accept that “beliefs” can also be unconscious/implicit (in order to remain consistent with the definition I’ve chosen), and I don’t see this as being a problem at all.  One just has to keep in mind that when people use the term “belief”, they are likely referring only to those beliefs that are in our consciousness, which form a subset of all the beliefs that exist, and so this usage is still consistent with the definition laid out here.
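
For what it’s worth, the definition can also be written out as a simple checklist.  The sketch below is my own rough formalization, with invented field names, and is only meant to make the three conditions, and the failure cases just described, easy to see at a glance:

from dataclasses import dataclass

@dataclass
class StoredBelief:
    pattern: str                   # the recognized causal pattern stored in memory (the "belief")
    consistently_correlates: bool  # holds up across repeated encounters, not just by luck
    empirically_supported: bool    # evidence exists (successful predictions, accomplished goals)

def counts_as_knowledge(belief: StoredBelief) -> bool:
    """Justified, non-coincidentally-true belief, per the definition given above."""
    return belief.consistently_correlates and belief.empirically_supported

print(counts_as_knowledge(StoredBelief("eating apples satisfies hunger", True, True)))       # True
print(counts_as_knowledge(StoredBelief("a lucky one-off prediction", False, True)))          # False
print(counts_as_knowledge(StoredBelief("an unfalsifiable article of faith", False, False)))  # False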

This is how I prefer to define “knowledge”, and I think it is a robust definition that successfully solves many (though certainly not all) of the philosophical problems that one tends to encounter in epistemology.

Neuroscience Arms Race & Our Changing World View

At least as far back as Hippocrates, people have recognized that the brain is the physical correlate of consciousness and thought.  Since then, psychology, neuroscience, and several inter-related fields have emerged.  Numerous advancements have been made within neuroscience during the last decade or so, and in that same time frame there has also been an increased interest in the social, religious, philosophical, and moral implications that have precipitated from such a far-reaching field.  Certainly the medical knowledge we’ve obtained from the neurosciences has been the primary benefit of such research efforts, as we’ve learned quite a bit more about how the brain works and how it is structured, and ongoing research into neuropathology has led to improvements in diagnosing and treating various mental illnesses.  However, it is the other side of neuroscience that I’d like to focus on in this post: the paradigm shift relating to how we are starting to see the world around us (including ourselves), and how this is affecting our goals as well as how we try to achieve them.

Paradigm Shift of Our World View

Aside from the medical knowledge we are obtaining from the neurosciences, we are also gaining new perspectives on what exactly the “mind” is.  We’ve come a long way in demonstrating that “mental” or “mind” states are correlated with physical brain states, and there is an ever-growing plethora of evidence which suggests that these mind states are in fact caused by these brain states.  It should come as no surprise, then, that all of our thoughts and behaviors are also caused by these physical brain states.  It is because of this scientific realization that society is currently undergoing an important paradigm shift in terms of our world view.

If all of our thoughts and behaviors are mediated by our physical brain states, then many everyday concepts such as thinking, learning, personality, and decision making can take on entirely new meanings.  To illustrate this point, I’d like to briefly mention the well known “nature vs. nurture” debate.  The current consensus among scientists is that people (i.e. their thoughts and behavior) are ultimately products of both their genes and their environment.

Genes & Environment

From a neuroscientific perspective, the genetic component is accounted for by noting that genes have been shown to play a very large role in directing the initial brain-wiring schema of an individual during embryological development and gestation.  During this time, the brain is developing very basic instinctual behavioral “programs” which are physically constituted by vastly complex neural networks, and the body’s developing sensory organs and systems are also connected to particular groups of these neural networks.  These complex neural networks, which have presumably been naturally selected for because they benefit the survival of the individual, continue being constructed after gestation and throughout the individual’s ontogenetic development (albeit to lesser degrees over time).

As for the environmental component, this can be further split into two parts: the internal and the external environment.  The internal environment within the brain itself, including various chemical concentration gradients partly mediated by random Brownian motion, provides some constraints on gene expression and works alongside the genetic instructions to help guide neuronal growth, migration, and connectivity.  The external environment, consisting of various sensory stimuli, seems to modify this neural construction by providing inputs that may cause the constituent neurons within these neural networks to change their signal strength, change their action potential threshold, and/or modify their connections with particular neurons (among other possible changes).
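
To give a feel for what “changing signal strength” and “changing the action potential threshold” might amount to at the most cartoonish level, here is a deliberately oversimplified, hypothetical sketch of a Hebbian-style rule, in which repeated co-activity strengthens a connection until the same input is enough to make the downstream neuron fire (real synaptic plasticity is, of course, vastly more complicated than this):

def hebbian_update(weight: float, pre_active: bool, post_active: bool,
                   learning_rate: float = 0.1) -> float:
    """Strengthen the connection when the two neurons are active together."""
    if pre_active and post_active:
        weight += learning_rate
    return weight

def fires(weighted_input: float, threshold: float = 0.5) -> bool:
    """The downstream neuron 'fires' only when its weighted input exceeds its threshold."""
    return weighted_input > threshold

weight = 0.2
for _ in range(5):
    # During repeated "paired stimulation", assume the downstream neuron is also driven
    # to fire by other inputs, so pre and post are active together and the synapse strengthens.
    weight = hebbian_update(weight, pre_active=True, post_active=True)

print(round(weight, 2))     # 0.7: the connection has strengthened
print(fires(weight * 1.0))  # True: the same presynaptic input now crosses the threshold
print(fires(0.2 * 1.0))     # False: before learning, it did not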

Causal Constraints

This combination of genetic instructions and environmental interaction produces a conscious, thinking, and behaving being through a large number of ongoing and highly complex hardware changes.  It isn’t difficult to imagine why these insights from neuroscience might modify our conventional views of concepts such as thinking, learning, personality, and decision making.  Prior to these developments over the last few decades, the brain was largely seen as a sort of “black box”, with its internal milieu and functional properties remaining mysterious and inaccessible.  For millennia prior to (and even during) that time, many people assumed that our thoughts and behaviors were self-caused or causa sui.  That is, people believed that they themselves (i.e. some causally free “consciousness”, “soul”, etc.) caused their own thoughts and behavior, as opposed to those thoughts and behaviors being ultimately caused by physical processes (e.g. neuronal activity, chemical reactions, etc.).

Neuroscience (as well as biochemistry and its underlying physics) has shed a lot of light on this long-held assumption and, as it stands, the evidence has shown the assumption to be false.  The brain is ultimately governed by the laws of physics, since no chemical reaction or neural event that physically produces our thoughts, choices, and behaviors has ever been shown to be causally free from these physical constraints.  I will mention that quantum uncertainty or quantum “randomness” (if ontologically random) does provide some possible freedom from strict physical determinism.  However, these findings from quantum physics do not provide any support for self-caused thoughts or behaviors.  Rather, they merely show that those physically constrained thoughts and behaviors may never be completely predictable by physical laws no matter how much data is obtained.  In other words, our thoughts and behaviors are produced by highly predictable (although not necessarily completely predictable) physical laws and constraints, along with some possible random causal factors.

As a result of these physical causal constraints, the conventional perspective of an individual having classical free will has been shattered.  Our traditional views of human attributes including morality, choices, ideology, and even individualism are continuing to change markedly.  Not surprisingly, there are many people uncomfortable with these scientific discoveries, including members of various religious and ideological groups that are largely based upon, and thus depend on, the very presupposition of precepts such as classical free will and moral responsibility.  The evidence accumulating from the neurosciences is in fact showing that while people are causally responsible for their thoughts, choices, and behavior (i.e. an individual’s thoughts and subsequent behavior are constituents of a causal chain of events), they are not morally responsible in the sense of being able to choose to think or behave any differently than they do, for their thoughts and behavior are ultimately governed by physically constrained neural processes.

New World View

Now I’d like to return to what I mentioned earlier and consider how these insights from neuroscience may be drastically modifying how we look at concepts such as thinking, learning, personality, and decision making.  If our brain is operating via these neural network dynamics, then conscious thought appears to be produced by a particular subset of these neural network configurations and processes.  So as we continue to learn how to more directly control or alter these neural network arrangements and processes (above and beyond simply applying electrical potentials to certain neural regions in order to bring memories or other forms of imagery into consciousness, as we’ve done in the past), we should be able to control thought generation through a more “bottom-up” approach.  Neuroscience is definitely heading in this direction, although there is a lot of work to be done before we have any considerable knowledge of, and control over, such processes.

Likewise, learning seems to consist of a certain type of neural network modification (involving memory), leading to changes in causal pattern recognition (among other things) which result in our ability to more easily achieve our goals over time.  We’ve typically thought of learning as the successful input, retention, and recall of new information, and we have been achieving this “learning” process through the input of environmental stimuli via our sensory organs and systems.  In the future it may be possible, as with the aforementioned bottom-up thought generation, to physically modify our neural networks so as to directly implant memories and causal pattern-recognition information in order to “learn” without any actual sensory input, and/or we may eventually be able to “upload” information in a way that bypasses the typical sensory pathways, thus potentially allowing us to catalyze the learning process in unprecedented ways.

If we are one day able to more directly control the neural configurations and processes that lead to specific thoughts as well as learned information, then there is no reason we won’t be able to modify our personalities, our decision-making abilities and “algorithms”, and so on.  In a nutshell, we may be able to modify any aspect of “who” we are in extraordinary ways (whether this is a “good” or “bad” thing is another issue entirely).  As we come to learn more about the genetic components of these neural processes, we may also be able to use various genetic engineering techniques to assist with the neural modifications required to achieve these goals.  The bottom line here is that people are products of their genes and environment, and by manipulating both of those causal constraints in more direct ways (e.g. through the use of neuroscientific techniques), we may be able to achieve previously unattainable abilities, perhaps in a relatively minuscule amount of time.  It goes without saying that these methods will also significantly affect our evolutionary course as a species, allowing us to enter new landscapes through a substantially enhanced ability to adapt.  This may be realized by finding ways to improve our intelligence, memory, or other cognitive faculties, effectively giving us the ability to engineer or re-engineer our brains as desired.

Neuroscience Arms Race

We can see that increasing our knowledge and capabilities within the neurosciences has the potential for drastic societal changes, some of which are already starting to be realized.  The impact that these fields will have on how we approach the problem of criminal, violent, or otherwise undesirable behavior cannot be overstated.  Trying to correct these issues by focusing our efforts on the neural or cognitive substrates that underlie them, as opposed to using the less direct and more external means (e.g. social engineering methods) that we’ve relied on thus far, may lead to much less expensive solutions as well as solutions that can be realized much, much more quickly.

As with any scientific discovery or the technology produced from it, neuroscience has the power to bestow on us both benefits and disadvantages.  I’m reminded of the ground-breaking efforts made within nuclear physics several decades ago, whereby physicists not only gained precious information about subatomic particles (and their binding energies) but also learned how to release enormous amounts of energy through nuclear fusion and fission reactions.  It wasn’t long after these breakthrough discoveries were made before they were used by others to create the first atomic bombs.  Likewise, while our increasing knowledge within neuroscience has the power to help society improve by optimizing our brain function and behavior, it can also be used by various entities to manipulate the populace for unethical reasons.

For example, despite the large number of free market proponents who claim that the economy need not be regulated by anything other than rational consumers and their choices of goods and services, corporations have clearly increased their use of marketing strategies that take advantage of many humans’ irrational tendencies (whether it is “buy one get one free” offers, “sales” on items that have artificially raised prices, etc.).  Politicians and other leaders have been using similar tactics by taking advantage of voters’ emotional vulnerabilities on certain controversial issues that serve as nothing more than an ideological distraction in order to reduce or eliminate any awareness or rational analysis of the more pressing issues.

There are already research and development efforts being made by these various entities in order to take advantage of these findings within neuroscience such that they can have greater influence over people’s decisions (whether it relates to consumers’ purchases, votes, etc.).  To give an example of some of these R&D efforts, it is believed that MRI (Magnetic Resonance Imaging) or fMRI (functional Magnetic Resonance Imaging) brain scans may eventually be able to show useful details about a person’s personality or their innate or conditioned tendencies (including compulsive or addictive tendencies, preferences for certain foods or behaviors, etc.).  This kind of capability (if realized) would allow marketers to maximize how many dollars they can squeeze out of each consumer by optimizing their choices of goods and services and how they are advertised. We have already seen how purchases made on the internet, if tracked, begin to personalize the advertisements that we see during our online experience (e.g. if you buy fishing gear online, you may subsequently notice more advertisements and pop-ups for fishing related goods and services).  If possible, the information found using these types of “brain probing” methods could be applied to other areas, including that of political decision making.
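
The sort of personalization just described can be illustrated with a bare-bones, entirely hypothetical sketch: tracked purchases increment a per-category interest score, and subsequent advertisements are simply drawn from the highest-scoring category.

from collections import Counter

purchase_history = ["fishing_rod", "fishing_lures", "novel"]
category_of = {"fishing_rod": "fishing", "fishing_lures": "fishing", "novel": "books"}

# Build a per-category interest score from the tracked purchases.
interest = Counter(category_of[item] for item in purchase_history)
top_category, _ = interest.most_common(1)[0]

print(top_category)  # "fishing": subsequent ads would be drawn from this category

The worry, raised below, is that the same basic logic, fed with far richer data (up to and including brain-scan data), optimizes for whatever the advertiser wants rather than for the consumer’s rational interests.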

While these methods derived from the neurosciences may be beneficial in some cases, for instance by allowing the consumer more automated access to products that they may need or want (which will likely be a selling point used by these corporations to obtain consumer approval of such methods), they will also exacerbate unsustainable consumption and other personally or societally destructive tendencies, and they are likely to continue to reduce (or eliminate) whatever rational decision-making capabilities we still have left.

Final Thoughts

As we can see, neuroscience has the potential to (and is already starting to) completely change the way we look at the world.  Further advancements in these fields will likely redefine many of our goals as well as how we achieve them.  They may also allow us to solve many problems that we face as a species, far beyond simply curing mental illnesses or ailments.  The main question that comes to mind is:  Who will win the neuroscience arms race?  Will it be the humanitarians, scientists, and medical professionals who are striving to accumulate knowledge in order to help solve the problems of individuals and societies and to increase their quality of life?  Or will it be the entities that are trying to accumulate similar knowledge in order to take advantage of human weaknesses for the purposes of gaining wealth and power, thus exacerbating the problems we currently face?

Knowledge and the “Brain in a Vat” scenario

Many philosophers have stumbled across the possibility of the BIV scenario: that our experience of reality could be the result of our brains sitting in a vat, connected to electrodes controlled by some mad scientist, thus creating the illusion of a physical body, a universe, and so on.  Many variations of this idea have been presented, and they appear to be similar to, or based on, the “Evil Demon” proposed by Descartes in his Meditations on First Philosophy back in 1641.

There is no way to prove or disprove this notion, because all of our “knowledge” (if any exists at all, depending on how we define it) would be limited to the mad scientist’s program of sensory inputs, etc.  What exactly is “knowledge”?  If we can decide on what qualifies as “knowledge”, then another question I’ve wondered about is:

How do we modify or shape the definition of “knowledge” to make it compatible with the BIV scenario?

Many might say that we have to have an agreed-upon definition of “knowledge” in the first place before we modify it for some crazy hypothetical scenario.  I understand that there is disagreement in the philosophical/epistemological community regarding how we define “knowledge”.  To simplify matters, let’s start with Plato’s well-known (albeit controversial) definition, which basically says:

Knowledge = Justified True Belief

It would help if Plato had spelled out an agreed-upon convention for what counts as “justified”, “true”, and “belief”.  It seems reasonable to assume that a “belief” is any proposition or premise an individual holds to be true.  We can also assume that “true” implies that the belief is factual, that is, that the “belief” has been confirmed to correspond with “reality”.  The term “justified” is a bit trickier, and concepts such as externalism vs. internalism, probability, evidentialism, etc., indeed arise when theories of justification are discussed.  It is the “true” and “justified” concepts that I think are of primary importance (as I think most people, if not everyone, can agree on what a “belief” is), and it is how they relate to the BIV scenario (or any similar variation of it) that matters in our quest to properly define “knowledge”.

Is this definition creation/modification even something that can be accomplished?  Or will any modification necessary to make it compatible with the BIV scenario completely strip the word “knowledge” of all practical meaning?  If the modification does strip the word of most of its meaning, does that say anything profound about what “knowledge” really is?  I think that, if anything, it may allow us to step back and look at the concept of “knowledge” in a different way.  My contention is that it will demonstrate that knowledge is really just another subjective concept which we can claim to possess based on nothing more than an understanding of and adherence to definitions, where those definitions are based on hierarchical patterns observed in nature.  This is in contrast to the current assumption that knowledge is some form of objective truth that we strive to obtain, with a deeper meaning that may in fact (but not necessarily) be metaphysical in some way.

How does the concept of “true” differ in the “real world” vs. the BIV’s “virtual world”?

Can any beliefs acquired within the BIV scenario be considered “true”?  If not, then we would have to consider the possibility that nothing “true” exists.  This isn’t necessarily the end of the world, although for most people it may be a pill that is a bit too hard to swallow.  Likewise, if we consider the possibility that at least one belief in our reality is true (I’m betting that most people believe this to be the case), then we must accept that “true beliefs” may exist within the BIV scenario.  The bottom line is that we can never know whether we are BIVs or not.  So this means that if we believe that true beliefs exist, we should modify the definition so that it applies within the BIV scenario.  Perhaps we can say that something is “true” only insofar as it corresponds to “our reality”.  If there are multiple levels of reality (e.g. our reality as well as that of the mad scientist who is simulating our reality), then perhaps there are multiple levels of truth.  If this is the case, then either some levels of truth are more or less “true” than others, or they all contain an equally valid truth.  This latter possibility seems most reasonable, as the former seems to negate the strict meaning of “true”: something is either true, false, or neither (if a third option exists), and there is no state that lies in between “true” and “false”.  By this rationale, our concept of what is true and what isn’t is as valid as the mad scientist’s.  Perhaps the mad scientist would disagree with this, but based on how we define true and false, our claim of “equally valid truth” seems reasonable.

This may illustrate (to some) that our idea of “true beliefs” does not hold any fundamentally significant value, as they are only “true beliefs” based on our definitions of particular words and concepts (again, ultimately based on hierarchical patterns observed in nature).  If I define something to be “true”, then it is true.  If I point to what looks like a tree (even if it’s a hologram and I don’t know this) and say “I’m defining that to be a ‘tree’”, then it is in fact a tree and I can refer to it as such.  Likewise, if I define its location to be called “there”, then it is in fact there.  Based on these definitions (among others), I think it would be a completely true statement (and thus a true belief) to say “I KNOW that there is a tree over there”.  However, if I failed to define (in detail) what a “tree” was, or if my definition of “tree” somehow excluded “a hologram of a tree” (because the definition was too specific), then I could easily make errors in terms of what I inferred to know or not know.  This may seem obvious, but I believe it is extremely important to establish these foundations before making any claims about knowledge.

Turning our attention to the concept of “justified”, one could argue that in this case I wouldn’t need any justification for the aforementioned belief to be true, as my belief is true by definition (i.e. there isn’t anything that could make this particular belief false other than changing the definitions of “there”, “tree”, etc.).  Or we could say that the justification is the fact that I’ve defined the terms such that I am correct.  In my opinion this would imply that I have 100% justification (for this belief), which I don’t think can be accomplished in any way other than through this definitional method.  In every other case of justification we will be less than 100% justified, and there will only be an arbitrary way of grading or quantifying that justification (100% is not arbitrary at all, but to say that we are “50% justified” or “23% justified” or any value other than 100% would be arbitrary and thus meaningless).  However, I will add that it is possible for one belief to be MORE or LESS justified than another, even though we can never quantify how much more or less justified it may be, nor will we ALWAYS be able to say that one belief is more or less justified than another.  Perhaps most importantly with regard to “justification” and knowledge, we must realize that our most common means of justification is through empiricism, the scientific method, etc.  That is, our justification seems to be based inductively on the hierarchical patterns which we observe in nature.  If certain patterns are observed and are repeatable, then the beliefs ascertained from them will not only be pragmatically useful, but will also be the most justified.

When we undergo the task of defining knowledge or more specifically the task of analyzing Plato’s definition, I believe that it is useful for us to play the devil’s advocate and recognize that scenarios can be devised whereby the requirements of “justified”, “true”, and “belief” have been met and yet no “true knowledge” exists.  These hypothetical scenarios can be represented by the well known “Gettier Problem”.  Without going into too much detail, let’s examine a possible scenario to see where the problem arises.

Let’s imagine that I am in my living room and I see my wife sitting on the couch, but it just so happens that what I’m looking at is actually nothing more than a perfect hologram of my wife.  However, my wife IS actually in the room (she’s hiding behind the couch, even though I can’t see her hiding).  Now if I were to say “My wife is in the living room”, technically my belief would be “justified” to some degree (I do see her on the couch after all, and I don’t know that what I’m looking at is actually a hologram).  My belief would also be “true”, because she is actually in the living room (she’s hiding behind the couch).  This could be taken as fulfilling Plato’s requirements for knowledge: truth and justification.  However, most people would argue that I don’t actually possess any knowledge in this particular example, because it is a COINCIDENCE that my belief is true; or, to be more specific, its “true-ness” is not related to my justification (i.e. that I see her on the couch).  Its “true-ness” is based on the coincidence of her actual presence behind the couch, which meets the requirement of “being in the living room” (albeit without my knowing it).

How do we get around this dilemma, and can we do so in a way that at least partially (if not completely) preserves Plato’s requirements?  One way to solve it would be to remove the element of COINCIDENCE entirely.  If we change our definition to:

Knowledge = Justified Non-Coincidentally True Belief

then we have solved the problem presented by Gettier.  Removing this element of coincidence in order to define knowledge properly (or at least more appropriately) has been accomplished in one way or another by various responses to the “Gettier Problem”, including Goldman’s “causal theory”, Lehrer and Paxson’s “defeasibility condition”, and many others which I won’t discuss in detail at this time.  Overall, I think non-coincidence is an important and necessary distinction to make in order to properly define knowledge.

Alternatively, another response to the “Gettier Problem” is to say that while it appears that I didn’t actually possess true knowledge in the example illustrated above (because of the coincidence), this could be because I never actually had sufficient justification to begin with, as I failed to consider the possibility that I was being deceived on a number of possible levels with varying significance (e.g. hologram, brain in a vat, etc.).  This lack of justification would mean that I’ve failed to satisfy the Platonic definition of knowledge, and thus no “Gettier Problem” would exist in that case.  However, I think it’s more pragmatic to trust our senses on a basic level and say that “seeing my wife in the living room” (regardless of whether what I was seeing was actually a hologram and thus wasn’t “real”) warrants a high level of justification for all practical purposes.  After all, Plato never seems to have specified how “justified” something needs to be before it meets the requirement necessary to call it knowledge.  Had Plato considered the possibility of the “brain in a vat” scenario (or a related scenario more befitting the century he lived in), would he have said that no justification exists and thus no knowledge exists?  I highly doubt it, since it seems like an exercise in futility to define “knowledge” at all if it were assumed to be something non-existent or unattainable.  Lastly, if one’s belief justification is in fact based on hierarchical patterns observed in nature, then the more detailed or thoroughly defined those patterns are (i.e. the more scientific/empirical data, models, and explanatory power available from them), the more justified one’s beliefs will be and the more useful that knowledge will be.

In summary, I think that knowledge is nothing more than an attribute that we possess based solely on an understanding of and adherence to definitions, and thus it is entirely compatible with the BIV scenario.  Definitions do not require a metaphysical explanation, nor do they need to apply in any way to anything outside of our reality (i.e. metaphysically).  They are nothing more than human constructs.  As long as we are careful in how we define terms, adhere to those definitions without contradiction, and understand the definitions that we create (through the use of hierarchical patterns observed in nature), then we have the ability to possess knowledge, regardless of whether we are nothing more than BIVs.  The only knowledge that we can claim to possess must be “justified non-coincidentally true belief” (to use my modified version of the Platonic definition, thus accounting for the “Gettier Problem”).  The most significant realization here is that the only knowledge that we can PROVE to know is based on, and limited by, the definitions we create and our use of them within our claims of knowledge.

This proven knowledge would include, but NOT be limited to, Descartes’ famous claim “cogito ergo sum” (i.e. “I think therefore I am”), which in much of Western philosophy is believed to be the foundation for all knowledge.  I disagree that this is the foundation for all knowledge, as I believe the foundation to be the understanding and proper use of definitions and nothing more.  Descartes had to define (or use a commonly accepted definition of) every word in that claim, including what “I” is, what “think” is, what “am” is, etc.  Did his definition of “I” include the unconscious portion of his self or only the conscious portion?  I’m aware that Freudian theory didn’t exist at the time, but my question still stands.  Personally, I would define “I” as the culmination of both our conscious and unconscious minds.  Yet the only “thinking” that we’re aware of is conscious (which I would call the “me”), so based on my definitions of “I” and “think”, I would disagree with his claim, because my unconscious mind doesn’t “think” at all (and even if it did, the conscious mind doesn’t perceive it, so I wouldn’t say “I think” when referring to my unconscious mind).  When I am thinking, I am conscious of those thoughts.  So there are potential problems with Descartes’ assertion, depending on the definitions he chooses for the words used in it.  This further illustrates the importance of definitions on a foundational level, and of the hierarchical patterns observed in nature that led to those definitions.

My views of knowledge may seem to be unacceptable or incomplete, but these are the conclusions I’ve arrived at in order to avoid the “Gettier Problem” as well as to remain compatible with the BIV scenario or any other metaphysical possibilities that we have no way of falsifying.  While I’m aware that anything discussed about the metaphysical is seen by some philosophers to be completely and utterly pointless, my goal in making the definition of knowledge compatible with the BIV scenario is merely to illustrate that if knowledge exists in both “worlds” (and our world is nothing but a simulation), then the only knowledge we can prove has to be based on definitions — which is a human construct based on hierarchical patterns observed in our reality.