“Silent Bridge”

Words are but a bridge between our minds
So let us not burn these bridges down
For they are the only means of knowing
Knowing what’s on the other side
If the bridge is ever lost, surprise awaits
For a seedling may turn into a jungle 
Or a flickering flame into a fiery blaze 
Behold the power of unspoken words

Words are but a bridge between our minds
So let us not burn these bridges down 
For they are the only means of gaining
Gaining new perspectives, a broader lens
The power to diagnose the masses
For an itch may turn into infection
Or an emotion into a reign of tyranny 
Behold the power of unspoken words

Words are but a bridge between our minds 
So let us not burn these bridges down 
For they are the only means of growing 
Growing stronger from the challenge
Words are not violence, so fear not! 
For a fear of words will only weaken us 
Or limit thought and human freedom 
Behold the power of unspoken words

Speak!  Silence!  Shut up and speak!
This contradiction pervades humanity
We’re “free” to profess popular opinion
Free to be deafened by the echo chamber
As honest critique is made to wear the muzzle
We’re free to conform to our social tribes
But so often not free to cross the bridge
The bridge between our minds

Technology, Mass-Culture, and the Prospects of Human Liberation

Cultural evolution is arguably just as fascinating as biological evolution (if not more so), with new ideas and behaviors stemming from the same kinds of natural selective pressures that lead to new species along with their novel morphologies and capacities.  And just as biological evolution, in a sense, takes off on its own, unbeknownst to the new organisms it produces and independent of whatever intentions they may have (our species being the notable exception, given our awareness of evolutionary history and our ever-growing control over genetics), so too does cultural evolution take off on its own: cultural changes are made manifest through a number of causal influences that we’re largely unaware of, even though we retain some conscious influence over this vastly transformative process.

Alongside these cultural changes, human civilizations have striven to find new means of manipulating nature and to better predict the causal structure that makes up our reality.  One unfortunate consequence of this is that, as history has shown us, within any particular culture’s time and place, people have a decidedly biased overconfidence in the perceived truth or justification of the status quo and their present worldview (both on an individual and a collective level).  Undoubtedly, the “group-think” or “herd mentality” that precipitates from our simply having social groups often reinforces this overconfidence, and it does so in spite of the fact that what actually influences a mass of people to believe certain things or to behave as they do is highly contingent, unstable, and amenable to irrational forms of persuasion, including emotive, sensationalist propaganda that preys on our cognitive biases.

While we as a society have an unprecedented amount of control over the world around us, this type of control is perhaps best described as a system of bureaucratic organization and automated information processing that grants less and less individual autonomy, liberty, and basic freedom as it further expands its reach.  How much control do we as individuals really have over the information we have access to, and over the implied picture of reality that comes packaged with that information in the way it’s presented to us?  How much control do we have over the number of life trajectories and occupations made available to us, or over the educational and socioeconomic resources we can draw on, given the particular family, culture, and geographical location we’re born and raised in?

As more layers of control have been added to our way of life and as certain criteria for organizational efficiency are continually implemented, our lives have become externally defined by increasing layers of abstraction, and our modes of existence are further separated cognitively and emotionally from an aesthetically and otherwise psychologically valuable sense of meaning and purpose.

While the Enlightenment slowly dragged our species, kicking and screaming, out of the theocratic, anti-intellectual epistemologies of the Medieval period, the same forces that unearthed a long overdue appreciation for (and development of) rationality and technological progress unknowingly engendered a vulnerability to our misusing this newfound power.  We overcompensated with rationality when it was deployed to (justifiably) respond to the authoritarian dogmatism of Christianity and to the demonstrably unreliable nature of superstitious beliefs and of many of our intuitions.

This overcompensatory effect was in many ways accounted for, or at least anticipated, within the dialectical theory of historical development delineated by the German philosopher Georg Wilhelm Friedrich Hegel, and within some relevant reformulations of this dialectical process theorized by Karl Marx (among others).  Throughout history, we’ve had an endless clash of ideas whereby the prevailing worldviews are shown to be inadequate in some way, failing to account for some notable aspect of our perceived reality, or shown to be insufficient for meeting our basic psychological or socioeconomic needs.  With respect to any problem we’ve encountered, we search for a solution (or wait for one to present itself to us), and then we become overconfident in the efficacy of that solution.  Eventually we end up overgeneralizing its applicability, and the pendulum swings too far the other way, thereby creating new problems in need of a solution, with this process seemingly repeating itself ad infinitum.

Despite the various woes of modernity, as explicated by the modern existentialist movement, it does seem that history, from a long-term perspective at least, has been moving in the right direction, not only with respect to our heightened capacity for improving our standard of living, but also in terms of the evolution of our social contracts and our conceptions of basic and universal human rights.  And we should be able to plausibly reconcile this generally positive historical trend with the Hegelian view of historical development, and the conflicts that arise in human history, by noting that we often seem to take one step backward followed by two steps forward in terms of our moral and epistemological progress.

Regardless of the progress we’ve made, we seem to be at a crucial point in our history where the same freedom-limiting authoritarian reach that plagued humanity (especially during the Middle Ages) has undergone a kind of morphogenesis, having been reinstantiated albeit in a different form.  The elements of authoritarianism have become built into the very structure of mass-culture, with an anti-individualistic corporatocracy largely mediating the flow of information throughout this mass-culture, and also mediating its evolution over time as it becomes more globalized, interconnected, and cybernetically integrated into our day-to-day lives.

Coming back to the kinds of parallels in biology that I opened with, we can see human autonomy and our culture (ideas and behaviors) as having evolved in ways that are strikingly similar to the biological jump that life made long ago, when single-celled organisms eventually joined forces with one another to become multi-cellular.  This biological jump is analogous to the jump we made during the early onset of civilization, when we employed an increasingly complex division of labor and occupational specialization, allowing us to overcome many more environmental hurdles than ever before.  Once civilization began, the spread of culture became much more effective for transmitting ideas both laterally within a culture and longitudinally from generation to generation, with this process heavily enhanced by our having adopted various forms of written language, allowing us to store and transmit information in much more robust ways, similar to genetic information storage and transfer via DNA, RNA, and proteins.

Although the single-celled bacterium or amoeba (for example) may be thought of as having more “autonomy” than a cell that is forcefully interconnected within a multi-cellular organism, we can see how the range of capacities available to single cells was far more limited before making the symbiotic jump, just as humans living before the onset of civilization had more “freedom” (at least of a certain type) and yet the number of possible life trajectories and experiences was minuscule compared to those of a human living in a world transformed by culture.  But once multi-cellular organisms began to form a nervous system and eventually a brain, the entire collection of cells making up an organism became ultimately subservient to a centralized form of executive power — just as humans have become subservient to the executive authority of the state or government (along with various social pressures of conformity).

And just as the fates of each cell in a multi-cellular organism became predetermined and predictable by its particular set of available resources and the specific information it received from neighboring cells, so our own lives are becoming increasingly predetermined and predictable by the socioeconomic resources made available to us and the information we’re given, which constitutes our mass-culture.  We are slowly morphing from individual brains into something akin to individual neurons within a global brain of mass-consciousness and mass-culture, having our critical thinking skills and creative aspirations exchanged for rehearsed responses and docile expectations that maintain the status quo and continually transfer our autonomy to an oligarchic power structure.

We might wonder if this shift has been inevitable, possibly being yet another example of a “fractal pattern” recapitulated in sociological form out of the very same free-floating rationales that biological evolution has been making use of for eons.  In any case, it’s critically important that we become aware of this change, so that we can actively achieve and effectively maintain the liberties and the level of individual autonomy that we so highly cherish.  We ought to be thinking about the ways we can remain cognizant of, and critical toward, our culture and its products; how we can reconcile or transform technological rationality and progress with a future world composed of truly liberated individuals; and how to transform our corporatocratic capitalist society into one based on a mixed economy with a social safety net that even the wealthiest citizens would be content to live under, so as to maximize the actual creative freedom people have once their basic existential needs have been met.

Will unchecked capitalism, social media, mass media, and the false needs and epistemological bubbles they’re forming lead to our undoing and destruction?  Or will we find a way to rise above this technologically induced setback and take advantage of the opportunities it has afforded us, to make the world and our technology truly compatible with our human psychology?  Whatever the future holds for us, it is undoubtedly going to depend on how many of us begin to think critically about how we can restructure our educational system and the way we disseminate information, how we can re-prioritize and better reflect on what our personal goals ought to be, and how we ought to identify ourselves as free and unique individuals.

Irrational Man: An Analysis (Part 3, Chapter 9: Heidegger)

In my previous post in this series on William Barrett’s Irrational Man, I explored some of Nietzsche’s philosophy in more detail.  Now we’ll be taking a look at the work of Martin Heidegger.

Heidegger was perhaps the most complex of the existentialist philosophers: his work is interpreted in a number of different ways, and he develops a great deal of terminology and concepts that can be difficult to grasp.  There’s also a fundamental limit to how fully one can engage with the concepts he uses without fluency in German, given what is otherwise lost in translation to English.

Phenomenology, or the study of the structures of our conscious experience, was a big part of Heidegger’s work and strongly influenced his distinction between what he called “calculative thinking” and “meditative thinking”; Barrett alludes to this distinction when he first mentions Heidegger’s feelings on thought and reason.  Heidegger says:

“Thinking only begins at the point where we have come to know that Reason, glorified for centuries, is the most obstinate adversary of thinking.”

Now at first one might be confused by the assertion that reason is somehow against thinking, but once we understand that Heidegger is only referring to one of the two types of thinking outlined above, we can begin to make sense of what he’s saying.  Calculative thinking is what he has in mind here: the kind of thinking we use all the time for planning and investigating in order to accomplish some specific purpose and achieve our goals.  Meditative thinking, on the other hand, involves opening up one’s mind so that it isn’t consumed by a single perspective on an idea; it requires us not to limit our thinking to only one category of ideas, and to at least temporarily free our thinking from the kind of unity that makes everything seemingly fit together.

Meaning is more or less hidden behind calculative thinking, and thus meditative thinking is needed to help uncover that meaning.  Calculative thinking also involves a lot of external input from society and the cultural or technological constructs that our lives are embedded in, whereas meditative thinking is internal, prioritizing the self over the collective and creating a self-derived form of truth and meaning rather than one that is externally imposed on us.  By thinking about reason’s relation to meditative thinking in particular, we can understand why Heidegger would treat reason as an adversary to what he sees as a far more valuable form of “thinking”.

It would be mistaken, however, to interpret this as making Heidegger some kind of irrationalist, because he still values thinking even as he redefines what kinds of thinking exist:

“Heidegger is not a rationalist, because reason operates by means of concepts, mental representations, and our existence eludes these.  But he is not an irrationalist either.  Irrationalism holds that feeling, or will, or instinct are more valuable and indeed more truthful than reason-as in fact, from the point of view of life itself, they are.  But irrationalism surrenders the field of thinking to rationalism and thereby secretly comes to share the assumptions of its enemy.  What is needed is a more fundamental kind of thinking that will cut under both opposites.”

Heidegger’s intention seems to involve carving out a space for thinking that connects to the most basic elements of our experience, and which connects to the grounding for all of our experience.  He thinks that we’ve almost completely submerged our lives in reason or calculative thinking and that this has caused us to lose our connection with what he calls Being, and this is analogous to Kierkegaard’s claim of reason having estranged us from faith:

“Both Kierkegaard and Nietzsche point up a profound dissociation, or split, that has taken place in the being of Western man, which is basically the conflict of reason with the whole man.  According to Kierkegaard, reason threatens to swallow up faith; Western man now stands at a crossroads forced to choose either to be religious or to fall into despair…Now, the estrangement from Being itself is Heidegger’s central theme.”

Where or when did this estrangement from our roots first come to be?  Many may be tempted to say it was coincident with the onset of modernity, but perhaps it happened long before that:

“Granted that modern man has torn himself up by his roots, might not the cause of this lie farther back in his past than he thinks?  Might it not, in fact, lie in the way in which he thinks about the most fundamental of all things, Being itself?”

At this point, it’s worth looking at Heidegger’s overall view of man and seeing how it might be colored by his own upbringing in southern Germany:

“The picture of man that emerges from Heidegger’s pages is of an earth-bound, time-bound, radically finite creature-precisely the image of man we should expect from a peasant, in this case a peasant who has the whole history of Western philosophy at his fingertips.”

So Heidegger was in an interesting position: he wanted to go back to one of the most basic questions ever asked in philosophy, but to do so with knowledge of all the philosophy that had been developed since, which hopefully gave him a useful outline of what went wrong in that development.

1. Being

“He has never ceased from that single task, the “repetition” of the problem of Being: the standing face to face with Being as did the earliest Greeks.  And on the very first pages of Being and Time he tells us that this task involves nothing less than the destruction of the whole history of Western ontology-that is, of the way the West has thought about Being.”

Sometimes the only way to make progress in philosophy is through a paradigm shift of some kind, where the most basic assumptions underlying our theories of the world and our existence are called into question; and we may end up having to completely discard what we thought we knew and start over, in order to follow a new path and discover a new way of looking at the problem we first began with.

For Heidegger, the problem all comes down to how we look at beings versus being (or Being), and in particular how little attention we’ve given the latter:

“Now, it is Heidegger’s contention that the whole history of Western thought has shown an exclusive preoccupation with the first member of these pairs, with the thing-which-is, and has let the second, the to-be of what is, fall into oblivion.  Thus that part of philosophy which is supposed to deal with Being is traditionally called ontology-the science of the thing-which-is-and not einai-logy, which would be the study of the to-be of Being as opposed to beings…What it means is nothing less than this: that from the beginning the thought of Western man has been bound to things, to objects.”

And it’s the priority we’ve given to positing and categorizing various types of beings that has distracted us from really contemplating the grounding for any and all beings, Being itself.  It has more or less been taken for granted, or seen as too tenuous or insubstantial to give any attention to.  We can certainly attribute our lack of attention toward Being, at least in part, to how we’ve understood it as contingent on the existence of particular objects:

“Once Being has been understood solely in terms of beings, things, it becomes the most general and empty of concepts: ‘The first object of the understanding,’ says St. Thomas Aquinas, ‘that which the intellect conceives when it conceives of anything.’ “

Aquinas seems to have pointed out the obvious here: we can’t easily think of the concept of Being without thinking of beings, and once we’ve thought of beings, we tend to think of the fact of their being as not adding anything useful to our conception of the beings themselves (similar to Kant’s claim about existence being a superfluous or useless concept).  Heidegger wants to turn this common assumption on its head:

“Being is not an empty abstraction but something in which all of us are immersed up to our necks, and indeed over our heads.  We all understand the meaning in ordinary life of the word ‘is,’ though we are not called upon to give a conceptual explanation of it.  Our ordinary human life moves within a preconceptual understanding of Being, and it is this everyday understanding of Being in which we live, move, and have our Being that Heidegger wants to get at as a philosopher.”

Since we live our lives with an implicit understanding of Being and of its permeating everything in our experience, it seems reasonable that we can turn our attention toward it and build up a more explicit conceptualization or understanding of it.  To do this carefully, without building in too many abstract assumptions or uncertain inferences, we need to make special use of phenomenology.  Heidegger turns to Edmund Husserl’s work to get this project off the ground.

2. Phenomenology and Human Existence

“Instead of making intellectual speculations about the whole of reality, philosophy must turn, Husserl declared, to a pure description of what is…Heidegger accepts Husserl’s definition of phenomenology: he will attempt to describe, he says, and without any obscuring preconceptions, what human existence is.”

And this means that we need to try and analyze our experience without importing the plethora of assumptions and ideas that we’ve been taught about our existence.  We have to try and perform a kind of phenomenological reduction, where we suspend our judgment about the way the world works, which includes leaving aside (at least temporarily) most if not all of science and its various theories about how reality is structured.

Heidegger also makes many references to language in his work and, perhaps sharing a common thread with Wittgenstein’s views, dissects language to get at what we really mean by certain words, revealing important nuances that are taken for granted and showing how our words and concepts constrain our thinking to some degree:

“Heidegger’s perpetual digging at words to get at their hidden nuggets of meaning is one of his most exciting facets…(The thing for itself) will reveal itself to us, he says, only if we do not attempt to coerce it into one of our ready-made conceptual strait jackets.”

Truth is another concept that needs to be reformulated for Heidegger, and he first points out how the Greek word for truth, aletheia, means “un-hiddenness” or “revelation”, and so he believes that truth lies in uncovering what has been hidden from us.  This goes against our usual view of what is meant by “truth”, where we most often think of it as a status ascribed to propositions that correspond to actual facts about the world.  Since propositions can’t exist without minds, modern conceptions of truth are limited in a number of ways, which Heidegger wants to find a way around:

“…truth is therefore, in modern usage, to be found in the mind when it has a correct judgment about what is the case.  The trouble with this view is that it cannot take account of other manifestations of truth.  For example, we speak of the “truth” of a work of art.  A work of art in which we find truth may actually have in it no propositions that are true in this literal sense.  The truth of a work of art is in its being a revelation, but that revelation does not consist in a statement or group of statements that are intellectually correct.  The momentous assertion that Heidegger makes is that truth does not reside primarily in the intellect, but that, on the contrary, intellectual truth is in fact a derivative of a more basic sense of truth.”

This more basic sense of truth will be explored more later on, but the important takeaway is that truth for Heidegger seems to involve the structure of our experience and of the beings within that experience; and it lends itself toward a kind of epistemological relativism, depending on how a human being interprets the world they’re interacting with.

On the other hand, if we think about truth as a matter of deciding what we should trust or doubt in terms of what we’re experiencing, we may wind up in a position of solipsism similar to Descartes’, where we come to doubt the “external world” altogether:

“…Descartes’ cogito ergo sum moment, is the point at which modern philosophy, and with it the modern epoch, begins: man is locked up in his own ego.  Outside him is the doubtful world of things, which his science has not taught him are really not the least like their familiar appearances.  Descartes got the external world back through a belief in God, who in his goodness would not deceive us into believing that this external world existed if it really did not.  But the ghost of subjectivism (and solipsism too) is there and haunts the whole of modern philosophy.”

While Descartes pulled himself out of solipsism by adding the assumption of another (benevolent) mind responsible for any and all experience, I think there’s a far easier and less ad hoc solution to this problem.  First, we wouldn’t be able to tell the difference between an experience of an artificial or illusory external reality and one that was truly real independent of our conscious experience, which means that solipsism is entirely indeterminable and thus almost pointless to worry about.  Second, we don’t feel that we are the sole authors of every aspect of our experience anyway; if we were, there would be nothing unpredictable or uncertain about any part of our experience, and we’d know everything that will ever happen before it happens, with not so much as an inkling of surprise or confusion.

This is most certainly not the case, which means we can be confident that we are not the sole authors of our experience, and therefore there really is a reality or a causal structure that is independent of our own will and expectations.  So it’s conceptually simpler to define reality in the broadest sense of the term: as nothing more or less than what we experience at any point in time.  And because we experience what appear to be other beings just like ourselves, each appearing to have their own experiences of reality, it’s more natural and intuitive to treat them as such, regardless of whether they’re really figments of our imagination or products of some unknown higher-level reality like The Matrix.

Heidegger does something similar here, where he views our being in the “external” world as essential to who we are, and thus he doesn’t think we can separate ourselves from the world, like Descartes himself did as a result of his methodological skepticism:

“Heidegger destroys the Cartesian picture at one blow: what characterizes man essentially, he says, is that he is Being-in-the-world.  Leibniz had said that the monad has no windows; and Heidegger’s reply is that man does not look out upon an external world through windows, from the isolation of his ego: he is already out-of-doors.  He is in the world because, existing, he is involved in it totally.”

This is fundamentally different from the traditional conception of an ego or self that may or may not exist in a world; within this conception of human beings, one can’t treat the ego as separable from the world it’s embedded in.  One way to look at this is to imagine that we were separated from the world forever, and then ask ourselves whether the mode of living that remains (if any) in the absence of any world is still fundamentally human.  It’s an interesting thought experiment to imagine ourselves somehow as a disembodied mind “residing” within an infinite void of nothingness, but if this were really for eternity, and not simply for a short while after which we knew we could return to the world we left behind, then we’d have no more interactions or relations with any objects or other beings, and no mode of living at all.  I think it’s perfectly reasonable to conclude in this case that we would have lost something we need in order to truly be a human being.  Anyone who fails to realize this has simply failed to grasp the conditions called for in the thought experiment.

Heidegger drives this point further by describing existence, or more appropriately our Being, as analogous to a non-localized field in theoretical physics:

“Existence itself, according to Heidegger, means to stand outside oneself, to be beyond oneself.  My Being is not something that takes place inside my skin (or inside an immaterial substance inside that skin); my Being, rather, is spread over a field or region which is the world of its care and concern.  Heidegger’s theory of man (and of Being) might be called the Field Theory of Man (or the Field Theory of Being) in analogy with Einstein’s Field Theory of Matter, provided we take this purely as an analogy; for Heidegger would hold it a spurious and inauthentic way to philosophize to derive one’s philosophic conclusions from the highly abstract theories of physics.  But in the way that Einstein took matter to be a field (a magnetic field, say)- in opposition to the Newtonian conception of a body as existing inside its surface boundaries-so Heidegger takes man to be a field or region of Being…Heidegger calls this field of Being Dasein.  Dasein (which in German means Being-there) is his name for man.”

So we can see our state of Being (which Heidegger calls Dasein) as one that’s spread out over a number of different interactions and relations we have with other beings in the world.  In some sense we can think of the sum total of all the actions, goals, and beings that we concern ourselves with as constituting our entire field of Being.  And each being, especially each human being, shares this general property of Being even though each of those beings will be embedded within their own particular region of Being with their own set of concerns (even if these sets overlap in some ways).

What I find really fascinating is Heidegger’s dissolution of the subject-object distinction:

“That Heidegger can say everything he wants to say about human existence without using either “man” or “consciousness” means that the gulf between subject and object, or between mind and body, that has been dug by modern philosophy need not exist if we do not make it.”

I mentioned Wittgenstein earlier because I think Heidegger’s philosophy has a few parallels with his.  For instance, Wittgenstein treats language and meaning as inherently connected to how we use words and concepts, and thus as dependent on the contextual relations between how or what we communicate and what our goals are, as opposed to treating words like categorical objects with strict definitions and well-defined semantic boundaries.  For Wittgenstein, language can’t be separated from our use of it in the world even though we often treat it as if it could be (and assuming it can is what often creates problems in philosophy), and this is similar to how Heidegger describes human beings as inherently inseparable from our extension in the world, as manifested in our many relations and concerns within that world.

Heidegger’s reference to Being as some kind of field or region of concern can be illustrated well by considering how a child comes to learn their own name (or to respond to it anyway):

“He comes promptly enough at being called by name; but if asked to point out the person to whom the name belongs, he is just as likely to point to Mommy or Daddy as to himself-to the frustration of both eager parents.  Some months later, asked the same question, the child will point to himself.  But before he has reached that stage, he has heard his name as naming a field or region of Being with which he is concerned, and to which he responds, whether the call is to come to food, to mother, or whatever.  And the child is right.  His name is not the name of an existence that takes place within the envelope of his skin: that is merely the awfully abstract social convention that has imposed itself not only on his parents but on the history of philosophy.  The basic meaning the child’s name has for him does not disappear as he grows older; it only becomes covered over by the more abstract social convention.  He secretly hears his own name called whenever he hears any region of Being named with which he is vitally involved.”

This is certainly an interesting interpretation of this facet of child development.  We might presume that the social conventions which eventually attach our name to our body arise out of a number of pragmatic considerations, such as wanting a label for each conscious agent that makes decisions and behaves in ways that affect other conscious agents.  And because our regions of Being overlap in some ways but not in others (i.e. some of my concerns, though not all of them, are your concerns too), it also makes sense to differentiate between the different “sets of concerns” that comprise each human being we interact with.  But the way we tend to do this is by abstracting those sets of concerns, localizing them in time and space as “mine” and “yours” rather than as they truly exist in the world.

Nevertheless, we could (in principle anyway) use an entirely different convention where a person’s name is treated as a representation of a non-localized set of causal structures and concerns, almost as if their body extended in some sense to include all of this, where each concern or facet of their life is like a dynamic appendage emanating out from their most concentrated region of Being (the body itself).  Doing so would markedly change the way we see one another, the way we see objects, etc.

Perhaps our state of consciousness can mislead us into forgetting about the non-locality of our field of Being, because our consciousness is spatially constrained to one locus of experience; and perhaps this is the reason why philosophers going all the way back to the Greeks have linked existence directly with our propensity to reflect in solitude.  But this solitary reflection also misleads us into forgetting the average “everydayness” of our existence, where we are intertwined with our community like a sheep directed by the rest of the herd:

“…none of us is a private Self confronting a world of external objects.  None of us is yet even a Self.  We are each simply one among many; a name among the names of our schoolfellows, our fellow citizens, our community.  This everyday public quality of our existence Heidegger calls “the One.”  The One is the impersonal and public creature whom each of us is even before he is an I, a real I.  One has such-and-such a position in life, one is expected to behave in such-and-such a manner, one does this, one does not do that, etc.  We exist thus in a state of “fallen-ness”, according to Heidegger, in the sense that we are as yet below the level of existence to which it is possible for us to rise.  So long as we remain in the womb of this externalized and public existence, we are spared the terror and the dignity of becoming a Self.”

Heidegger’s concept of fallen-ness illustrates our primary mode of living: sometimes we think of ourselves as individuals striving toward our personal, self-authored goals, but most of the time we’re succumbing to society’s expectations of who we are, what we should become, and how we ought to judge the value of our own lives.  We shouldn’t be surprised by this, as we’ve evolved as a social species, which means our inclination toward a herd mentality is in some sense instinctual and therefore the path of least resistance.  But because of our complex cognitive and behavioral versatility, we can also consciously rise above this collective limitation; and even when we don’t strive to, our position within the safety of the collective is sometimes torn away from us by the chaos that rains down on us from life’s many contingencies:

“But…death and anxiety (and such) intrude upon this fallen state, destroy our sheltered position of simply being one among many, and reveal to us our own existence as fearfully and irremediably our own.  Because it is less fearful to be “the One” than to be a Self, the modern world has wonderfully multiplied all the devices of self-evasion.”

It is only by rising out of this state of fallen-ness, that is, by our rescuing some measure of true individuality, that Heidegger thinks we can live authentically.  But in any case:

“Whether it be fallen or risen, inauthentic or authentic, counterfeit copy or genuine original, human existence is marked by three general traits: 1) mood or feeling; 2) understanding; 3) speech.  Heidegger calls these existentialia and intends them as basic categories of existence (as opposed to more common ones like quantity, quality, space, time, etc.).”

Heidegger’s use of these traits in describing our existence shouldn’t be confused with a set of internal mental states; rather, we need to include them within a conception of Being, or Dasein, as a field that permeates the totality of our existence.  So when we consider the first trait, mood or feeling, we should think of each human being as being a mood rather than simply having a mood.  According to Heidegger, it is through moods that we feel a sense of belonging to the world; if we didn’t have any mood at all, we wouldn’t find ourselves in a world at all.

It’s also important to understand that he doesn’t see moods as some kind of state of mind, as we would typically take them to be, but rather that all states of mind presuppose this sense of belonging to a world in the first place; they presuppose a mood that makes any of those states of mind possible.  And as Barrett mentions, not all moods are equally important for Heidegger:

“The fundamental mood, according to Heidegger, is anxiety (Angst); he does not choose this as primary out of any morbidity of temperament, however, but simply because in anxiety this here-and-now of our existence arises before us in all its precarious and porous contingency.”

It seems that he finds anxiety to be fundamental in part because it is a mood that opens us up to see Being for what it really is: a field of existence centered around the individual rather than society and its norms and expectations; and this perspective largely results from the fact that when we’re in anxiety all practical significance melts away and the world as it was no longer seems to be relevant.  Things that we took for granted now become salient and there’s a kind of breakdown in our sense of everyday familiarity.  If what used to be significant and insignificant reverse roles in this mood, then we can no longer misinterpret ourselves as some kind of entity within the world (the world of “Them”; the world of externally imposed identity and essence); instead, we finally see ourselves as within, or identical with, our own world.

Heidegger also seems to see anxiety as always there, even though it may be covered up (so to speak).  Perhaps it’s hiding most of the time because the fear of being a true Self, and of facing our individuality with the finitude, contingency, and responsibility that go along with it, is often too much to bear, so we blind ourselves by shifting into a number of other, less fundamental moods that enhance the significance of the usual day-to-day world, even though the possibility for Being to “reveal itself” to us remains just below the surface.

If we think of moods as a way of our feeling a sense of belonging within a world, where they effectively constitute the total range of ways that things can have significance for us, then we can see how our moods ultimately affect and constrain our view of the possibilities that our world affords us.  And this is a good segue to consider Heidegger’s second trait of Dasein or human Being, namely that of understanding.

“The “understanding” Heidegger refers to here is not abstract or theoretical; it is the understanding of Being in which our existence is rooted, and without which we could not make propositions or theories that can claim to be “true”.  Whence comes this understanding?  It is the understanding that I have by virtue of being rooted in existence.”

I think Heidegger’s concept of understanding (verstehen) is more or less an intuitive sense of knowing how to manipulate the world in order to make use of it for some goal or other.  By having certain goals, we see the world and the entities in that world colored by the context of those goals.  What we desire then will no doubt change the way the world appears to us in terms of what functionality we can make the most use of, what tools are available to us, and whether we see something as a tool or not.  And even though our moods may structure the space of possibilities that we see the world present to us, it is our realizing what these possibilities are that seems to underlie this form of understanding that Heidegger’s referring to.

We also have no choice but to use our past experience to interpret the world, to determine what we find significant and meaningful, and to further drive the evolution of our understanding; and of course this process feeds back on itself, changing our interpretation of the world, which changes our dynamic understanding yet again.  And here we might make the distinction between an understanding of the possible uses for various objects and processes, and the specific interpretation of which possibilities are relevant to the task at hand, thereby making use of that understanding.

If I’m sitting at my desk with a cup of coffee and a stack of papers in front of me that need to be sorted, my understanding of these entities affords me a number of possibilities depending on my specific mode of Being, or what my current concerns are at the present moment.  If I’m tired or thirsty, or what have you, I may want to drink from that cup of coffee, but if the window is open and the wind starts to blow the stack of papers off my desk, I may decide to use my cup of coffee as a paperweight instead of a vessel to drink from.  And if a fly begins buzzing about while I’m sorting my papers and starts to get on my nerves, distracting me from the task at hand, I may decide to temporarily roll up my stack of papers and use it as a fly swatter.  My understanding of all of these objects in terms of their possibilities allows me to interpret an object as a particular possibility that fits within the context of my immediate wants and needs.

I like to think of understanding and interpretation, whether in the traditional sense or in Heidegger’s terms, as ultimately based on our propensity to make predictions about the world and its causal structure.  As a proponent of the predictive processing framework of brain function, I see brains ultimately as prediction generators, organizing information in order to predict the causal structure of the world we interact with; and this is done at many different levels of abstraction, so we can think of ourselves as making use of many smaller-scale modules of understanding as well as various larger-scale assemblies of those individual modules.  In some cases the larger-scale assemblies constrain which smaller-scale modules can be used, and in other cases the smaller-scale modules are prioritized, which constrains or directs the shape of the larger-scale assemblies.

Another way to say this is that our brains make use of a number of lower-level and higher-level predictions (any of which may change over time based on new experiences or desires), and depending on the context we find ourselves in, one level may have more or less influence on another.  My lowest-level predictions may concern what each “pixel” in my visual field is going to be from moment to moment; a slightly higher-level prediction may concern what a specific coffee cup looks like, its shape, weight, etc.; higher still, what coffee cups in general look like, their range of shapes, weights, etc.; and eventually we get to levels of prediction that concern what we expect an object can be used for.
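
To make this layered picture a little more concrete, here is a toy sketch in Python (purely illustrative, and not any particular model from the predictive processing literature; the class and variable names are just placeholders of my own) of a hierarchy in which each level tries to predict the signal arriving from the level below it and updates itself on whatever prediction error remains:

import numpy as np

# Each level holds an expectation about the signal coming from the level below
# and nudges that expectation toward whatever actually arrives.
class PredictiveLevel:
    def __init__(self, size, learning_rate=0.1):
        self.expectation = np.zeros(size)      # this level's current prediction
        self.learning_rate = learning_rate

    def update(self, incoming):
        error = incoming - self.expectation    # prediction error ("surprise")
        self.expectation += self.learning_rate * error
        return error                           # the residual passed up the hierarchy

# Three illustrative levels: roughly "pixels", "this particular cup", "cups in general".
levels = [PredictiveLevel(size=4) for _ in range(3)]

sensory_input = np.array([1.0, 0.5, 0.0, 0.25])    # a made-up sensory snapshot
signal = sensory_input
for level in levels:
    signal = level.update(signal)    # each level explains what it can; the leftover
                                     # error propagates upward to the next level

In a sketch like this, “context” would amount to which levels’ expectations dominate at a given moment, echoing the point above that one level may have more or less influence on another.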

All of these predictions taken as a whole constitute our understanding of the world, and the specific sets of predictions that come into play at any point in time, depending on our immediate concerns and where our attention is directed, constitute our interpretation of what we’re presently experiencing.  Barrett points to the fact that what we consider to be truth in the most primitive sense is this intuitive sense of understanding:

“Truth and Being are thus inseparable, given always together, in the simple sense that a world with things in it opens up around man the moment he exists.  Most of the time, however, man does not let himself see what really happens in seeing.”

This process of the world opening itself up to us, such that we have an immediate sense of familiarity with it, is natural and automatic, and so unless we consciously pull ourselves out of it to reflect on the situation from the outside, we’re just going to take our understanding and interpretation for granted.  But regardless of whether we reflect on it or not, this process of understanding and interpretation is fundamental to how we exist in the world, and thus it is fundamental to our Being in the world.

Speech is the last trait of Dasein that Heidegger mentions, and it is intimately connected with the other two traits, mood and understanding.  So what exactly is he referring to by speech?

“Speech:  Language, for Heidegger, is not primarily a system of sounds or of marks on paper symbolizing those sounds.  Sounds and marks upon paper can become language only because man, insofar as he exists, stands within language.”

This is extremely reminiscent of Wittgenstein once again, since Wittgenstein views language in terms of how it permeates our day-to-day lives within a community or social group of some kind.  Language isn’t some discrete set of symbols with fixed meanings, but rather a much more dynamic, context-dependent communication tool.  We primarily use language to accomplish some specific goal or other in our day-to-day lives, and this is a fundamental aspect of how we exist as human beings.  Because language is attached to its use, the meaning of any word or utterance (if any) is derived from its use as well.  Sounds and marks on paper are meaningless unless we first ascribe a meaning to them based on how they’ve been used in the past and how they’re being used in the present.

And as soon as we pick up language during childhood development, it becomes automatically associated with our thought processes: we begin to think in our native language, and both real perceptions and figments of our imagination are instantly attached to some linguistic label in our heads.  If I showed you a picture of an elephant, the word elephant would no doubt enter your stream of consciousness, even without my saying the word or asking you what animal you saw in the picture.  Whether our thinking linguistically comes about as a result of having learned a specific language, or is an innate feature of how our brains function, language permeates our thought as well as our interactions with others in the world.

While we tend to think of language as involving words, sounds, and so forth, Heidegger has an interesting take on what else falls under the umbrella of language:

“Two people are talking together.  They understand each other, and they fall silent-a long silence.  This silence is language; it may speak more eloquently than any words.  In their mood they are attuned to each other; they may even reach down into that understanding which, as we have seen above, lies below the level of articulation.  The three-mood, understanding, and speech (a speech here that is silence)-thus interweave and are one.”

I’m not sure if Heidegger is only referring to silence itself here or if he’s also including body language, which often continues even after we’ve stopped talking and which sometimes speaks better than verbal language ever could.  In silence, we often get a chance to simply read another person in a much more basic and instinctual way, ascertaining their mood, and allowing any language that may have recently transpired to be interpreted differently than if a long silence had never occurred.  Heidegger refers to what happens in silence as an attunement between fellow human beings:

“…Nor is this silence merely a gap in our chatter; it is, rather, the primordial attunement of one existent to another, out of which all language-as sounds, marks, and counters-comes.  It is only because man is capable of such silence that he is capable of authentic speech.  If he ceases to be rooted in that silence all his talk becomes chatter.”

We may, however, also want to consider silence when it is accompanied by action, since our actions speak louder than words and we gain information primarily this way anyway, by simply watching another do something in the world.  And language can only work within a context of mutual understanding, which necessarily involves action (at some point), so perhaps we could say that action at least partially constitutes the “silence” that Heidegger refers to.

Heidegger’s entire “Field Theory” of Being relies on context, and Barrett mentions this as well:

“…we might just as well call it a contextual theory of Being.  Being is the context in which all beings come to light-and this means those beings as well that are sounds or marks on paper…Men exist “within language” prior to their uttering sounds because they exist within a mutual context of understanding, which in the end is nothing but Being itself.”

I think we could interpret this as saying that our interacting with one another and with the world presupposes a context of meaning, of what matters to us, already in play; without it, those interactions wouldn’t be possible in the first place.  In other words, whether we’re talking to somebody or simply trying to cook our breakfast, all of our actions presuppose some background of understanding and a sense of what’s significant to us.

3. Death, Anxiety, Finitude

Heidegger views the concept of death as an important one, especially as it relates to a proper understanding of the concept of Being, and he also sees the most common understanding of death as fundamentally misguided:

“The authentic meaning of death-“I am to die”-is not as an external and public fact within the world, but as an internal possibility of my own Being.  Nor is it a possibility like a point at the end of a road, which I will in time reach.  So long as I think in this way, I still hold death at a distance outside myself.  The point is that I may die at any moment, and therefore death is my possibility now…Hence, death is the most personal and intimate of possibilities, since it is what I must suffer for myself: nobody else can die for me.”

I definitely see the merit in viewing death as an imminent possibility in order to fully grasp the implications it has for how we should be living our lives.  But most people tend to consider death in very abstract terms, thinking about it only in the periphery as something that will happen “some day” in the future; and I think we owe this abstraction and partial repression of death to our own fear of it.  In many other cases, this fear of death manifests itself in supernatural beliefs that posit life after death, the resurrection of the dead, immortality, and other forms of magic; and importantly, all of these psychologically motivated tactics prevent one from actually accepting the truth about death as an integral part of our existence as human beings.  By denying the truth about death, we deny the true value of the life we actually have.  On the other hand, by accepting it as an extremely personal and intimate attribute of ourselves, we can live more authentically.

We can also see that by viewing death in more personal terms, we are pulled out of the world of “Them”, the collective world of social norms and externalized identities, and are better able to connect with our sense of being a Self:

“Only by taking my death into myself, according to Heidegger, does an authentic existence become possible for me.  Touched by this interior angel of death, I cease to be the impersonal and social One among many…and I am free to become myself.”

Thinking about death may make us uncomfortable, but if I know I’m going to die and could die at any moment, then shouldn’t I prioritize my personal endeavors to reflect this radical contingency?  It’s hard to argue with that conclusion, but even so, it’s understandably easy to get lost in our mindless day-to-day routines and superficial concerns and simply lose sight of what we ought to be doing with our limited time.  It’s important to break out of this pattern, or at least to strive to, and in doing so we can begin to center our lives around truly personal goals and projects:

“Though terrifying, the taking of death into ourselves is also liberating: It frees us from servitude to the petty cares that threaten to engulf our daily life and thereby opens us to the essential projects by which we can make our lives personally and significantly our own.  Heidegger calls this the condition of “freedom-toward-death” or “resoluteness.”

The fear of death is also unique and telling in the sense that it isn’t really directed at any object but rather it is directed at Nothingness itself.  We fear that our lives will come to an end and that we’ll one day cease to be.  Heidegger seems to refer to this disposition as anxiety rather than fear, since it isn’t really the fear of anything but rather the fear of nothing at all; we just treat this nothing as if it were an object:

“Anxiety is not fear, being afraid of this or that definite object, but the uncanny feeling of being afraid of nothing at all.  It is precisely Nothingness that makes itself present and felt as the object of our dread.”

Accordingly, the concept of Nothingness is integral to the concept of Being, since it’s interwoven in our existence:

“In Heidegger Nothingness is a presence within our own Being, always there, in the inner quaking that goes on beneath the calm surface of our preoccupation with things.  Anxiety before Nothingness has many modalities and guises: now trembling and creative, now panicky and destructive; but always it is as inseparable from ourselves as our own breathing because anxiety is our existence itself in its radical insecurity.  In anxiety we both are and are not, at one and the same time, and this is our dread.”

What’s interesting about this perspective is the apparent asymmetry between existence and non-existence, at least insofar as it relates to human beings.  Non-existence becomes a principal concern for us as a result of our kind of existence, but non-existence itself carries with it no concerns at all.  In other words, non-existence doesn’t refer to anything at all, whereas (human) existence refers to everything as well as to nothing (non-existence).  And it seems to me that anxiety, and our relation to Nothingness, are entirely dependent on our inherent capacity for imagination, by which we simulate possibilities; and if that imagination is capable of rendering the possibility of our own existence coming to an end, then this possibility may forcefully reorganize the relative significance of all the other products of our imagination.

As confusing as it may sound, death is the possibility of ending all possibilities for any extant being.  If imagining such a possibility were not enough to fundamentally change one’s perspective on one’s own life, then I think combining it with the knowledge of its inevitability, the fact that this possibility is also a certainty, ought to.  Another interesting feature of our existence is that death isn’t the only possibility that relates to Nothingness, for any time we imagine a possibility that has not yet been realized, or even conceive of an actuality that has been realized, we are making reference to that which is not; we are referring to what we are not, to what some thing or other is not, and ultimately to that which does not exist (at a particular time and place).  I only know how to identify myself, or any object in the world for that matter, by distinguishing it from what it is not.

“Man is finite because the “not”-negation-penetrates the very core of his existence.  And whence is this “not” derived?  From Being itself.  Man is finite because he lives and moves within a finite understanding of Being.”

As we can see, Barrett describes how, within Heidegger’s philosophy, negation is treated as something we live and move within, and this makes sense when we consider what identity itself depends on.  And the mention of finitude is interesting because it’s not simply a limitation in the sense of what we aren’t capable of (e.g. omnipotence, immortality, perfection, etc.); rather, our mode of existence entails making discriminations between what is and what isn’t, between what is possible and what is not, between what is actual and what is not.

4. Time and Temporality; History

The last section on Heidegger concerns the nature of time itself, and it begins by pointing out the relation between negation and our experience of time:

“Our finitude discloses itself essentially in time.  In existing…we stand outside ourselves at once open to Being and in the open clearing of Being; and this happens temporally as well as spatially.  Man, Heidegger says, is a creature of distance: he is perpetually beyond himself, his existence at every moment opening out toward the future.  The future is the not-yet, and the past is the no-longer; and these two negatives-the not-yet and the no-longer-penetrate his existence.  They are his finitude in its temporal manifestation.”

I suppose we could call this projection towards the future the teleological aspect of our Being.  In order to do anything within our existence, we must have a goal or a number of them, and these goals refer to possible future states of existence that can only be confirmed or disconfirmed once our growing past subsumes them as actualities or lost opportunities.  We might even say that the present is somewhat of an illusion (though as far as I know Heidegger doesn’t make this claim) because in a sense we’re always living in the future and in the past, since the person we see ourselves as and the person we want to be are a manifestation of both; whereas the present itself seems to be nothing more than a fleeting moment that only references what’s in front or behind itself.

In addition to this, if we look at how our brain functions from a predictive coding perspective, we can see that our brain generates predictive models based on our past experiences, and the models it generates are pure attempts to predict the future; nowhere does the present come into play.  The present, as it is experienced by us, is nothing but a prediction of what is yet to come.  I think this is important to take into account when considering how time is experienced by us and how it should be incorporated into a concept of Being.

Heidegger’s concept of temporality is also tied to the concept of death that we explored earlier.  He sees death as necessary to put our temporal existence into a true human perspective:

“We really know time, says Heidegger, because we know we are going to die.  Without this passionate realization of our mortality, time would be simply a movement of the clock that we watch passively, calculating its advance-a movement devoid of human reasoning…Everything that makes up human existence has to be understood in the light of man’s temporality: of the not-yet, the no-longer, the here-and-now.”

It’s also interesting to consider what I’ve previously referred to as the illusion of persistent identity, an idea that’s been explored by a number of philosophers for centuries: we think of our past self as being the same person as our present or future self.  Now whether the present is actually an illusion or not is irrelevant to this point; the point is that we think of ourselves as always being there in our memories and at any moment of time.  The fact that we are all born into a society that gives us a discrete and fixed name adds to this illusion of our having an identity that is fixed as well.  But we are not the same person that we were ten years ago, let alone the same person we were as a five-year-old child.  We may share some memories with those previous selves but even those change over time and are often altered and reconstructed upon recall.  Our values, our ontology, our language, our primary goals in life, are all changing to varying degrees over time and we mustn’t lose sight of this fact either.  I think this “dynamic identity” is a fundamental aspect of our Being.

We might help to explain how we’re taken in by the illusion of a persistent identity by understanding once again that our identity is fundamentally projected toward the future.  Since that projection changes over time (as our goals change) and since it is a process that relies on a kind of psychological continuity, we might expect to simply focus on the projection itself rather than what that projection used to be.  Heidegger stresses the importance of the future in our concept of Being:

“Heidegger’s theory of time is novel, in that, unlike earlier philosophers with their “nows,” he gives priority to the future tense.  The future, according to him, is primary because it is the region toward which man projects and in which he defines his own being.  “Man never is, but always is to be,” to alter slightly the famous line of Pope.”

But, since we’re always relying on past experiences in order to determine our future projection, to give it a frame of reference with which we can evaluate that projected future, we end up viewing our trajectory in terms of human history:

“All these things derive their significance from a more basic fact: namely, that man is the being who, however dimly and half-consciously, always understands, and must understand, his own being historically.”

And human history, in a collective sense, also plays a role, since our view of ourselves and of the world is unavoidably influenced by the cultural transmission of ideas stemming from our recent past all the way to thousands of years ago.  A lot of that influence has emanated from the Greeks especially (as Barrett has mentioned throughout Irrational Man), and it’s had a substantial impact on our conceptualization of Being as well:

“By detaching the figure from the ground the object could be made to emerge into the daylight of human consciousness; but the sense of the ground, the environing background, could also be lost.  The figure comes into sharper focus, that is, but the ground recedes, becomes invisible, is forgotten.  The Greeks detached beings from the vast environing ground of Being.  This act of detachment was accompanied by a momentous shift in the meaning of truth for the Greeks, a shift which Heidegger pinpoints as taking place in a single passage in Plato’s Republic, the celebrated allegory of the cave.”

Through the advent of reason and forced abstraction, the Greeks effectively separated the object from the context it’s normally embedded within.  But if a true understanding of our Being in the world can only be known contextually, then we can see how Heidegger may see a crucial problem with this school of thought which has propagated itself in Western philosophy ever since Plato.  Even the concept of truth itself underwent a dramatic change as a result of the Greeks discovering and developing the process of reason:

“The quality of unhiddenness had been considered the mark of truth; but with Plato in that passage truth came to be defined, rather, as the correctness of an intellectual judgment.”

And this concept of unhiddenness seems to be a variation of “subjective truth” or “subjective understanding”, although it shouldn’t be confused with modern subjectivism; in this case the unhiddenness would naturally precipitate from whatever coherency and meaning is immediately revealed to us through our conscious experience.  I think that the redefining of truth had less of an impact on philosophy than Heidegger may have thought; the problem wasn’t the redefining of truth per se but rather the fact that a contextual understanding and subjective experience itself lost the level of importance they once had.  And I’m sure Heidegger would agree that this perceived loss of importance for both subjectivity and for a holistic understanding of human existence has led to a major shift in our view of the world, and a view that has likely resulted in some of the psychological pathologies sprouting up in our age of modernity.

Heidegger thought that allowing Being to reveal itself to us was analogous to an artist letting the truth reveal itself naturally:

“…the artist, as well as the spectator, must submit patiently and passively to the artistic process, that he must lie in wait for the image to produce itself; that he produces false notes as soon as he tries to force anything; that, in short, he must let the truth of his art happen to him?  All of these points are part of what Heidegger means by our letting Being be.  Letting it be, the artist lets it speak to him and through him; and so too the thinker must let it be thought.”

He seems to be saying that modern ways of thinking are too forceful in the sense of our always prioritizing the subject-object distinction in our way of life.  We’ve let the obsessive organization of our lives and the various beings within those lives drown out our appreciation of, and ability to connect to, Being itself.  I think that another way we could put this is to say that we’ve split ourselves off from the sense of self that has a strong connection to nature and this has disrupted our ability to exist in a mode that’s more in tune with our evolutionary psychology, a kind of harmonious path of least resistance.  We’ve become masters over manipulating our environment while simultaneously losing our innate connection to that environment.  There’s certainly a lesson to be learned here even if the problem is difficult to articulate.

•      •      •      •      •

This concludes William Barrett’s chapter on Martin Heidegger.  I’ve gotta say that I think Heidegger’s views are fascinating as they serve to illustrate a radically different way of viewing the world and human existence, and for those who take his philosophy seriously, his work forces us to re-evaluate much of the Western philosophical thought that we’ve been taking for granted, and which has constrained a lot of our thinking.  And I think it’s both refreshing and worthwhile to explore some of these novel ideas as they help us see life through an entirely new lens.  We need a shift in our frame of mind, a willingness to think outside the box, so that we have a better chance of finding new solutions to many of the problems we face in today’s world.  In the next post for this series on William Barrett’s Irrational Man, I’ll be looking at the philosophy of Jean-Paul Sartre.

It’s Time For Some Philosophical Investigations

Ludwig Wittgenstein’s Philosophical Investigations is a nice piece of work where he attempts to explain his views on language and the consequences of this view on various subjects like logic, semantics, cognition, and psychology.  I’ve mentioned some of his views very briefly in a couple of earlier posts, but I wanted to delve into his work in a little more depth here and comment on what strikes me as most interesting.  Lately, I’ve been looking back at some of the books I’ve read from various philosophers and have been wanting to revisit them so I can explore them in more detail and share how they connect to some of my own thoughts.  Alright…let’s begin.

Language, Meaning, & Their Probabilistic Attributes

He opens his Philosophical Investigations with a quote from St. Augustine’s Confessions that describes how a person learns a language.  St. Augustine believed that this process involved simply learning the names of objects (for example, by someone else pointing to the objects that are so named) and then stringing them together into sentences, and Wittgenstein points out that this is true to some trivial degree but it overlooks a much more fundamental relationship between language and the world.  For Wittgenstein, the meaning of words cannot simply be attached to an object like a name can.  The meaning of a word or concept has much more of a fuzzy boundary as it depends on a breadth of context or associations with other concepts.  He analogizes this plurality in the meanings of words with the relationship between members of a family.  While there may be some resemblance between different uses of a word or concept, we aren’t able to formulate a strict definition to fully describe this resemblance.

One problem then, especially within philosophy, is that many people assume that the meaning of a word or concept is fixed with sharp boundaries (just like the fixed structure of words themselves).  Wittgenstein wants to disabuse people of this false notion (much as Nietzsche tried to do before him) so that they can stop misusing language, as he believed that this misuse was the cause of many (if not all) of the major problems that had cropped up in philosophy over the centuries, particularly in metaphysics.  Since meaning is actually somewhat fluid and can’t be accounted for by any fixed structure, Wittgenstein thinks that any meaning that we can attach to these words is ultimately going to be determined by how those words are used.  Since he ties meaning with use, and since this use is something occurring in our social forms of life, it has an inextricably external character.  Thus, the only way to determine if someone else has a particular understanding of a word or concept is through their behavior, in response to or in association with the use of the word(s) in question.  This is especially important in the case of ambiguous sentences, which Wittgenstein explores to some degree.

Probabilistic Shared Understanding

Some of what Wittgenstein is trying to point out here are what I like to refer to as the inherently probabilistic attributes of language.  And it seems to me to be probabilistic for a few different reasons, beyond what Wittgenstein seems to address.  First, there is no guarantee that what one person means by a word or concept exactly matches the meaning from another person’s point of view, but at the very least there is almost always going to be some amount of semantic overlap (and possibly 100% in some cases) between the two individuals’ intended meanings, and so there is going to be some probability that the speaker and the listener do in fact share a complete understanding.  It seems reasonable to argue that simpler concepts will have a higher probability of complete semantic overlap whereas more complex concepts are more likely to leave a gap in that shared understanding.  And I think this is true even if we can’t actually calculate what any of these probabilities are.
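
To make this notion of partial semantic overlap a bit more concrete, here is a minimal sketch in Python (entirely my own toy illustration, not anything Wittgenstein proposes) that treats each person’s concept as a set of associated features and measures overlap as the fraction of features shared:

# Toy illustration: treat each person's concept as a set of associated
# features and estimate "semantic overlap" as the Jaccard similarity of
# the two sets.  The feature lists are invented for the example.

def semantic_overlap(concept_a, concept_b):
    """Fraction of associated features shared between two speakers' concepts."""
    a, b = set(concept_a), set(concept_b)
    return len(a & b) / len(a | b)

speaker  = {"two wheels", "pedals", "outdoor activity", "balance"}
listener = {"two wheels", "pedals", "outdoor activity", "only on Tuesdays"}

print(semantic_overlap(speaker, listener))  # 0.6, i.e. substantial but incomplete overlap

On this toy measure, the speaker and listener in the biking example discussed below would share most, but not all, of their understanding of the word.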

Now my use of the word meaning here differs from Wittgenstein’s because I am referring to something that is not exclusively shared by all parties involved and I am pointing to something that is internal (a subjective understanding of a word) rather than external (the shared use of a word).  But I think this move is necessary if we are to capture all of the attributes that people explicitly or implicitly refer to with a concept like meaning.  It seems better to compromise with Wittgenstein’s thinking and refer to the meaning of a word as a form of understanding that is intimately connected with its use, but which involves elements that are not exclusively external.

We can justify this version of meaning through an example.  If I help teach you how to ride a bike and explain that this activity is called biking or to bike, then we can use Wittgenstein’s conception of meaning and it will likely account for our shared understanding, and so I have no qualms about that, and I’d agree with Wittgenstein that this is perhaps the most important function of language.  But it may be the case that you have an understanding that biking is an activity that can only happen on Tuesdays, because that happened to be the day that I helped teach you how to ride a bike.  Though I never intended for you to understand biking in this way, there was no immediate way for me to infer that you had this misunderstanding on the day I was teaching you this.  I could only learn of this fact if you explicitly explained to me your understanding of the term with enough detail, or if I had asked you additional questions like whether or not you’d like to bike on a Wednesday (for example), with you answering “I don’t know how to do that as that doesn’t make any sense to me.”  Wittgenstein doesn’t account for this gap in understanding in his conception of meaning and I think this makes for a far less useful conception.

Now I think that Wittgenstein is still right in the sense that the only way to determine someone else’s understanding (or lack thereof) of a word is through their behavior, but due to chance as well as where our attention is directed at any given moment, we may never see the right kinds of behavior to rule out any number of possible misunderstandings, and so we’re apt to just assume that these misunderstandings don’t exist because language hides them to varying degrees.  But they can and in some cases do exist, and this is why I prefer a conception of meaning that takes these misunderstandings into account.  So I think it’s useful to see language as probabilistic in the sense that there is some probability of a complete shared understanding underlying the use of a word, and thus there is a converse probability of some degree of misunderstanding.

Language & Meaning as Probabilistic Associations Between Causal Relations

A second way of language being probabilistic is due to the fact that the unique meanings associated with any particular use of a word or concept as understood by an individual are derived from probabilistic associations between various inferred causal relations.  And I believe that this is the underlying cause of most of the problems that Wittgenstein was trying to address in this book.  He may not have been thinking about the problem in this way, but it can account for the fuzzy boundary problem associated with the task of trying to define the meaning of words since this probabilistic structure underlies our experiences, our understanding of the world, and our use of language such that it can’t be represented by static, sharp definitions.  When a person is learning a concept, like redness, they experience a number of observations and infer what is common to all of those experiences, and then they extract a particular subset of what is common and associate it with a word like red or redness (as opposed to another commonality like objectness, or roundness, or what-have-you).  But in most cases, separate experiences of redness are going to be different instantiations of redness with different hues, textures, shapes, etc., which means that redness gets associated with a large range of different qualia.

If you come across a new quale that seems to more closely match previous experiences associated with redness rather than orangeness (for example), I would argue that this is because the brain has assigned a higher probability to that quale being an instance of redness as opposed to, say, orangeness.  And the brain may very well test a hypothesis of the quale matching the concept of redness versus the concept of orangeness, and depending on your previous experiences of both, and the present context of the experience, your brain may assign a higher probability to orangeness instead.  Perhaps if a red-orange colored object is mixed in with a bunch of unambiguously orange-colored objects, it will be perceived as a shade of orange (to match it with the rest of the set), but if the case were reversed and it were mixed in with a bunch of unambiguously red-colored objects, it would be perceived as a shade of red instead.
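
To illustrate how context might tip the perception of an ambiguous red-orange hue one way or the other, here is a minimal Bayesian sketch in Python; the Gaussian likelihoods, hue values, and priors are invented purely for illustration and are not meant to model actual color perception:

import math

# Toy Bayesian categorization of an ambiguous hue (all numbers invented).
# Context shifts the prior: a scene full of red objects raises P(red).

def gaussian(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def p_red(hue, prior_red):
    like_red = gaussian(hue, mean=0.0, sd=1.0)      # likelihood under "red"
    like_orange = gaussian(hue, mean=2.0, sd=1.0)   # likelihood under "orange"
    joint_red = like_red * prior_red
    joint_orange = like_orange * (1 - prior_red)
    return joint_red / (joint_red + joint_orange)

ambiguous_hue = 1.0  # exactly halfway between the two category means
print(p_red(ambiguous_hue, prior_red=0.5))  # ~0.5: genuinely ambiguous
print(p_red(ambiguous_hue, prior_red=0.8))  # red-dominated context: perceived as red
print(p_red(ambiguous_hue, prior_red=0.2))  # orange-dominated context: perceived as orange

The same sensory input (the same hue value) ends up on different sides of the category boundary depending only on the prior supplied by the surrounding scene.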

Since our perception of the world depends on context, the meanings we assign to words or concepts also depend on context, but not only in the sense of choosing a different use of a word (like Wittgenstein argues) in some language game or other, but also by perceiving the same incoming sensory information as conceptually different qualia (like in the aforementioned case of a red-orange colored object).  In that case, we weren’t intentionally using red or orange in a different way but rather were assigning one word or the other to the exact same sensory information (with respect to the red-orange object), which depended on what else was happening in the scene that surrounded that subset of sensory information.  To me, this highlights how meaning can be fluid in multiple ways, some of that fluidity stemming from our conscious intentions and some of it from unintentional forces at play involving our prior expectations within some context which directly modify our perceived experience.

This can also be seen through Wittgenstein’s example of what he calls a duckrabbit, an ambiguous image that can be perceived as a duck or a rabbit.  I’ve taken the liberty of inserting this image here along with a copy of it which has been rotated in order to more strongly invoke the perception of a rabbit.  The first image no doubt looks more like a duck and the second image, more like a rabbit.

Now Wittgenstein says that when one is looking at the duckrabbit and sees a rabbit, they aren’t interpreting the picture as a rabbit but are simply reporting what they see.  But in the case where a person sees a duck first and then later sees a rabbit, Wittgenstein isn’t sure what to make of this.  However, he claims to be sure that whatever it is, it can’t be the case that the external world stays the same while an internal cognitive change takes place.  Wittgenstein was incorrect on this point because the external world doesn’t change (in any relevant sense) despite our seeing the duck or seeing the rabbit.  Furthermore, he never demonstrates why two different perceptions would require a change in the external world.  The fact of the matter is, you can stare at this static picture and ask yourself to see a duck or to see a rabbit and it will affect your perception accordingly.  This is partially accomplished by you mentally rotating the image in your imagination and seeing if that changes how well it matches one conception or the other, and since it matches both conceptions to a high degree, you can easily perceive it one way or the other.  Your brain is simply processing competing hypotheses to account for the incoming sensory information, and the top-down predictions of rabbitness or duckness (which you’ve acquired through past experiences) actually change the way you perceive it, with no change required in the external world (despite Wittgenstein’s assertion to the contrary).

To give yet another illustration of the probabilistic nature of language, just imagine the head of a bald man and ask yourself, if you were to add one hair at a time to this bald man’s head, at what point does he lose the property of baldness?  If hairs were slowly added at random, and you could simply say “Stop!  Now he’s no longer bald!” at some particular time, there’s no doubt in my mind that if this procedure were repeated (even if the hairs were added non-randomly), you would say “Stop!  Now he’s no longer bald!” at a different point in this transition.  Similarly, if you were looking at a light that was changing color from red to orange, and were asked to say when the color had changed to orange, you would pick a point in the transition that is within some margin of error but it wouldn’t be an exact, repeatable point in the transition.  We could do this thought experiment with all sorts of concepts that are attached to words, like cat and dog; for example, we could use a computer graphics program to seamlessly morph a picture of a cat into a picture of a dog and ask at what point the cat “turned into” a dog.  It’s going to be based on a probability of coincident features that you detect, which can vary over time.  Here’s a series of pictures showing a chimpanzee morphing into Bill Clinton to better illustrate this point:

At what point do we stop seeing a picture of a chimpanzee and start seeing a picture of something else?  When do we first see Bill Clinton?  What if I expanded this series of 15 images into a series of 1000 images so that this transition happened even more gradually?  It would be highly unlikely to pick the exact same point in the transition two times in a row if the images weren’t numbered or arranged in a grid.  We can analogize this phenomenon with an ongoing problem in science, known as the species problem.  This problem can be described as the inherent difficulty of defining exactly what a species is, which is necessary if one wants to determine if and when one species evolves into another.  This problem occurs because the evolutionary processes giving rise to new species are relatively slow and continuous whereas sorting those organisms into sharply defined categories involves the elimination of that generational continuity and replacing it with discrete steps.

And we can see this effect in the series of images above, where each picture could represent some large number of generations in an evolutionary timeline, where each picture/organism looks roughly like the “parent” or “child” of the picture/organism that is adjacent to it.  Despite this continuity, if we look at the first picture and the last one, they look like pictures of distinct species.  So if we want to categorize the first and last picture as distinct species, then we create a problem when trying to account for every picture/organism that lies in between that transition.  Similarly words take on an appearance of strict categorization (of meaning) when in actuality, any underlying meaning attached is probabilistic and dynamic.  And as Wittgenstein pointed out, this makes it more appropriate to consider meaning as use so that the probabilistic and dynamic attributes of meaning aren’t lost.
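
Returning to the morph sequence above, here is a toy simulation in Python (entirely my own construction, with invented numbers) in which the judge’s internal category boundary is treated as a noisy threshold, so repeated runs over the same gradual transition stop at a different frame each time:

import random

random.seed(0)

# Toy model of a noisy category boundary: each time the judge views the
# morph sequence, their internal boundary sits at a slightly different
# place, sampled around 50% of the way through the transition.

def stopping_frame(total_frames=1000, boundary_mean=0.5, boundary_sd=0.05):
    boundary = random.gauss(boundary_mean, boundary_sd)
    for frame in range(total_frames):
        if frame / total_frames >= boundary:
            return frame          # "Stop!  That's no longer a chimpanzee."
    return total_frames

print([stopping_frame() for _ in range(5)])  # a different frame on each run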

Now you may think you can get around this problem of fluidity or fuzzy boundaries with concepts that are simpler and more abstract, like the concept of a particular quantity (say, a quantity of four objects) or other concepts in mathematics.  But in order to learn these concepts in the first place, like quantity, and then associate particular instances of it with a word, like four, one had to be presented with a number of experiences and infer what was common to all of those experiences (as was the case with redness mentioned earlier).  And this inference (I would argue) involves a probabilistic process as well, it’s just that the resulting probability of our inferring particular experiences as an instance of four objects is incredibly high and therefore repeatable and relatively unambiguous.  Therefore that kind of inference is likely to be sustained no matter what the context, and it is likely to be shared by two individuals with 100% semantic overlap (i.e. it’s almost certain that what I mean by four is exactly what you mean by four even though this is almost certainly not the case for a concept like love or consciousness).  This makes mathematical concepts qualitatively different from other concepts (especially those that are more complex or that more closely map on to reality), but it doesn’t negate their having a probabilistic attribute or foundation.

Looking at the Big Picture

Though this discussion of language and meaning is not an exhaustive analysis of Wittgenstein’s Philosophical Investigations, it represents an analysis of the main theme present throughout.  His main point was to shed light on the disparity between how we often think of language and how we actually use it.  When we stray away from the way it is actually used in our everyday lives, in one form of social life or other, and instead misuse it such as in philosophy, this creates all sorts of problems and unwarranted conclusions.  He also wants his readers to realize that the ultimate goal of philosophy should not be to try to construct metaphysical theories and deep explanations underlying everyday phenomena, since these are often born out of unwarranted generalizations and other assumptions stemming from how our grammar is structured.  Instead, we ought to subdue our temptations to generalize and to be dogmatic, and use philosophy as a kind of therapeutic tool to keep our abstract thinking in check and to better understand ourselves and the world we live in.

Although I disagree with some of Wittgenstein’s claims about cognition (in terms of how intimately it is connected to the external world) and take some issue with his arguably less useful conception of meaning, he makes a lot of sense overall.  Wittgenstein was clearly hitting upon a real difference between the way actual causal relations in our experience are structured and how those relations are represented in language.  Personally, I think that work within philosophy is moving in the right direction if the contributions made therein lead us to make more successful predictions about the causal structure of the world.  And I believe this to be so even if this progress includes generalizations that may not be exactly right.  As long as we can structure our thinking to make more successful predictions, then we’re moving forward as far as I’m concerned.  In any case, I liked the book overall and thought that the interlocutory style gave the reader a nice break from the typical form seen in philosophical argumentation.  I highly recommend it!

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part IV)

In the previous post which was part 3 in this series (click here for parts 1 and 2) on Predictive Processing (PP), I discussed how the PP framework can be used to adequately account for traditional and scientific notions of knowledge, by treating knowledge as a subset of all the predicted causal relations currently at our brain’s disposal.  This subset of predictions that we tend to call knowledge has the special quality of especially high confidence levels (high Bayesian priors).  Within a scientific context, knowledge tends to have an even stricter definition (and even higher confidence levels) and so we end up with a smaller subset of predictions which have been further verified through comparing them with the inferred predictions of others and by testing them with external means of instrumentation and some agreed upon conventions for analysis.

However, no amount of testing or verification is going to give us direct access to any knowledge per se.  Rather, the creation or discovery of knowledge has to involve the application of some kind of reasoning to explain the causal inputs, and only after this reasoning process can the resulting predicted causal relations be validated to varying degrees by testing them (through raw sensory data, external instruments, etc.).  So getting an adequate account of reasoning within any theory or framework of overall brain function is going to be absolutely crucial, and I think that the PP framework is well-suited for the job.  As has already been mentioned throughout this post-series, this framework fundamentally relies on a form of Bayesian inference (or some approximation of it), which is a type of reasoning.  It is this inferential strategy then, combined with a hierarchical neurological structure for it to work upon, that would allow our knowledge to be created in the first place.

Rules of Inference & Reasoning Based on Hierarchical-Bayesian Prediction Structure, Neuronal Selection, Associations, and Abstraction

While PP tends to focus on perception and action in particular, I’ve mentioned that I see the same general framework as being able to account for not only the folk psychological concepts of beliefs, desires, and emotions, but also that the hierarchical predictive structure it entails should plausibly be able to account for language and ontology and help explain the relationship between the two.  It seems reasonable to me that the associations between all of these hierarchically structured beliefs or predicted causal relations at varying levels of abstraction, can provide a foundation for our reasoning as well, whether intuitive or logical forms of reasoning.

To illustrate some of the importance of associations between beliefs, consider an example like the belief in object permanence (i.e. that objects persist or continue to exist even when I can no longer see them).  This belief of ours has an extremely high prior because our entire life experience has only served to support this prediction in a large number of ways.  This means that it’s become embedded or implicit in a number of other beliefs.  If I didn’t predict that object permanence was a feature of my reality, then an enormous number of everyday tasks would become difficult if not impossible to do because objects would be treated as if they are blinking into and out of existence.

We have a large number of beliefs that require object permanence (and which are thus associated with object permanence), and so it is a more fundamental lower-level prediction (though not as low level as sensory information entering the visual cortex), and we build upon this lower-level prediction to form any number of higher-level predictions in the overall conceptual/predictive hierarchy.  When I put money in a bank, I expect to be able to spend it even if I can’t see it anymore (such as with a check or debit card).  This is only possible if my money continues to exist even when out of view (regardless of whether the money is in paper, coin, or electronic form).  This is just one of many countless everyday tasks that depend on this belief.  So it’s no surprise that this belief (this set of predictions) would have an incredibly high Bayesian prior, and therefore I would treat it as a non-negotiable fact about reality.

On the other hand, when I was a newborn infant, I didn’t have this belief of object permanence (or at best, it was a very weak belief).  Most psychologists estimate that our belief in object permanence isn’t acquired until after several months of brain development and experience.  This would translate to our having a relatively low Bayesian prior for this belief early on in our lives, and only once we begin to form predictions based on these kinds of recognized causal relations can we begin to increase that prior and perhaps eventually reach a point that results in a subjective experience of a high degree of certainty for this particular belief.  From that point on, we are likely to simply take that belief for granted, no longer questioning it.  The most important thing to note here is that the more associations made between beliefs, the higher their effective weighting (their priors), and thus the higher our confidence in those beliefs becomes.
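
As a rough sketch of how such a prior might strengthen with accumulating evidence, consider a deliberately simplified Beta-Bernoulli update in Python (this is only an illustration of the statistics, not a claim about how the brain actually implements it, and the observation counts are invented):

# Simplified Beta-Bernoulli sketch: each time a hidden object reappears
# where expected, the evidence for "objects persist" grows and the belief
# sharpens toward certainty.

def updated_belief(successes, failures, prior_a=1, prior_b=1):
    """Posterior mean of a Beta(prior_a, prior_b) belief after new observations."""
    a = prior_a + successes
    b = prior_b + failures
    return a / (a + b)

print(updated_belief(successes=3, failures=1))      # infant-like: ~0.67, easily revised
print(updated_belief(successes=10000, failures=1))  # adult-like: ~0.9998, near certainty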

Neural Implementation, Spontaneous or Random Neural Activity & Generative Model Selection

This all seems pretty reasonable if a neuronal implementation worked to strengthen Bayesian priors as a function of the neuronal/synaptic connectivity (among other factors), where neurons that fire together are more likely to wire together.  And connectivity strength will increase the more often this happens.  On the flip-side, the less often this happens, or if it isn’t happening at all, the connectivity is likely to be weakened or non-existent.  So if a concept (or a belief composed of many conceptual relations) is represented by some cluster of interconnected neurons and their activity, then its applicability to other concepts increases its chances of not only firing but also of strengthening its wiring with those other clusters of neurons, thus plausibly increasing the Bayesian priors for the overlapping concept or belief.
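
A bare-bones version of that “fire together, wire together” rule might look like the following Python sketch; the learning and decay rates are arbitrary illustrative values, not anything derived from actual neurophysiology:

# Cartoon Hebbian rule: co-activation strengthens a connection, disuse
# slowly weakens it.  Rates are arbitrary illustrative values.

def hebbian_step(weight, pre_active, post_active, lr=0.1, decay=0.01):
    if pre_active and post_active:
        weight += lr * (1.0 - weight)   # neurons that fire together wire together
    else:
        weight -= decay * weight        # unused connections slowly weaken
    return weight

w = 0.2
for post_fired in [True, True, True, False, True, False, False]:
    w = hebbian_step(w, pre_active=True, post_active=post_fired)
print(round(w, 3))  # connection strength after this history of co-activation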

Another likely important factor in the Bayesian inferential process, in terms of the brain forming new generative models or predictive hypotheses to test, is the role of spontaneous or random neural activity and neural cluster generation.  This random neural activity could plausibly provide a means for some randomly generated predictions or random changes in the pool of predictive models that our brain is able to select from.  Similar to the role of random mutation in gene pools, which allows for differential reproductive rates and relative fitness of offspring, some amount of randomness in neural activity and in the generative models that result would allow improved models to be naturally selected, namely those which best minimize prediction error.  The ability to minimize prediction error could be seen as a direct measure of the fitness of the generative model within this evolutionary landscape.
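
Here is a toy version of that selection loop in Python, using a made-up linear “generative model” and random mutations purely to illustrate the analogy with natural selection (it is not a model of real neural dynamics):

import random

random.seed(42)

# Toy selection loop: randomly perturb a pool of simple generative models
# and keep the ones that best minimize prediction error on incoming data.

data = [(x, 2.0 * x + 1.0) for x in range(10)]   # hidden causal relation: y = 2x + 1

def prediction_error(model):
    slope, intercept = model
    return sum((slope * x + intercept - y) ** 2 for x, y in data)

pool = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]

for generation in range(200):
    pool.sort(key=prediction_error)
    survivors = pool[:5]                          # the models that best "explain away" the data
    mutants = [(s + random.gauss(0, 0.1), i + random.gauss(0, 0.1))
               for s, i in survivors for _ in range(3)]
    pool = survivors + mutants

print(min(pool, key=prediction_error))  # drifts toward (2.0, 1.0), the hidden relation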

This idea is related to the late Gerald Edelman’s Theory of Neuronal Group Selection (NGS), also known as Neural Darwinism, which I briefly explored in a post I wrote long ago.  I’ve long believed that this kind of natural selection process is applicable to a number of different domains (aside from genetics), and I think any viable version of PP is going to depend on it to at least some degree.  This random neural activity (and the naturally selected products derived from them) could be thought of as contributing to a steady supply of new generative models to choose from and thus contributing to our overall human creativity as well whether for reasoning and problem solving strategies or simply for artistic expression.

Increasing Abstraction, Language, & New Rules of Inference

This kind of “use it or lose it” property of brain plasticity, combined with dynamic associations between concepts or beliefs and their underlying predictive structure, would allow the brain to accommodate learning by extracting statistical inferences (at increasing levels of abstraction) as they occur, and to modify or eliminate those inferences by changing their hierarchical associative structure as prediction error is encountered.  While some form of Bayesian inference (or an approximation to it) underlies this process, once lower-level inferences about certain causal relations have been made, I believe that new rules of inference can be derived from this basic Bayesian foundation.

To see how this might work, consider how we acquire a skill like learning how to speak and write in some particular language.  The rules of grammar, the syntactic structure and so forth which underlie any particular language are learned through use.  We begin to associate words with certain conceptual structures (see part 2 of this post-series for more details on language and ontology) and then we build up the length and complexity of our linguistic expressions by adding concepts built on higher levels of abstraction.  To maximize the productivity and specificity of our expressions, we also learn more complex rules pertaining to the order in which we speak or write various combinations of words (which varies from language to language).

These grammatical rules can be thought of as just another higher-level abstraction, another higher-level causal relation that we predict will convey more specific information to whomever we are speaking to.  If it doesn’t seem to do so, then we either modify what we have mistakenly inferred to be those grammatical rules, or depending on the context, we may simply assume that the person we’re talking to hasn’t conformed to the language or grammar that our community seems to be using.

Just like with grammar (which provides a kind of logical structure to our language), we can begin to learn new rules of inference built on the same probabilistic predictive bedrock of Bayesian inference.  We can learn some of these rules explicitly by studying logic, induction, deduction, etc., and consciously applying those rules to infer some new piece of knowledge, or we can learn these kinds of rules implicitly based on successful predictions (pertaining to behaviors of varying complexity) that happen to result from stumbling upon this method of processing causal relations within various contexts.  As mentioned earlier, this would be accomplished in part by the natural selection of randomly-generated neural network changes that best reduce the incoming prediction error.

However, language and grammar are interesting examples of an acquired set of rules because they also happen to be the primary tool that we use to learn other rules (along with anything we learn through verbal or written instruction), including (as far as I can tell) various rules of inference.  The logical structure of language (though it need not have an exclusively logical structure), its ability to be used for a number of cognitive short-cuts, and its influence on our thought complexity and structure mean that we are likely dependent on it during our reasoning processes as well.

When we perform any kind of conscious reasoning process, we are effectively running various mental simulations where we can intentionally manipulate our various generative models to test new predictions (new models) at varying levels of abstraction, and thus we also manipulate the linguistic structure associated with those generative models as well.  Since I have a lot more to say on reasoning as it relates to PP, including more on intuitive reasoning in particular, I’m going to expand on this further in my next post in this series, part 5.  I’ll also be exploring imagination and memory including how they relate to the processes of reasoning.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part II)

In the first post of this series I introduced some of the basic concepts involved in the Predictive Processing (PP) theory of perception and action.  I briefly tied together the notions of belief, desire, emotion, and action from within a PP lens.  In this post, I’d like to discuss the relationship between language and ontology through the same framework.  I’ll also start talking about PP in an evolutionary context as well, though I’ll have more to say about that in future posts in this series.

Active (Bayesian) Inference as a Source for Ontology

One of the main themes within PP is the idea of active (Bayesian) inference whereby we physically interact with the world, sampling it and modifying it in order to reduce our level of uncertainty in our predictions about the causes of the brain’s inputs.  Within an evolutionary context, we can see why this form of embodied cognition is an ideal schema for an information processing system to employ in order to maximize chances of survival in our highly interactive world.

In order to reduce the amount of sensory information that has to be processed at any given time, it is far more economical for the brain to only worry about the prediction error that flows upward through the neural system, rather than processing all incoming sensory data from scratch.  If the brain is employing a set of predictions that can “explain away” most of the incoming sensory data, then the downward flow of predictions can encounter an upward flow of sensory information (effectively cancelling each other out), and the only thing that remains to propagate upward through the system and do any “cognitive work” (i.e. the only thing that needs to be processed) on the predictive models flowing downward is the remaining prediction error (the difference between the predicted sensory input and the actual sensory input).  This is similar to data compression strategies for video files (for example) that only worry about the information that changes over time (pixels that change brightness/color) and then simply compress the information that remains constant (pixels that do not change from frame-to-frame).
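
The analogy can be made concrete with a small Python sketch: only the residual between the predicted signal and the actual signal needs to be propagated (or stored), much like a delta-encoded video frame.  The signal values are invented for illustration:

# Toy illustration of propagating only the prediction error (a residual),
# analogous to delta-encoding video frames.  The values are invented.

predicted_frame = [10, 10, 12, 15, 15, 15]   # top-down prediction of the sensory input
actual_frame    = [10, 10, 12, 15, 18, 15]   # what actually arrives

prediction_error = [actual - predicted
                    for predicted, actual in zip(predicted_frame, actual_frame)]

print(prediction_error)   # [0, 0, 0, 0, 3, 0]: only one element carries any news
print(sum(e != 0 for e in prediction_error), "of", len(prediction_error),
      "values need further processing")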

The ultimate goal for this strategy within an evolutionary context is to allow the organism to understand its environment in the most salient ways for the pragmatic purposes of accomplishing goals relating to survival.  But once humans began to develop culture and evolve culturally, the predictive strategy gained a new kind of evolutionary breathing space, being able to predict increasingly complex causal relations and developing technology along the way.  All of these inferred causal relations appear to me to be the very source of our ontology, as each hierarchically structured prediction and its ability to become associated with others provides an ideal platform for differentiating between any number of spatio-temporal conceptions and their categorical or logical organization.

An active Bayesian inference system is also ideal to explain our intellectual thirst, human curiosity, and interest in novel experiences (to some degree), because we learn more about the world (and ourselves) by interacting with it in new ways.  In doing so, we are provided with a constant means of fueling and altering our ontology.

Language & Ontology

Language is an important component as well, and it fits naturally within a PP framework since it serves to further link perception and action together, allowing us to make new kinds of predictions about the world that wouldn’t have been possible without it.  A tool like language also makes a lot of sense from an evolutionary perspective, since better predictions about the world result in a higher chance of survival.

When we use language by speaking or writing it, we are performing an action which is instantiated by the desire to do so (see previous post about “desire” within a PP framework).  When we interpret language by listening to it or by reading, we are performing a perceptual task which is again simply another set of predictions (in this case, pertaining to the specific causes leading to our sensory inputs).  If we were simply sending and receiving non-lingual nonsense, then the same basic predictive principles underlying perception and action would still apply, but something new emerges when we send and receive actual language (which contains information).  With language, we begin to associate certain sounds and visual information with some kind of meaning or meaningful information.  Once we can do this, we can effectively share our thoughts with one another, or at least many aspects of our thoughts with one another.  This provides for an enormous evolutionary advantage as now we can communicate almost anything we want to one another, store it in external forms of memory (books, computers, etc.), and further analyze or manipulate the information for various purposes (accounting, inventory, science, mathematics, etc.).

By being able to predict certain causal outcomes through the use of language, we are effectively using the lower level predictions associated with perceiving and emitting language to satisfy higher level predictions related to more complex goals including those that extend far into the future.  Since the information that is sent and received amounts to testing or modifying our predictions of the world, we are effectively using language to share and modulate one brain’s set of predictions with that of another brain.  One important aspect of this process is that this information is inherently probabilistic which is why language often trips people up with ambiguities, nuances, multiple meanings behind words and other attributes of language that often lead to misunderstanding.  Wittgenstein is one of the more prominent philosophers who caught onto this property of language and its consequence on philosophical problems and how we see the world structured.  I think a lot of the problems Wittgenstein elaborated on with respect to language can be better accounted for by looking at language as dealing with probabilistic ontological/causal relations that serve some pragmatic purpose, with the meaning of any word or phrase as being best described by its use rather than some clear-cut definition.

This probabilistic attribute of language in terms of the meanings of words having fuzzy boundaries also tracks very well with the ontology that a brain currently has access to.  Our ontology, or the kinds of things we think exist in the world, is often categorized in various ways, with some of the more concrete entities given names such as: “animals”, “plants”, “rocks”, “cats”, “cups”, “cars”, “cities”, etc.  But if I morph a wooden chair (say, by chipping away at parts of it with a chisel), eventually it will no longer be recognizable as a chair, and it may begin to look more like a table than a chair or like nothing other than an oddly shaped chunk of wood.  During this process, it may be difficult to point to the exact moment that it stopped being a chair and instead became a table or something else, and this would make sense if what we know to be a chair or table or what-have-you is nothing more than a probabilistic high-level prediction about certain causal relations.  If my brain perceives an object that produces too high of a prediction error based on the predictive model of what a “chair” is, then it will try another model (such as the predictive model pertaining to a “table”), potentially leading to models that are less and less specific until it is satisfied with recognizing the object as merely a “chunk of wood”.
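
A crude sketch of that fall-back process in Python might look like the following; the models, error scores, and threshold are all invented for illustration, and the point is only the ordering from most to least specific model:

# Crude sketch of falling back to less specific predictive models when the
# prediction error for a more specific one is too high.  Scores are invented.

def recognize(observation_errors, threshold=0.3):
    """Return the most specific model whose prediction error is acceptable."""
    for model in ["chair", "table", "piece of furniture", "chunk of wood"]:
        if observation_errors.get(model, 1.0) <= threshold:
            return model
    return "unknown object"

half_chiseled = {"chair": 0.7, "table": 0.5, "piece of furniture": 0.2, "chunk of wood": 0.1}
print(recognize(half_chiseled))   # "piece of furniture"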

From a PP lens, we can consider lower level predictions pertaining to more basic causes of sensory input (bright/dark regions, lines, colors, curves, edges, etc.) to form some basic ontological building blocks and when they are assembled into higher level predictions, the amount of integrated information increases.  This information integration process leads to condensed probabilities about increasingly complex causal relations, and this ends up reducing the dimensionality of the cause-effect space of the predicted phenomenon (where a set of separate cause-effect repertoires are combined into a smaller number of them).

You can see the advantage here by considering what the brain might do if it’s looking at a black cat sitting on a brown chair.  What if the brain were to look at this scene as merely a set of pixels on the retina that change over time, where there are no expectations that any subset of pixels will change in ways that differ from any other subset?  This wouldn’t be very useful in predicting how the visual scene will change over time.  What if instead, the brain differentiates one subset of pixels (that correspond to what we call a cat) from all the rest of the pixels, and it does this in part by predicting proximity relations between neighboring pixels in the subset (so if some black pixels move from the right to the left visual field, then some number of neighboring black pixels are predicted to move with it)?

This latter method treats the subset of black-colored pixels as a separate object (as opposed to treating the entire visual scene as a single object), and doing this kind of differentiation in more and more complex ways leads to a well-defined object or concept, or a large number of them.  Associating sounds like “meow” with this subset of black-colored pixels is just one example of yet another set of properties or predictions that further defines this perceived object as distinct from the rest of the perceptual scene.  Associating this object with a visual or auditory label such as “cat” finally links this ontological object with language.  As long as we agree on what is generally meant by the word “cat” (which we determine through its use), then we can share and modify the predictive models associated with such an object or concept, as we can do with any other successful instance of linguistic communication.

Language, Context, and Linguistic Relativism

However, it should be noted that as we get to more complex causal relations (more complex concepts/objects), we can no longer give these concepts a simple one-word label and expect to communicate information about them nearly as easily as we could for the concept of a “cat”.  Think about concepts like “love” or “patriotism” or “transcendence” and realize how there are many different ways that we use those terms and how they can mean all sorts of different things, and so our meaning behind those words will be heavily conveyed to others by the context that they are used in.  And context in a PP framework could be described as simply the (expected) conjunction of multiple predictive models (multiple sets of causal relations).  For instance, the conjunction of the predictive models pertaining to the concepts of food, pizza, and a desirable taste would imply a particular use of the word “love”, as used in the phrase “I love pizza”.  This use of the word “love” is different from one which involves the conjunction of predictive models pertaining to the concepts of intimacy, sex, infatuation, and care, implied in a phrase like “I love my wife”.  In any case, conjunctions of predictive models can get complicated, and this carries over to our stretching our language to its very limits.
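
One way to caricature that idea in Python is to score each stored sense of “love” by how many currently active predictive models it shares with the rest of the utterance; the senses and their features below are invented for illustration:

# Caricature of context as a conjunction of active predictive models: pick
# the sense of "love" that overlaps most with the other concepts in play.

senses_of_love = {
    "enjoyment of food":   {"food", "taste", "desire"},
    "romantic attachment": {"intimacy", "care", "partner", "desire"},
}

def best_sense(active_context_models):
    return max(senses_of_love,
               key=lambda sense: len(senses_of_love[sense] & active_context_models))

print(best_sense({"food", "pizza", "taste"}))        # "enjoyment of food" ("I love pizza")
print(best_sense({"partner", "intimacy", "care"}))   # "romantic attachment" ("I love my wife")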

Since we are immersed in language and since it is integral in our day-to-day lives, we also end up being conditioned to think linguistically in a number of ways.  For example, we often think with an interior monologue (e.g. “I am hungry and I want pizza for lunch”) even when we don’t plan on communicating this information to anyone else, so it’s not as if we’re simply rehearsing what we need to say before we say it.  I tend to think however that this linguistic thinking (thinking in our native language) is more or less a result of the fact that the causal relations that we think about have become so strongly associated with certain linguistic labels and propositions, that we sort of automatically think of the causal relations alongside the labels that we “hear” in our head.  This seems to be true even if the causal relations could be thought of without any linguistic labels, in principle at least.  We’ve simply learned to associate them so strongly to one another that in most cases separating the two is just not possible.

On the flip side, this tendency of language to associate itself with our thoughts also puts certain barriers or restrictions on our thoughts.  If we are always preparing to share our thoughts through language, then we’re going to become somewhat entrained to think in ways that can be most easily expressed in a linguistic form.  So although language may simply be along for the ride with many non-linguistic aspects of thought, our tendency to use it may also structure our thinking and reasoning in large ways.  This would account for why people raised in different cultures with different languages see the world in different ways based on the structure of their language.  While linguistic determinism seems to have been ruled out (the strong version of the Sapir-Whorf hypothesis), there is still strong evidence to support linguistic relativism (the weak version of the Sapir-Whorf hypothesis), whereby one’s language affects one’s ontology and view of how the world is structured.

If language is so heavily used day-to-day, then this phenomenon makes sense as viewed through a PP lens, since we’re going to end up putting a high weight on the predictions that link ontology with language, given that these predictions have been demonstrated to be useful to us most of the time.  Minimal prediction error means that our Bayesian evidence is further supported, and the higher the weight carried by these predictions, the more they will restrict our overall thinking, including how our ontology is structured.

Moving on…

I think that these are but a few of the interesting relationships between language and ontology and how a PP framework helps to put it all together nicely, and I just haven’t seen this kind of explanatory power and parsimony in any other kind of conceptual framework about how the brain functions.  This bodes well for the framework and it’s becoming less and less surprising to see it being further supported over time with studies in neuroscience, cognition, psychology, and also those pertaining to pathologies of the brain, perceptual illusions, etc.  In the next post in this series, I’m going to talk about knowledge and how it can be seen through the lens of PP.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)

I’ve been away from writing for a while because I’ve had some health problems relating to my neck.  A few weeks ago I had double-cervical-disc replacement surgery and so I’ve been unable to write and respond to comments and so forth for a little while.  I’m in the second week following my surgery now and have finally been able to get back to writing, which feels very good given that I’m unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I wrote one post on this topic a little over a year ago (which can be found here), as I’ve been extremely interested in this topic for the last several years now.

The Predictive Processing (PP) theory of perception shows a lot of promise in terms of finding an overarching schema that can account for everything that the brain seems to do.  While its technical application is to account for the acts of perception and active inference in particular, I think it can be used more broadly to account for other descriptions of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships as viewed through a PP lens more because I think it is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let’s begin this post-series by first looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can best be described as the set of particular predictions that the brain employs, predictions which encompass perception, desires, action, emotion, and so on, and which are ultimately mediated and updated in order to reduce prediction errors arising from incoming sensory evidence (a process that approximates a Bayesian form of inference).  Perception, then, which is constituted by a subset of all our beliefs (many of them implicit or unconscious), is more or less a form of controlled hallucination, in the sense that what we consciously perceive is not the actual sensory evidence itself (not even after processing it), but rather our brain’s “best guess” of the causes of that incoming sensory evidence.
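
To make the “best guess” idea concrete, here’s a minimal toy sketch (my own illustration, not anything drawn from the PP literature itself) of perception as Bayesian inference over hidden causes; the causes, priors, and likelihood numbers are purely hypothetical.

```python
# Perception as inference: P(cause | evidence) is proportional to
# P(evidence | cause) * P(cause).  All numbers below are made up.

# Hypothetical causes of a shadowy shape glimpsed at dusk, with prior beliefs:
priors = {"cat": 0.6, "raccoon": 0.3, "small_dog": 0.1}

# Assumed likelihoods: how well each cause predicts the sensory evidence
# (say, "long striped tail observed"):
likelihoods = {"cat": 0.7, "raccoon": 0.8, "small_dog": 0.1}

# The posterior is the brain's "best guess" about the cause, not the raw data:
unnormalized = {c: priors[c] * likelihoods[c] for c in priors}
total = sum(unnormalized.values())
posterior = {c: p / total for c, p in unnormalized.items()}

print(posterior)                          # cat ~0.63, raccoon ~0.36, small_dog ~0.01
print(max(posterior, key=posterior.get))  # the resulting percept: 'cat'
```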

Desires can best be described as another subset of one’s beliefs, a set which has the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states, predictions which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically modeling some kind of causal relationship (or its negation) which is able to manifest itself as a number of highly weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies but with a lower weight or precision on the predictions, corresponding to the degree of belief or disbelief (and so new sensory evidence will more easily sway such a belief).  And just as our perceptions are mediated by a number of low- and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in either updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, secreting hormones in your body, etc.).

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, with these priors changing over time in response to prediction errors arising from new incoming sensory evidence that fails to be “explained away” by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly weighted predictions) and therefore need much more contrary sensory evidence to be overcome or modified than weak beliefs, which have relatively low priors (low-weighted predictions).
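
As a rough illustration of how a strongly held belief resists counter-evidence, here’s a toy Beta-Bernoulli sketch (my own example with made-up numbers, not a model anyone has proposed for the brain): two beliefs start with the same degree of confidence, but the one backed by far more prior weight barely moves after the same dose of contrary evidence.

```python
# Toy Bayesian updating: a Beta(a, b) distribution over "probability the claim is true".
def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) belief."""
    return a / (a + b)

weak = (3, 1)        # weak belief: Beta(3, 1), mean 0.75, little prior weight
strong = (300, 100)  # strong belief: Beta(300, 100), same mean 0.75, lots of prior weight

# Both beliefs now encounter 10 pieces of contrary evidence (10 "failures"):
counter = 10
weak_updated = beta_mean(weak[0], weak[1] + counter)
strong_updated = beta_mean(strong[0], strong[1] + counter)

print(round(weak_updated, 2), round(strong_updated, 2))  # ~0.21 vs ~0.73
```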

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of lower-level, highly weighted predictions: the prediction that eating a piece of what we call an “apple” will most likely result in the qualia that accompany the perception of a particular satisfying taste, the lower-level prediction that doing so will also cause my perception of hunger to change, and the higher-level prediction that it will “give me energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Another prediction, or set of predictions, is that these expectations will apply not just to one apple but to a number of apples (different instances of one type of apple, or different types of apples altogether), along with a host of other predictions.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple, which, under a PP lens, could be described as the highly weighted prediction of what it would feel like to find an apple, to reach for it, to grab it, to bite off a piece of it, to chew it, and to swallow it.  If I merely imagine doing such things, then the resulting predictions will necessarily carry such a small weight that they won’t be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally, at least) to trick me into thinking that they are actual perceptions (by “actual”, I simply mean perceptions that correspond to incoming sensory data).  This low-weighting attribute of imagined perceptual predictions thus provides a viable way for us to have an imagination, to distinguish it from perceptions corresponding to incoming sensory data, and to distinguish it from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine that is directly influencing, and being influenced by, the world its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian evidence).  In this particular example, the highly weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, which produces some degree of prediction error until the proper actions have taken place to reduce that error.  The only two ways of reducing this prediction error are to change the desire (or eliminate it) so that it no longer involves eating an apple, and/or to perform bodily actions that result in actually eating an apple.
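
Here’s a highly simplified sketch of those two routes for reducing prediction error (my own toy illustration; the state variable, threshold, weights, and update sizes are arbitrary assumptions, not part of any formal PP model).

```python
# A desire modeled as a highly weighted prediction ("I am eating an apple").
# Error can be reduced by acting on the world or by revising the prediction itself.
world = {"eating_apple": 0.0}   # current sensed state (0 = not eating, 1 = eating)
desire = {"target": "eating_apple", "predicted": 1.0, "weight": 0.9}

def prediction_error(world, desire):
    return desire["weight"] * abs(desire["predicted"] - world[desire["target"]])

for step in range(10):
    if prediction_error(world, desire) < 0.05:
        break
    if desire["weight"] > 0.5:
        # Highly weighted prediction: act on the world (find, grab, and bite the apple).
        world["eating_apple"] = min(1.0, world["eating_apple"] + 0.25)
    else:
        # Weakly weighted prediction: revise the desire to match the world instead.
        desire["predicted"] = world[desire["target"]]

print(world, prediction_error(world, desire))  # {'eating_apple': 1.0} 0.0
```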

Perhaps if I realize that I don’t have any apples in my house, but I do have bananas, then my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically; thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, my predictions may motivate me enough to avoid eating the banana, and instead to predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  Furthermore, it may motivate me to predict actions that lead me to earn the money needed to purchase the apple (if I don’t already have it).  To do this, I would be employing a number of predictions having to do with performing actions that lead to my obtaining money, using money to purchase goods, and so on.

This is but a taste of what PP has to offer, and of how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework) are a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and to language as well, and I’d like to explore this relationship in my next post.  That post can be found here.

Is Death Bad For You? A Response to Shelly Kagan

I’ve enjoyed reading and listening to the philosopher Shelly Kagan, in debates, lectures, and various articles.  One topic he’s well known for is that of death: specifically, the fear of death, and trying to understand the details behind, and justification for, the general attitude people have toward the concept of death.  I’ve thought about the fear of death on and off for a long time now, but coming across an article of Kagan’s reignited my interest in the topic.  He wrote an article a few years ago in The Chronicle, where he expounds on some of the ontological puzzles related to the concept of death.  I thought I’d briefly summarize the article’s main points and give a response to it here.

Can Death Be Bad For Us?

Kagan begins with the assumption that the death of a person’s body results in the end of that person’s existence.  This is certainly a reasonable assumption, as there’s no evidence to the contrary, that is, no evidence that persons can exist without a living body.  Simple enough.  Then he asks the question: if death is the end of our existence, then how can being dead be bad for us?  Some would say that death is particularly bad for the survivors of the deceased, since they miss the person who’s died and the relationship they once had with that person.  But it seems more complicated than that, because we could likewise have an experience where a cherished friend or family member leaves us and goes somewhere so far away that we can be confident we’ll never see that person again.

Both the death of that person and the alternative of their leaving forever, never to be contacted again, result in the same loss of relationship.  Yet most people would say that if we knew they had died rather than simply left forever, there would be more to be sad about, in terms of death being bad for them, not simply bad for us.  And this sadness results from more than simply knowing how they died (the process of death itself), which could have been unpleasant, but could also have been entirely benign (such as dying peacefully in one’s sleep).  Similarly, Kagan tells us, the prospect of dying can be unpleasant as well; but, he asserts, this only seems to make sense if death itself is bad for us.

Kagan suggests:

Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. That explanation of death’s badness is known as the deprivation account.

While the deprivation account seems plausible, Kagan thinks that accepting it results in a couple of potential problems.  He argues that if something is true, it seems as if there must be some time when it’s true.  So when would it be true that death is bad for us?  Not now, he says, because we’re not dead now.  Not after we’re dead either, because then we no longer exist, and nothing can be bad for a being that no longer exists.  This seems to lead to the conclusion that either death isn’t bad for anyone after all, or alternatively, that not all facts are datable.  He gives us another possible example of an undatable fact.  If Kagan shoots “John” today such that John slowly bleeds to death over two days, but Kagan himself dies tomorrow (before John does), then after John dies, can we say that Kagan killed John?  If Kagan did kill John, when did he kill him?  Kagan no longer existed when John died, so how can we say that Kagan killed John?

I think we could agree with this and say that while it’s true that Kagan didn’t technically kill John, a trivial response to this supposed conundrum is to say that Kagan’s actions led to John’s death.  This seems to solve that conundrum by working within the constraints of language, while highlighting the fact that when we say someone killed X what we really mean is that someone’s actions led to the death of X, thus allowing us to be consistent with our conceptions of existence, causality, killing, blame, etc.

Existence Requirement, Non-Existential Asymmetry, & Its Implications

In any case, if all facts are datable (or at least facts like these), then we should be able to say when exactly death is bad for us.  Can things only be bad for us when we exist?  If so, this is what Kagan refers to as the existence requirement.  If we don’t accept such a requirement (that one must exist in order for things to be bad for us), other problems arise, like being able to say, for example, that non-existence could be bad for someone who has never existed but who could possibly have existed.  This seems to be a pretty strange claim to hold.  So if we refuse to accept that it’s a tragedy for possibly existent people to never come into existence, then we’d have to accept the existence requirement, which I would contend is the more plausible assumption.  But if we do so, then it seems that we have to accept that death isn’t in fact bad for us.

Kagan suggests that we may be able to reinterpret the existence requirement, and he does this by distinguishing between two versions: a modest version, which asserts that something can be bad for you only if you exist at some time or another, and a bold version, which asserts that something can be bad for you only if you exist at the same time as that thing.  Accepting the modest version seems to allow us a way out of the problems posed here, but it too has some counter-intuitive implications.

He illustrates this with another example:

Suppose that somebody’s got a nice long life. He lives 90 years. Now, imagine that, instead, he lives only 50 years. That’s clearly worse for him. And if we accept the modest existence requirement, we can indeed say that, because, after all, whether you live 50 years or 90 years, you did exist at some time or another. So the fact that you lost the 40 years you otherwise would have had is bad for you. But now imagine that instead of living 50 years, the person lives only 10 years. That’s worse still. Imagine he dies after one year. That’s worse still. An hour? Worse still. Finally, imagine I bring it about that he never exists at all. Oh, that’s fine.

He thinks this must be accepted if we accept the modest version of the existence requirement, but how can this be?  If one’s life is shortened relative to what it would otherwise have been, this is bad, and it gets progressively worse as the life is hypothetically shortened, until a life span of zero is reached, at which point the person no longer meets the modest existence requirement and thus can’t have anything be bad for them.  So it’s as if things get worse and worse as the potential life span approaches zero, and then, once zero is actually reached, suddenly become benign and no longer an issue.

I think a reasonable response to this scenario is to reject the claim that hypothetically shrinking the life span to zero suddenly makes it a non-issue.  What seems to be glossed over in this example is that it involves a series of comparisons of one hypothetical life to another (two lives with different non-zero life spans), ending with a comparison between a hypothetical life and no life at all (a life span of zero).  The example illustrates whether something is better or worse in comparison, not whether it is good or bad intrinsically speaking.  The fact that somebody lived for as long as 90 years, or for only 10, isn’t necessarily good or bad in itself, but only better or worse in comparison to somebody who lived for a different length of time.

The Intrinsic Good of Existence & Intuitions On Death

However, I would go further and say that there is an intrinsic good to existing or being alive, and that most people would agree with such a claim (the strong will to live that most of us possess is evidence of our acknowledging such a good).  That’s not to say that never having lived is bad, but only that living is good.  If not living is neither good nor bad but rather a neutral or inconsequential state, then we can hold the position that living is better than not living, even if not living isn’t bad at all (after all, it’s neutral, neither good nor bad).  Thus we can still maintain our modest existence requirement while consistently holding these views.  We can say that not living is neither good nor bad, that living 10 years is good (and better than not living), that living 50 years is even better, and that living 90 years is better yet (assuming, for the sake of argument, that the quality of life is equivalently good in every year of one’s life).  What’s important to note here is that not having lived in the first place doesn’t involve the loss of a good, because there was never any good to begin with.  On the other hand, extending the life span increases the quantity of the good by increasing its duration.

Kagan seems to agree overall with the deprivation account of why we believe death is bad for us, but he holds that some puzzles, like those he presented, still remain.  I think one of the important things to take away from this article is the illustration that we have obvious limitations in the language we use to describe our ontological conceptions.  These scenarios and our intuitions about them also seem to show that we all generally accept that living or existence is intrinsically good.  They may also highlight the fact that many people intuit that some part of us (such as a soul) continues to exist after death, such that death can be bad for us after all (since our post-death “self” would still exist).  While the belief in souls is irrational, it may help to explain some common intuitions about death.

Dying vs. Death, & The Loss of An Intrinsic Value

Remember that Kagan began his article by distinguishing between how one dies, the prospect of dying, and death itself.  He asked us: how can the prospect of dying be bad if death itself (which is only the case once we no longer exist) isn’t bad for us?  Well, perhaps we should consider that when people say that death is bad for us, they tend to mean that dying itself is bad for us.  That is to say, the prospect of dying isn’t unpleasant because death is bad for us, but rather because dying itself is bad for us.  If dying occurs while we’re still alive, resulting in one’s eventual loss of life, then dying can be bad for us even if we accept the bold existence requirement (that something can only be bad for us if we exist at the same time as that thing).  So if the “thing” we’re referring to is our dying rather than our death, this would be consistent with the deprivation account of death, would allow us to put a date (or time interval) on such an event, and would seem to resolve the aforementioned problems.

As for Kagan’s opening question, when is death bad for us?  If we accept my previous response that dying, rather than death, is what’s bad for us, then it would stand to reason that death itself isn’t ever bad for us (or doesn’t have to be); rather, what is bad for us is the loss of life that occurs as we die.  If I had to identify exactly when the “badness” we’re referring to occurs, I suppose I would choose some increment of time just before one’s death occurs (with an exclusive upper bound set to the time of death).  If time turns out to be quantized (a speculative idea; quantum mechanics doesn’t establish this, though the Planck time is often cited as the smallest physically meaningful interval), then the smallest interval of time would be one Planck time.  So I would argue that, at the very least, the last Planck time of our life (if not a longer interval) marks the event or time interval of our dying.
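
For reference, the Planck time is defined from the fundamental constants as

```latex
t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.39 \times 10^{-44}\ \text{seconds},
```

though, to be clear, whether time is actually quantized at (or near) this scale remains an open question rather than an established result.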

It is this last interval of time ticking away that is bad for us because it leads to our loss of life, which is a loss of an intrinsic good.  So while I would argue that never having received an intrinsic good in the first place isn’t bad (such as never having lived), the loss of (or the process of losing) an intrinsic good is bad.  So I agree with Kagan that the deprivation account is on the right track, but I also think the problems he’s posed are resolvable by thinking more carefully about the terminology we use when describing these concepts.

The Co-Evolution of Language and Complex Thought

Language appears to be the most profound feature that has arisen during the evolution of the human mind.  This feature of humanity has led to incredible thought complexity, while also providing the foundation for the simplest thoughts imaginable.  Many of us may wonder how exactly language is related to thought and how the evolution of language has affected the evolution of thought complexity.  In this post, I plan to discuss what I believe to be some evolutionary aspects of psycholinguistics.

Mental Languages

It is clear that humans think in some form of language, whether as an interior monologue using our native spoken language and/or through some form of what many call “mentalese” (i.e. a means of thinking about concepts and propositions without the use of words).  Our thoughts are likely accomplished by a combination of these two “types” of language.  The fact that young infants and aphasics (for example) are able to think at all clearly implies that not all thoughts are accomplished through a spoken language.  It is also likely that the aforementioned “mentalese” is some innate form of mental symbolic representation that is primary in some sense, supported by the fact that it appears to be necessary in order for spoken language to develop or exist at all.  The fact that words and sentences (at least non-iconic ones) have no intrinsic semantic content or value illustrates that this “mentalese” is in fact a prerequisite for understanding or assigning the meaning of words and sentences.  Complex words can always be defined by a number of less complex words, but at some point a limit is reached whereby the simplest units of definition are composed of seemingly irreducible concepts and propositions.  Furthermore, those irreducible concepts and propositions do not require any words in order to have meaning (for if they did, we would have an infinite regress of words being defined by words being defined by words, ad infinitum).  The only time we really require symbolic representation of semantic content using words is when the concepts are to be easily (if at all) communicated to others.

While some form of mentalese is likely the foundation or even the ultimate form of thought, it is my contention that communicable language has had a considerable impact on the evolution of the human mind: not only in the most trivial or obvious way, whereby communicated words affect our thoughts (e.g. through inspiration, imagination, and/or reflection on new knowledge or perspectives), but also by serving as a secondary multidimensional medium for symbolic representation.  That is, spoken language (as well as its later offshoot, written language) has provided a form of combinatorial leverage somewhat independent of (although working in harmony with) the mental or cognitive faculties that innately exist for thought.

To be sure, spoken language has likely co-evolved with our mentalese, as they seem to affect one another in various ways.  As new types or combinations of propositions and concepts are discovered, the spoken language has to adapt in order to make those new propositions and concepts communicable to others.  What interests me more however, is how communicable language (spoken or written) has affected the evolution of thought complexity itself.

Communicable Language and Thought Complexity

Words and sentences, which primarily developed in order to communicate instances of our mental language to others, have also undoubtedly provided a secondary multidimensional medium for symbolic representation.  For example, when we use words, we are able to compress a large amount of information (i.e. many concepts and propositions) into small tokens with varying densities.  This type of compression has provided a way to maximize our use of short-term and long-term memory, allowing more complex thoughts and mental capabilities to develop (whether that increase in complexity is defined as longer strings of concepts or propositions, or otherwise).
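
As a crude analogy for this kind of compression (my own toy example, not something drawn from the psycholinguistics literature), consider how chunking a string of digits into familiar, labeled units reduces the number of items working memory has to juggle, which is the same kind of leverage a single word gives us over a whole bundle of concepts.

```python
# Chunking as compression: fewer items to rehearse, same information retained.
digits = "149217761066"            # 12 separate digits to hold in mind
chunks = ["1492", "1776", "1066"]  # 3 familiar, meaningful chunks (famous years)

assert "".join(chunks) == digits   # nothing is lost by chunking
print(len(digits), "raw items vs", len(chunks), "chunks")  # 12 raw items vs 3 chunks
```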

When we think of a sentence to ourselves, we end up utilizing a phonological/auditory loop, whereby we can better handle and organize information at any given moment by internally “hearing” it.  We can also visualize the words in multiple ways, including what the mouth movements of people speaking those words would look like (and we can use our tactile and/or motor memory to mentally simulate how our mouth feels when those words are spoken).  If a written form of the communicable language exists, we can also visualize the words as they would appear in their written form (and use the aforementioned tactile/motor memory to mentally simulate how it feels to write them).  On top of this, we can often visualize each glyph in multiple formats (i.e. different sizes, shapes, fonts, etc.).  This has provided a multidimensional memory tool, because it serves to represent the semantic information in a way that our brain can perceive and associate with multiple senses (in this case through our auditory, visual, and somatosensory cortices).  In some cases, when a particular written language uses iconic glyphs (as opposed to arbitrary symbols), the user can also visualize the concept represented by the symbol in an idealized fashion.  Associating information with multiple cognitive faculties or perceptual systems means that more of the brain’s neural network patterns will be involved in the attainment, retention, and recall of that information.  For those of us who have successfully used various mnemonic devices and other memory-enhancing “tricks”, the efficacy and importance of communicable language, and its relationship to how we think about and combine various concepts and propositions, is clear.

By enhancing our memory, communicable language has served as an epistemic catalyst, allowing us to build upon our previous knowledge in ways that would likely have been impossible without it.  Once written language was developed, we were no longer limited by our own short-term and long-term memory, for we had a way of recording as much information as we liked, and this also allowed us to better formulate new ideas and consider thoughts that would otherwise have been too complex to mentally visualize or keep track of.  Mathematics, for example, increased enormously in complexity once we were able to represent the relationships between variables in written form.  Where previously we would have been limited by short-term and long-term memory, written language allowed us to eventually formulate incredibly long (sometimes painfully long) mathematical expressions.  Once written language was further converted into an electro-mechanical form (i.e. through the use of computers), our “writing” media, information acquisition mechanisms, and pattern recognition capabilities were further aided and enhanced, providing yet another platform for increased epistemic or cognitive “breathing space”.  If our brains underwent particular mutations after communicable language evolved, this may have provided a way to ratchet our way into entirely new cognitive niches or capabilities.  That is, by providing us with new strings of concepts and propositions, communicable language may have created an unprecedented natural selection pressure or opportunity (if an advantageous brain mutation accompanied this new cognitive input) for our brains to acquire entirely new, and possibly more complex, fundamental concepts or ways of thinking.

Summary

It seems evident to me that communicable language, once it had developed, served as an extremely important epistemic catalyst and multidimensional cognitive tool that likely had a great influence on the evolution of the human brain.  While some form of mentalese was likely a prerequisite and precursor to any subsequent forms of communicable language, the cognitive “breathing space” that communicable language provided seems to have had a marked impact on the evolution of human thought complexity, and on the amount of knowledge that we’ve been able to obtain from the world around us.  I have no doubt that the external linguistic tools we currently use (i.e. written and electronic forms of handling information) will continue to significantly alter the ongoing evolution of the human mind.  Our biological forms of memory will likely adapt in order to be economically optimized and to work better with those external media.  Likewise, our increasing access to new types of information may have provided (and may continue to provide) a natural selective pressure or opportunity for our brains to evolve to think about entirely new and potentially more complex concepts, thereby periodically increasing the lexicon or conceptual database of our “mentalese” (assuming that those new concepts provide a survival/reproductive advantage).