Irrational Man: An Analysis (Part 2, Chapter 4: “The Sources of Existentialism in the Western Tradition”)

In the previous post in this series on William Barrett’s Irrational Man, I explored Part 1, Chapter 3: The Testimony of Modern Art, where Barrett illustrates how existentialist thought is best exemplified in modern art.  Feelings of alienation, discontent, and meaninglessness pervade many modern works, and, as is so often the case throughout history, we can use artistic expression as a window into our evolving psyche, to witness the prevailing views of the world at any point in time.

In this post, I’m going to explore Part II, Chapter 4: The Sources of Existentialism in the Western Tradition.  This chapter has a lot of content and is quite dense, and so naturally this fact is reflected in the length of this post.

Part II: “The Sources of Existentialism in the Western Tradition”

Ch. 4 – Hebraism and Hellenism

Barrett begins this chapter by pointing out two forces governing the historical trajectory of Western civilization and Western thought: Hebraism and Hellenism.  He quotes an excerpt from Matthew Arnold’s Culture and Anarchy, a book concerned with the contemporary situation of nineteenth-century England, where Arnold writes:

“We may regard this energy driving at practice, this paramount sense of the obligation of duty, self-control, and work, this earnestness in going manfully with the best light we have, as one force.  And we may regard the intelligence driving at those ideas which are, after all, the basis of right practice, the ardent sense for all the new and changing combinations of them which man’s development brings with it, the indomitable impulse to know and adjust them perfectly, as another force.  And these two forces we may regard as in some sense rivals–rivals not by the necessity of their own nature, but as exhibited in man and his history–and rivals dividing the empire of the world between them.  And to give these forces names from the two races of men who have supplied the most splendid manifestations of them, we may call them respectively the forces of Hebraism and Hellenism…and it ought to be, though it never is, evenly and happily balanced between them.”

And while we may have felt a stronger attraction to one force over the other at different points in our history, both forces have played an important role in how we’ve structured our individual lives and society at large.  What distinguishes these two forces ultimately comes down to the difference between doing and knowing; between the Hebrew’s concern for practice and right conduct, and the Greek’s concern for knowledge and right thinking.  I can’t help but notice this Hebraistic influence in one of the earliest expressions of existentialist thought when Soren Kierkegaard (in one of his earlier journals) had said: “What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act”.  Here we see that some set of moral virtues are what form the fundamental substance and meaning of life within Hebraism (and by extension, Kierkegaard’s philosophy), in contrast with the Hellenistic subordination of the moral virtues to those of the intellectual variety.

I for one am tempted to mention Aristotle here: a Greek philosopher who, despite all of his work on science, logic, and epistemology, formulated and extolled moral virtues, laying the foundation for most if not all modern systems of virtue ethics.  And sure enough, Arnold does mention Aristotle briefly, but he says that for Aristotle the moral virtues are “but the porch and access to the intellectual (virtues), and with these last is blessedness.”  So it’s still fair to say that the intellectual virtues were given priority over the moral virtues within Greek thought, even if the moral virtues were an important element, serving as a moral means to a combined moral-intellectual end.  We’re still left, then, with a distinction in what is prioritized, in what the ultimate teleology is, for Hebraism and Hellenism: moral man versus intellectual man.

One perception of Arnold’s that colors his overall thesis is a form of uneasiness that he sees as lurking within the Hebrew conception of man; an uneasiness stemming from a conception of man that has been infused with the idea of sin, which is simply not found in Greek philosophy.  Furthermore, this idea of sin that pervades the Hebraistic view of man is not limited to one’s moral or immoral actions; rather it penetrates into the core being of man.  As Barrett puts it:

“But the sinfulness that man experiences in the Bible…cannot be confined to a supposed compartment of the individual’s being that has to do with his moral acts.  This sinfulness pervades the whole being of man: it is indeed man’s being, insofar as in his feebleness and finiteness as a creature he stands naked in the presence of God.”

So we have a predominantly moral conception of man within Hebraism, but one that is amalgamated with an essential finitude, an acknowledgement of imperfection, and the expectation of our being morally flawed human beings.  Now when we compare this to the philosophers of Ancient Greece, who had a more idealistic conception of man, where humans were believed to have the capacity to access and understand the universe in its entirety, then we can see the groundwork that was laid for somewhat rivalrous but nevertheless important motivations and contributions to the cultural evolution of Western civilization: science and philosophy from the Greeks, and a conception of “the Law” entrenched in religion and religious practice from the Hebrews.

1. The Hebraic Man of Faith

Barrett begins here by explaining that the Law, though important for having bound the Jewish community together for centuries despite their many tribulations as a people, is not itself central to Hebraism; rather, what lies at its center is the basis of the Law.  To see what this basis is, we are directed to reread the Book of Job in the Hebrew Bible:

“…reread it in a way that takes us beyond Arnold and into our own time, reread it with an historical sense of the primitive or primary mode of existence of the people who gave expression to this work.  For earlier man, the outcome of the Book of Job was not such a foregone conclusion as it is for us later readers, for whom centuries of familiarity and forgetfulness have dulled the violence of the confrontation between man and God that is central to the narrative.”

Rather than simply taking the commandments of his religion for granted and following them without pause, Job confronts his Creator face to face and demands justification; in some sense this was the first time the door had been opened to theological critique and reflection.  The Greeks did something similar when they eventually began to apply reason and rationality to the examination of religion, stepping outside the confines of blindly following religious traditions and rituals.  But unlike the Greek, the Hebrew does not press this demand for justification through the use of reason but rather through a direct encounter of the person as a whole (Job, with his violence, passion, and all the rest) with an unknowable and awe-inspiring God.  Job doesn’t solve his problem with any rational resolution, but rather by changing his entire character.  His relation to God involves a mutual confrontation of two beings in their entirety: not just a rational facet of each looking for a reasonable explanation from the other, but each complete being facing the other, prepared to defend its entire character and identity.

Barrett mentions the Jewish philosopher Martin Buber here to help clarify things a little, by noting that this relation between Job and God is a relation between an I and a Thou (or a You).  Since Barrett doesn’t explain Buber’s work in much detail, I’ll briefly digress here to explain a few key points.  For those unfamiliar with Buber’s work, the I-Thou relation is a mode of living that is contrasted with another mode centered on the connection between an I and an It.  Both modes of living, the I-Thou and the I-It, are, according to Buber, the two ways that human beings can address existence; the two modes of living required for a complete and fulfilled human being.  The I-It mode encompasses the world of experience and sensation; treating entities as discrete objects to know about or to serve some use.  The I-Thou mode on the other hand encompasses the world of relations itself, where the entities involved are not separated by some discrete boundary, and where a living relationship is acknowledged to exist between the two; this mode of existence requires one to be an active participant rather than merely an objective observer.

It is Buber’s contention that modern life has entirely ignored the I-Thou relation, which has led to a feeling of unfulfillment and alienation from the world around us.   The first mode of existence, that of the I-It, involves our acquiring data from the world, analyzing and categorizing it, and then theorizing about it; and this leads to a subject-object separation, or an objectification of what is being experienced.  According to Buber, modern society almost exclusively uses this mode to engage with the world.  In contrast, with the I-Thou relation the I encounters the object or entity such that both are transformed by the relation; and there is a type of holism at play here where the You or Thou is not simply encountered as a sum of its parts but in its entirety.  To put it another way, it’s as if the You encountered were the entire universe, or that somehow the universe existed through this You.

Since this concept is a little nebulous, I think the best way to summarize Buber’s main philosophical point here is to say that we human beings find meaning in our lives through our relationships, and so we need to find ways of engaging with the world so as to maximize our relationships with it; and this is not limited to forming relationships with fellow human beings, but extends to other animals, inanimate objects, and so on, even if these relationships differ from one another in any number of ways.

I actually see some relevance between Buber’s “I and Thou” conception and Nietzsche’s idea of eternal recurrence: the idea that, given an infinite amount of time and a finite number of ways that matter and energy can be arranged, anything that has ever happened or that ever will happen will recur an infinite number of times.  In The Gay Science, Nietzsche mentions how the concept of eternal recurrence was, to him, horrifying and paralyzing.  But he also concluded that the desire for an eternal return or recurrence would show the ultimate affirmation of one’s life:

“What, if some day or night a demon were to steal after you into your loneliest loneliness and say to you: ‘This life as you now live it and have lived it, you will have to live once more and innumerable times more’ … Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus?  Or have you once experienced a tremendous moment when you would have answered him: ‘You are a god and never have I heard anything more divine.’”

The reason I find this relevant to Buber’s conception is two-fold: first, the fact that the universe is causally connected makes it inseparable to some degree, where each object or entity can be seen, in some sense, to represent the rest of the whole; and second, if one wishes for the eternal recurrence, then one holds a non-resistant attitude toward the world, an attitude that can learn to accept the things that are out of one’s control.  The universe effectively takes on the character of fate itself, and offers a being such as ourselves the opportunity to have a kind of “faith in fate”; to have a relation of trust with the universe as it is, as it once was, and as it will be in the future.

Now the kind of faith I’m speaking of here isn’t a brand that contradicts reason or evidence, but rather is simply a form of optimism and acceptance that colors one’s expectations and overall experience.  And since we are a part of this universe too, our attitude towards it should in many ways reflect our overall relation to it; which brings us back to Buber, where any encounter I might have with the universe (or any “part” of it) is an encounter with a You, that is to say, it is an I-Thou relation.

This “faith in fate” concept I just alluded to is a good segue back to the relation between Job and his God within Hebraism, as it was a relation of never-ending faith.  But importantly, as Barrett points out, this faith of Job’s takes on many shapes, including anger, dismay, revolt, and confusion.  For Job says, “Though he slay me, yet will I trust in him…but I will maintain my own ways before him.”  So Job’s faith consists in maintaining a form of trust in his Creator, even while he says that he will retain his own identity, his dispositions, and his entire being in doing so.  And this trust ultimately forms the relation between the two.  Barrett describes the kind of faith at play here as more fundamental and primary than that which many modern-day religious proponents would lay claim to.

“Faith is trust before it is belief–belief in the articles, creeds, and tenets of a Church with which later religious history obscures this primary meaning of the word.  As trust, in the sense of the opening up of one being toward another, faith does not involve any philosophical problem about its position relative to faith and reason.  That problem comes up only later when faith has become, so to speak, propositional, when it has expressed itself in statements, creeds, systems.  Faith as a concrete mode of being of the human person precedes faith as the intellectual assent to a proposition, just as truth as a concrete mode of human being precedes the truth of any proposition.”

Although I see faith as belief as fundamentally flawed and dangerous, I can certainly respect the idea of faith as trust, and consequently the idea of faith as a concrete mode of being, where this mode of being is effectively an attitude of openness taken toward another.  And I agree with Barrett’s claim that truth as a concrete mode of human being precedes the truth of any proposition, in the sense that a basis and desire for truth are needed before the truth value of any proposition can be evaluated.  But I don’t believe one can ever justifiably move from faith as trust to faith as belief, for the simple reason that if one has good reason to trust another, then one doesn’t need faith to mediate any beliefs stemming from that trust.  And of course, what matters most here is describing and evaluating what the “Hebrew man of faith” consists of, rather than criticizing the concept of faith itself.

Another interesting historical development stemming from Hebraism pertains to the concept of faith as it evolved within Protestantism.  As Barrett tells us:

“Protestantism later sought to revive this face-to-face confrontation of man with his God, but could produce only a pallid replica of the simplicity, vigor, and wholeness of this original Biblical faith…Protestant man would never have dared confront God and demand an accounting of His ways.  That era in history had long since passed by the time we come to the Reformation.”

As an aside, it’s worth mentioning here that Christianity actually developed as a syncretism between Hebraism and Hellenism; the two very cultures under analysis in this chapter.  By combining Jewish elements (e.g. monotheism, the substitutionary atonement of sins through blood-magic, apocalyptic-messianic resurrection, interpretation of Jewish scriptures, etc.) with Hellenistic religious elements (e.g. dying-and-rising savior gods, virgin birth of a deity, fictive kinship, etc.), the cultural diffusion that occurred resulted in a religion that was basically a cosmopolitan personal salvation cult.

But eventually Christianity became a state religion (after the age of Constantine), resulting in a theocracy that heavily enforced a particular conception of God onto the people.  Once this occurred, the religion was fully contained within a culture that made it very difficult to question these conceptions or to demand justification for them.  And it may be that the intimidation propagated by the prevailing religious authorities became conflated with an attribute of God himself, where a fear of questioning any conception of God became integrated into theological belief about God.

Perhaps it was because the primary conceptions of God, once Christianity entered the Medieval period, were imposed externally on everybody rather than discovered in a more personal and introspective way (even if any such discovery is unavoidably initiated and reinforced by the surrounding culture); this externalized the attributes of God such that they became more heavily influenced by the seemingly unquestionable claims of those in power.  Either way, the historical contingencies surrounding the advent of Christianity involved sectarian battles, with the winning sect using its dogmatic authority to suppress the views (and the Gospels and scriptures) of the other sects.  And this may have inhibited people from ever questioning their God or demanding justification for what they interpreted to be God’s actions.

One final point in this section that I’d like to highlight is in regard to the concept of knowledge as it relates to Hebraism.  Barrett distinguishes this knowledge from that of the Greeks:

“We have to insist on a noetic content in Hebraism: Biblical man too had his knowledge, though it is not the intellectual knowledge of the Greek.  It is not the kind of knowledge that man can have through reason alone, or perhaps not through reason at all; he has it rather through body and blood, bones and bowels, through trust and anger and confusion and love and fear; through his passionate adhesion in faith to the Being whom he can never intellectually know.  This kind of knowledge a man has only through living, not reasoning, and perhaps in the end he cannot even say what it is he knows; yet it is knowledge all the same, and Hebraism at its source had this knowledge.”

I may not entirely agree with Barrett here, but any disagreement is only likely to be a quibble over semantics, relating to how he and I define noetic and how we each define knowledge.  The word noetic actually derives from the Greek adjective noētikos which means “intellectual”, and thus we are presumably considering the intellectual knowledge found in Hebraism.  Though this knowledge may not be derived from reason alone, I don’t think (as Barrett implies above) that it’s even possible to have any noetic content without at least some minimal use of reason.  It may be that the kind of intellectual content he’s alluding to is that which results from a kind of synthesis between reason and emotion or intuition, but it would seem that reason would still have to be involved, even if it isn’t as primary or dominant as in the intellectual knowledge of the Greek.

With regard to how Barrett and I each define knowledge, I would say, as most other philosophers have (going all the way back to Plato), that one must distinguish knowledge from all other beliefs, since knowledge is merely a subset of one’s beliefs (otherwise any belief whatsoever would count as knowledge).  To distinguish knowledge as a subset of one’s beliefs, my definition of knowledge can be roughly stated as:

“Recognized patterns of causality that are stored into memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by having made successful predictions and/or successfully accomplishing goals through the use of said recalled patterns.”

My conception of knowledge also takes what one might call unconscious knowledge into account (which may operate more automatically and less explicitly than conscious knowledge); as long as it is acquired information that allows you to accomplish goals and make successful predictions about the causal structure of your experience (whether internal feelings or other mental states, or perceptions of the external world), it counts as knowledge nevertheless.  Now there may be different degrees of reliability and usefulness of different kinds of knowledge (such as intuitive knowledge versus rational or scientific knowledge), but those distinctions don’t need to be parsed out here.

What Barrett seems to be describing here as a Hebraistic form of knowledge is something that is deeply embodied in the human being; in the ways that one lives their life that don’t involve, or aren’t dominated by, reason or conscious abstractions.  Instead, there seems to be a more organic process involved of viewing the world and interacting with it in a manner relying more heavily on intuition and emotionally-driven forms of expression.  But, in order to give rise to beliefs that cohere with one another to some degree, to form some kind of overarching narrative, reason (I would argue) is still employed.  There’s still some logical structure to the web of beliefs found therein, even if reason and logic could be used more rigorously and explicitly to turn many of those beliefs on their head.

And when it comes to the Hebraistic conception of God, including the passionate adhesion in faith to a Being whom one can never intellectually know (as Barrett puts it), I think this can be better explained by the fact that we do have reason and rationality, unlike most other animals, as well as cognitive biases such as hyperactive agency detection.  It seems to me that the Hebraistic God concept is more or less derived from an agglomeration of the unknown sources of one’s emotional experiences (especially those involving an experience of the transcendent) and the unpredictable attributes of life itself, then ascribing agency to that ensemble of properties, and (to use Buber’s terms) establishing a relationship with that perceived agency; and in this case, the agent is simply referred to as God (Yahweh).

But in order to infer the existence of such an ensemble, it would seem to require a process of abstracting from particular emotional and unpredictable elements and instances of one’s experience to conclude some universal source for all of them.  Perhaps if this process is entirely unconscious we can say that reason wasn’t at all involved in it, but I suspect that the ascription of agency to something that is on par with the universe itself and its large conjunction of properties which vary over space and time, including its unknowable character, is more likely mediated by a combination of cognitive biases, intuition, emotion, and some degree of rational conscious inference.  But Barrett’s point still stands that the noetic content in Hebraism isn’t dominated by reason as in the case of the Greeks.

2. Greek Reason

Even though existential philosophy is largely a rebellion against the Platonic ideas that have permeated Western thought, Barrett reminds us that there is still some existential aspect to Plato’s works.  Perhaps this isn’t surprising once one realizes that he actually began his adult life aspiring to be a dramatic poet.  Eventually he abandoned this dream, undergoing a sort of conversion, and decided to dedicate the rest of his life to the search for wisdom, inspired by Socrates.  Even though he waged a lifelong war against the poets, he still maintained a piece of his poetic character in his writings, up to the end when he wrote his great creation myth, the Timaeus.

By far, Plato’s biggest contribution to Western thought was his differentiating rational consciousness from the rest of our human psyche.  Prior to this move, rationality was in some sense subsumed under the same umbrella as emotion and intuition.  And this may be one of the biggest changes to how we think, and one that is so often taken for granted.  Barrett describes the position we’re in quite well:

“We are so used today to taking our rational consciousness for granted, in the ways of our daily life we are so immersed in its operations, that it is hard at first for us to imagine how momentous was this historical happening among the Greeks.  Steeped as our age is in the ideas of evolution, we have not yet become accustomed to the idea that consciousness itself is something that has evolved through long centuries and that even today, with us, is still evolving.  Only in this century, through modern psychology, have we learned how precarious a hold consciousness may exert upon life, and we are more acutely aware therefore what a precious deal of history, and of effort, was required for its elaboration, and what creative leaps were necessary at certain times to extend it beyond its habitual territory.”

Barrett’s mention of evolution is important here, because I think we ought to distinguish between the two forms of evolution that have affected how we think and experience the world.  On the one hand, we have our biological evolution to consider, where our brains have undergone dramatic changes in terms of the level of consciousness we actually experience and the degree of causal complexity that the brain can model; and on the other hand, we have our cultural evolution to consider, where our brains have undergone a number of changes in terms of the kinds of “programs” we run on it, how those programs are run and prioritized, and how we process and store information in various forms of external hardware.

In regard to biological evolution, long before our ancestors evolved into humans, the brain had nothing more than a protoself representation of our body and self (to use the neuroscientist Antonio Damasio’s terminology); it had nothing more than an unconscious state that served as a sort of basic map in the brain tracking the overall state of the body as a single entity in order to accomplish homeostasis.  Then, beyond this protoself there evolved a more sophisticated level of core consciousness where the organism became conscious of the feelings and emotions associated with the body’s internal states, and also became conscious that her feelings and thoughts were her own, further enhancing the sense of self, although still limited to the here-and-now or the present moment.  Finally, beyond this layer of self there evolved an extended consciousness: a form of consciousness that required far more memory, but which brought the organism into an awareness of the past and future, forming an autobiographical self with a perceptual simulator (imagination) that transcended space and time in some sense.

Once humans developed language, then we began to undergo our second form of evolution, namely culture.  After cultural evolution took off, human civilization led to a plethora of new words, concepts, skills, interests, and ways of looking at and structuring the world.  And this evolutionary step was vastly accelerated by written language, the scientific method, and eventually the invention of computers.  But in the time leading up to the scientific revolution, the industrial revolution, and finally the advent of computers and the information age, it was arguably Plato’s conceptualization of rational consciousness that paved the way forward to eventually reach those technological and epistemological feats.

It was Plato’s analysis of the human psyche and his effectively distilling the process of reason and rationality from the rest of our thought processes that allowed us to manipulate it, to enhance it, and to explore the true possibilities of this markedly human cognitive capacity.  Aristotle and many others since have merely built upon Plato’s monumental work, developing formal logic, computation, and other means of abstract analysis and information processing.  With the explicit use of reason, we’ve been able to overcome many of our cognitive biases (serving as a kind of “software patch” to our cognitive bugs) in order to discover many true facts about the world, allowing us to unlock a wealth of knowledge that had previously been inaccessible to us.  And it’s important to recognize that Plato’s impact on the history of philosophy has highlighted, more than anything else, our overall psychic evolution as a species.

Despite all the benefits that culture has brought us, there has been one inherent problem with the cultural evolution we’ve undergone: a large desynchronization between our cultural and biological evolution.  That is to say, our culture has evolved far, far faster than our biology ever could, and thus our biology hasn’t kept up with, or adapted us to, the cultural environment we’ve been creating.  And I believe this is a large source of our existential woes; for we have become a “fish out of water” (so to speak) where modern civilization and the way we’ve structured our day-to-day lives is incredibly artificial, filled with a plethora of supernormal stimuli and other factors that aren’t as compatible with our natural psychology.  It makes perfect sense then that many people living in the modern world have had feelings of alienation, loneliness, meaninglessness, anxiety, and disorientation.  And in my opinion, there’s no good or pragmatic way to fix this aside from engineering our genes such that our biology is able to catch up to our ever-changing cultural environment.

It’s also important to recognize that Plato’s idea of the universal, explicated in his theory of Forms, was one of the main impetuses for contemporary existential philosophy; not because it was endorsed, but rather because the idea that a universal such as “humanity” was somehow more real than any actual individual person fueled a revolt against such notions.  And it wasn’t until Kierkegaard and Nietzsche appeared on the scene in the nineteenth century that we saw an explicit attempt to reverse this Platonic idea; where the individual was finally given precedence and priority over the universal, and where a philosophy of existence was given priority over one of essence.  But one thing the existentialists maintained, as derived from Plato, was his conception of philosophizing as a personal means of salvation and transformation, and a concrete way of living (though this was, according to Plato, best exemplified by his teacher Socrates).

As for the modern existentialists, it was Kierkegaard who eventually brought the figure of Socrates back to life, long after Plato’s later portrayal of Socrates, which had effectively dehumanized him:

“The figure of Socrates as a living human presence dominates all the earlier dialogues because, for the young Plato, Socrates the man was the very incarnation of philosophy as a concrete way of life, a personal calling and search.  It is in this sense too that Kierkegaard, more than two thousand years later, was to revive the figure of Socrates–the thinker who lived his thought and was not merely a professor in an academy–as his precursor in existential thinking…In the earlier, so-called “Socratic”, dialogues the personality of Socrates is rendered in vivid and dramatic strokes; gradually, however, he becomes merely a name, a mouthpiece for Plato’s increasingly systematic views…”

By the time we get to Plato’s student, Aristotle, philosophy had become a purely theoretical undertaking, effectively replacing the subjective qualities of a self-examined life with a far less visceral objective analysis.  Indeed, by this point in time, as Barrett puts it: “…the ghost of the existential Socrates had at last been put to rest.”

As in all cases throughout history, we must take the good with the bad.  And we very likely wouldn’t have the sciences today had it not been for Plato detaching reason from the poetic, religious, and mythological elements of culture and thought, thus giving reason its own identity for the first time in history.  Whatever may have happened, we need to try and understand Greek rationalism as best we can such that we can understand the motivations of those that later objected to it, especially within modern existentialism.

When we look back to Aristotle’s Nicomachean Ethics, for example, we find a fairly balanced perspective of human nature and the many motivations that drive our behavior.  But, in evaluating all the possible goods that humans can aim for, in order to derive an answer to the ethical question of what one ought to do above all else, Aristotle claimed that the highest life one could attain was the life of pure reason, the life of the philosopher and the theoretical scientist.  Aristotle thought that reason was the highest part of our personality, where one’s capacity for reason was treated as the center of one’s real self and the core of their identity.  It is this stark description of rationalism that diffused through Western philosophy until bumping heads with modern existentialist thought.

Aristotle’s rationalism even permeated the Christianity of the Medieval period, where it maintained an albeit uneasy relationship between faith and reason as the center of the human personality.  And the quest for a complete view of the cosmos further propagated a valuing of reason as the highest human function.  The inherent problems with this view simply didn’t surface until some later thinkers began to see human existence and the potential of our reason as having a finite character with limitations.  Only then was the grandiose dream of having a complete knowledge of the universe and of our existence finally shattered.  Once this goal was seen as unattainable, we were left alone on an endless road of knowledge with no prospects for any kind of satisfying conclusion.

We can certainly appreciate the value of theoretical knowledge, and even develop a passion for discovering it, but we mustn’t lose sight of the embodied, subjective qualities of our human nature; nor can we successfully argue any longer that the latter can be dismissed due to a goal of reaching some absolute state of knowledge or being.  That goal is not within our reach, and so trying to make arguments that rely on its possibility is nothing more than an exercise of futility.

So now we must ask ourselves a very important question:

“If man can no longer hold before his mind’s eye the prospect of the Great Chain of Being, a cosmos rationally ordered and accessible from top to bottom to reason, what goal can philosophers set themselves that can measure up to the greatness of that old Greek ideal of the bios theoretikos, the theoretical life, which has fashioned the destiny of Western man for millennia?”

I would argue that the most important goal for philosophy has been and always will be the quest for discovering as many moral facts as possible, such that we can attain eudaimonia as Aristotle advocated for.  But rather than holding a life of reason as the greatest good to aim for, we should simply aim for maximal life fulfillment and overall satisfaction with one’s life, and not presuppose what will accomplish that.  We need to use science and subjective experience to inform us of what makes us happiest and the most fulfilled (taking advantage of the psychological sciences), rather than making assumptions about what does this best and simply arguing from the armchair.

And because our happiness and life fulfillment are necessarily evaluated through subjectivity, we mustn’t make the same mistakes of positivism and simply discard our subjective experience.  Rather we should approach morality with the recognition that we are each an embodied subject with emotions, intuitions, and feelings that motivate our desires and our behavior.  But we should also ensure that we employ reason and rationality in our moral analysis, so that our irrational predispositions don’t lead us farther away from any objective moral truths waiting to be discovered.  I’m sympathetic to a quote of Kierkegaard’s that I mentioned at the beginning of this post:

“What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act.”

I agree with Kierkegaard here, in that moral imperatives are the most important topic in philosophy, and should be the most important driving force in one’s life, rather than simply a drive for knowledge for its own sake.  But of course, in order to get clear about what one must do, one first has to know a number of facts pertaining to how any moral imperatives are best accomplished, and what those moral imperatives ought to be (as well as which are most fundamental and basic).  I think the most basic axiomatic moral imperative within any moral system that is sufficiently motivating to follow is going to be that which maximizes one’s life fulfillment; that which maximizes one’s chance of achieving eudaimonia.  I can’t imagine any greater goal for humanity than that.

Here is the link for part 5 of this post series.


Irrational Man: An Analysis (Part 1, Chapter 1: “The Advent of Existentialism”)

William Barrett’s Irrational Man is a nice exposition on existential philosophy which begins by exploring the state of modern humanity and philosophy and tracing its roots from ancient Greece, its development through the Medieval period and the Enlightenment, all the way to the mid-twentieth century.  He explores what he believes to be the primary cultural sources of existentialism and then surveys the contributions of perhaps the four most prominent existential philosophers: namely, Kierkegaard, Nietzsche, Heidegger, and Sartre.  I’d like to explore Barrett’s book here in more detail and I’m going to break this down into an analysis of every section and chapter, with each chapter analyzed within a separate blog post.  Below is the first post of this series; Part 1, Chapter 1: The Advent of Existentialism.

Part I: “The Present Age”

Ch. 1 – The Advent of Existentialism

Early on, Barrett gives a brief description of positivism: a philosophical theory which holds not only that science is what distinguishes our post-Enlightenment civilization from all others, but also that science should be the ultimate ruler of human life.  Barrett remarks that science has never held this role before, nor could it, given the details of our psychology as human beings.  It’s true that science has never held this role, and it’s also true that the way we generally use science is ill-suited to the job of guiding our day-to-day lives in order to meet all of our psychological needs.

However, I think it would be mistaken to say that the scientific method, and empirical methods generally, can’t be used (even in principle) to determine (or to help determine) the choices one ought to make in one’s life.  While science as an enterprise isn’t generally used in this way (we tend to use it to solve more specific technical challenges and to determine well-defined mechanisms underlying various phenomena), we shouldn’t simply assume that the knowledge we’re able to gain from it will never include information pertaining to our decision-making, our preferences and values, and our ultimate goals in life.  On top of this, if one wanted to know whether or not a life “ruled by” science could meet all of one’s psychological needs, one could only test this hypothesis by employing (at the very least) an informal version of the scientific method.  So in some rudimentary sense, science and its methods (of testing hypotheses and building upon the results of such testing) are unavoidable as they pervade our lives and are inseparable from any falsifiable inquiry that arises therein.

On the flip-side, we shouldn’t assume that science on its own is capable of anything at all, let alone meeting all of our needs as a species.  What I mean by this, and one thing that I’m sure Barrett would have agreed with, is that the use of science itself and the desire to use it for some particular aim first requires an underlying set of philosophical views such as some kind of an epistemology, an ethics, etc.  This also means that science as a concept and as an instrument for gaining knowledge shouldn’t be criticized if it leads to undesirable consequences; rather it is the philosophical views of the scientist(s) undertaking some research project, and/or the philosophical views of the people that use that knowledge once it has been discovered, that should be criticized accordingly.

Barrett goes on to say:

“Positivist man is a curious creature who dwells in the tiny island of light composed of what he finds scientifically ‘meaningful,’ while the whole surrounding area in which ordinary men live from day to day and have their dealings with other men is consigned to the outer darkness of the ‘meaningless.’”

And I couldn’t agree more that this kind of positivist thinking is flawed and incomplete as we need to take introspection, intuition, and raw experience into any complete account of our reality.  The German theoretical physicist Werner Heisenberg actually echoed similar sentiments in his later life where he said:

“The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can any one conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear we would probably be left with completely uninteresting and trivial tautologies.”

In Heisenberg’s quote here we can see the relevance of thinkers like Wittgenstein and Nietzsche, and how they explored different conceptions of meaning as well as the importance of (what Nietzsche called) perspectivism, or striving to look at the world as a whole or at any particular phenomena from as many viewpoints as possible without becoming trapped in the constraints of our language and culture.  In order to avoid dogmatism, we must be willing to at least consider different ontologies and different ways of looking at our own existence, our place in the world, and what is most important to us.  And although science shouldn’t be excluded from our sources of meaning or from our methods of determining what is and what is not meaningful, people shouldn’t expect these concepts to be restricted to the domain of science.

So what is existentialism then, according to Barrett?  Well, he sees it as a philosophical movement (and a kind of revolt) against the oversimplification of man (human beings) as assumed within positivism.  It seeks to replace this fractured view of man and instead gather all the facets of the human condition and assemble them into one coherent picture of man.  And it does so even when it requires acknowledging the darker and more questionable parts of our nature and existence; by exploring and accepting the uglier side of humanity that many in the Enlightenment tried to discount and leave by the wayside.

This post-Enlightenment view of man, which pictured man as inherently rational, went largely unchallenged for more than a hundred years (until Kierkegaard), and aside from Kierkegaard’s works which Barrett explores, I think we could also perhaps credit the work of Charles Darwin and his On the Origin of Species as well as his The Descent of Man, for firmly challenging any prevailing doubts about our animalistic and irrational origins.  Once it became apparent that human beings were the distant cousins of other primates and the more distant cousins of fish and reptiles and so on, it became that much harder to distance ourselves from the irrationality that pervades the rest of the animal kingdom.  And so it became harder to deny that we still had some level of irrationality at the core of our being, even if it was accompanied with a capacity for reason and rationality.

In the next post in this series, I’ll explore Irrational Man, Part 1, Chapter 2: The Encounter with Nothingness.

Looking Beyond Good & Evil

Nietzsche’s Beyond Good and Evil serves as a thorough overview of his philosophy and is definitely one of his better works (although I think Thus Spoke Zarathustra is much more fun to read).  It’s definitely worth reading (if you haven’t already) to put a fresh perspective on many widely held philosophical assumptions that are often taken for granted.  While I’m not interested in all of the claims he makes (and he makes many, in nearly 300 aphorisms spread over nine chapters), there are at least several ideas that I think are worth analyzing.

One theme presented throughout the book is Nietzsche’s disdain for classical conceptions of truth and any form of absolutism or dogmatism.  With regard to truth or his study on the nature of truth (what we could call his alethiology), he subscribes to a view that he coined as perspectivism.  For Nietzsche, there are no such things as absolute truths but rather there are only different perspectives about reality and our understanding of it.  So he insists that we shouldn’t get stuck in the mud of dogmatism, and should instead try to view what is or isn’t true with an open mind and from as many points of view as possible.

Nietzsche’s view here is in part fueled by his belief that the universe is in a state of constant change, as well as his belief in the fixity of language.  Since the universe is in a state of constant change, language generally fails to capture this dynamic essence.  And since philosophy is inherently connected to the use of language, false inferences and dogmatic conclusions will often manifest from it.  Wittgenstein belonged to a similar school of thought (likely building off of Nietzsche’s contributions) where he claimed that most of the problems in philosophy had to do with the limitations of language, and therefore, that those philosophical problems could only be solved (if at all) through an in-depth evaluation of the properties of language and how they relate to its use.

This school of thought certainly has merit, given that language has a relatively fixed syntactic structure yet a dynamic, probabilistic semantic structure.  We need language to communicate our thoughts to one another, and so this requires some kind of consistency or stability in its syntactic structure.  But the meaning behind the words we use is established through use, through context, and ultimately through associations between probabilistic conceptual structures and relatively stable or fixed visual and audible symbols (written or spoken words).  Since the semantic structure of language has fuzzy boundaries, and yet is attached to relatively fixed words and grammar, it produces the illusion of a reality that is essentially unchanging.  And this results in the kinds of philosophical problems and dogmatic thinking that Nietzsche warns us of.

It’s hard to disagree with Nietzsche’s view that dogmatism is to be avoided at all costs, and that absolute truths are generally specious at best (let alone dangerous), and philosophy owes a lot to Nietzsche for pointing out the need to reject this kind of thinking.  Nietzsche’s rejection of absolutism and dogmatism is made especially clear in his views on the common conceptions of God and morality.  He points out how these concepts have changed a lot over the centuries (where, for example, the meaning of good has undergone a complete reversal throughout its history), and this is despite the fact that throughout that time, the proponents of those particular views of God or morality believe that these concepts have never changed and will never change.

Nietzsche believes that all change is ultimately driven by a will to power, where this will is a sort of instinct for autonomy, and which also consists of a desire to impose one’s will onto others.  So the meaning of these concepts (such as God or morality), and of countless others, has only changed because the concepts have been re-appropriated by different or conflicting wills to power.  As such, he thinks that the meaning and interpretation of a concept illustrate the attributes of the particular will making use of those concepts, rather than some absolute truth about reality.  I think this idea makes a lot of sense if we regard the will to power as encompassing not only the desires belonging to any individual (most especially their desire for autonomy), but also the inferences they’ve made about reality, its apparent causal relations, etc., which provide the content for those desires.  So any will to power is effectively colored by the world view held by the individual, and this world view or set of inferred causal relations includes one’s inferences pertaining to language and any meaning ascribed to the words and concepts that one uses.

Even more interesting to me is the distinction Nietzsche makes between what he calls an unrefined use or version of the will to power and one that is refined.  The unrefined form of a will to power takes the desire for autonomy and directs it outward (perhaps more instinctually) in order to dominate the will of others.  The refined version on the other hand takes this desire for autonomy and directs it inward toward oneself, manifesting itself as a kind of cruelty which causes a person to constantly struggle to make themselves stronger, more independent, and to provide them with a deeper character and perhaps even a deeper understanding of themselves and their view of the world.  Both manifestations of the will to power seem to try and simply maximize one’s power over as much as possible, but the latter refined version is believed by Nietzsche to be superior and ultimately a more effective form of power.

We can better illustrate this view by considering a person who tries to dominate others to gain power in part because they lack the ability to gain power over their own autonomy, and then compare this to a person who gains control over their own desires and autonomy and therefore doesn’t need to compensate for any inadequacy by dominating others.  A person who feels a need to dominate others is in effect dependent on those subordinates (and dependence implies a certain lack of power), but a person who increases their power over themselves gains more independence and thus a form of freedom that is otherwise not possible.

I like this internal/external distinction that Nietzsche makes, but I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, but with the added caveat that this autonomy serves to fulfill a desire for maximal causal power by harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try and change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try and change ourselves and our view of the world (up to and including changing our desires if we find them to be misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and then we could conceivably regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that our view of the world, as well as our actions and desires, can properly be described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could say that our will to power depends on our predictions of the world and its many apparent causal relations.  We could equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires).  Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if one subjugates the other’s will to power to one’s own.  And one can also increase the success of one’s predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing one’s desires and actions as well.

Another thing to consider about a desire for autonomy is that it is better formulated if it includes whatever is required for its own sustainability.  Dominating other wills to power will often serve to promote a sustainable autonomy for the dominator, because those other wills are then less likely to become dominators themselves and reverse the direction of dominance; this preserves the autonomy of the dominating will to power.  This shows how this particular external expression of a will to power could be naturally selected for (under certain circumstances at least), which Nietzsche himself even argued (though in an anti-Darwinian form, since genes are not the unit of selection here, but rather behaviors).  This type of behavioral selection would explain its prevalence in the animal kingdom, including in a number of primate species aside from human beings.  I do think however that we’ve found many ways of overcoming the need or impulse to dominate, and it has a lot to do with having adopted social contract theory, since in my view it provides a way of maximizing the average will to power for all parties involved.

Coming back to Nietzsche’s take on language, truth, and dogmatism, we can also see that an increasingly potent will to power is more easily achievable if it is able to formulate and test new tentative predictions about the world, rather than being locked into some set of predictions (which is dogmatism at its core).  Being able to adapt one’s predictions is equivalent to considering and adopting a new point of view, a capability which Nietzsche described as inherent in any free spirit.  It also amounts to being able to more easily free ourselves from the shackles of language, just as Nietzsche advocated, since new points of view affect the meaning that we ascribe to words and concepts.  I would add to this the fact that new points of view can also increase our chances of making more successful predictions that constitute our understanding of the world (and ourselves), because we can test them against our previous world view and see whether this leads to more or less error, better parsimony, and so on.

Nietzsche’s hope was that one day all philosophy would be flexible enough to overcome its dogmatic prejudices, its claims of absolute truths, including those revolving around morality and concepts like good and evil.  He sees these concepts as nothing more than superficial expressions of one particular will to power or another, and thus he wants philosophy to eventually move itself beyond good and evil.  Personally, I am a proponent of an egoistic goal theory of morality, which grounds all morality on what maximizes the satisfaction and life fulfillment of the individual (which includes cultivating virtues such as compassion, honesty, and reasonableness), and so I believe that good and evil, when properly defined, are more than simply superficial expressions.

But I agree with Nietzsche in part, because I think these concepts have been poorly defined and misused such that they appear to have no inherent meaning.  And I also agree with Nietzsche in that I reject moral absolutism and its associated dogma (as found in religion most especially), because I believe morality to be dynamic in various ways, contingent on the specific situations we find ourselves in and the ever-changing idiosyncrasies of our human psychology.  Human beings often find themselves in completely novel situations and cultural evolution is making this happen more and more frequently.  And our species is also changing because of biological evolution as well.  So even though I agree that moral facts exist (with an objective foundation), and that a subset of these facts are likely to be universal due to the overlap between our biology and psychology, I do not believe that any of these moral facts are absolute because there are far too many dynamic variables for an absolute morality to work even in principle.  Nietzsche was definitely on the right track here.

Putting this all together, I’d like to think that our will to power has an underlying impetus, namely a drive for maximal satisfaction and life fulfillment.  If this is true then our drive for maximal autonomy and control over our experience and understanding of the world serves to fulfill what we believe will maximize this overall satisfaction and fulfillment.  However, when people are irrational, dogmatic, and/or are not well-informed on the relevant facts (and this happens a lot!), this is more likely to lead to an unrefined will to power that is less conducive to achieving that goal, where dominating the wills of others and any number of other immoral behaviors overtakes the character of the individual.  Our best chance of finding a fulfilling path in life (while remaining intellectually honest) is going to require an open mind, a willingness to criticize our own beliefs and assumptions, and a concerted effort to try and overcome our own limitations.  Nietzsche’s philosophy (much of it at least) serves as a powerful reminder of this admirable goal.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part VI)

This is the last post I’m going to write for this particular post-series on Predictive Processing (PP).  Here are the links to parts 1, 2, 3, 4, and 5.  I’ve already explored a bit of how a PP framework can account for folk psychological concepts like beliefs, desires, and emotions, how it accounts for action, language and ontology, knowledge, and also perception, imagination, and reasoning.  In this final post for this series, I’m going to explore consciousness itself and how some theories of consciousness fit very nicely within a PP framework.

Consciousness as Prediction (Predicting The Self)

Earlier in this post-series I explored how PP treats perception (whether online or an offline form like imagination) as simply predictions pertaining to incoming sensory information, with varying degrees of precision weighting assigned to the resulting prediction error.  In a sense then, consciousness just is prediction.  At the very least, it is a subset of those predictions, likely those that are higher up in the predictive hierarchy.  This is all going to depend on which aspect or level of consciousness we are trying to explain, and as philosophers and cognitive scientists well know, consciousness is difficult to pin down and define in any precise way.

By consciousness, we could mean any kind of awareness at all, or we could limit the term to apply only to a system that is aware of itself.  Either way, we have to be careful if we’re looking to distinguish between consciousness generally speaking (consciousness in any form, which may be unrecognizable to us) and the unique kind of consciousness that we as human beings experience.  If an ant is conscious, it likely doesn’t have any of the richness that we have in our experience, nor is it likely to have self-awareness like we do (even though a dolphin, which has a large neocortex and prefrontal cortex, is far more likely to).  So we have to keep these different levels of consciousness in mind in order to properly assess how they might be explained by any cognitive framework.

Looking through a PP lens, we can see that what we come to know about the world and our own bodily states is a matter of predictive models pertaining to various inferred causal relations.  These inferred causal relations ultimately stem from bottom-up sensory input.  But when this information is abstracted at higher and higher levels, eventually one can (in principle) get to a point where those higher level models begin to predict the existence of a unified prediction engine.  In other words, a subset of the highest-level predictive models may eventually predict itself as a self.  We might describe this process as the emergence of some level of self-awareness, even if higher levels of self-awareness aren’t possible unless particular kinds of higher level models have been generated.

What kinds of predictions might be involved with this kind of emergence?  Well, we might expect that predictions pertaining to our own autobiographical history, which is largely composed of episodic memories of our past experience, would contribute to this process (e.g. “I remember when I went to that amusement park with my friend Mary, and we both vomited!”).  If we begin to infer what is common or continuous between those memories of past experiences (even if only 1 second in the past), we may discover that there is a form of psychological continuity or identity present.  And if this psychological continuity (this type of causal relation) is coincident with an incredibly stable set of predictions pertaining to (especially internal) bodily states, then an embodied subject or self can plausibly emerge from it.

This emergence of an embodied self is also likely fueled by predictions pertaining to other objects that we infer existing as subjects.  For instance, in order to develop a theory of mind about other people, that is, in order to predict how other people will behave, we can’t simply model their external behavior as we can for something like a rock falling down a hill.  This can work up to a point, but eventually it’s just not good enough as behaviors become more complex.  Animal behavior, most especially that of humans, is far more complex than that of inanimate objects and as such it is going to be far more effective to infer some internal hidden causes for that behavior.  Our predictions would work well if they involved some kind of internal intentionality and goal-directedness operating within any animate object, thereby transforming that object into a subject.  This should be no less true for how we model our own behavior.

If this object that we’ve now inferred to be a subject seems to behave in ways that we see ourselves as behaving (especially if it’s another human being), then we can begin to infer some kind of equivalence despite being separate subjects.  We can begin to infer that they too have beliefs, desires, and emotions, and thus that they have an internal perspective that we can’t directly access, just as they aren’t able to access ours.  And we can also see ourselves from a different perspective based on how we see those other subjects from an external perspective.  Since I can’t easily see myself from an external perspective, when I look at others and infer that we are similar kinds of beings, I can begin to see myself as having both an internal and an external side.  I can begin to infer that others see me much as I see them, thus further adding to my concept of self and the boundaries that define that self.

Multiple meta-cognitive predictions can be inferred based on all of these interactions with others, and from the introspective interactions with our brain’s own models.  Once this happens, a cognitive agent like ourselves may begin to think about thinking and think about being a thinking being, and so on and so forth.  All of these cognitive moves would seem to provide varying degrees of self-hood or self-awareness.  And these can all be thought of as the brain’s best guesses that account for its own behavior.  Either way, it seems that the level of consciousness that is intrinsic to an agent’s experience is going to be dependent on what kinds of higher level models and meta-models are operating within the agent.

Consciousness as Integrated Information

One prominent theory of consciousness is the Integrated Information Theory of Consciousness, otherwise known as IIT.  This theory, initially formulated by Giulio Tononi back in 2004 and developed ever since, posits that consciousness ultimately depends on the degree of information integration inherent in the causal properties of a system.  Another way of saying this is that the causal system specified is unified such that every part of the system must be able to affect and be affected by the rest of the system.  If you were to physically isolate one part of a system from the rest of it (choosing the part whose removal matters least to the rest of the system), then the resulting change in the cause-effect structure of the system would quantify the degree of integration.  A large change in the cause-effect structure resulting from this part’s isolation from the system (that is, from having introduced what is called a minimum partition to the system) would imply a high degree of information integration, and vice versa.  And again, a high degree of integration implies a high degree of consciousness.
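For readers who like to see ideas in code, here is a toy illustration (in Python) of the partitioning idea just described.  It is emphatically not the real phi calculation from IIT, which is far more involved; it is just a proxy of my own construction, showing how one might quantify “integration” as the divergence between a system’s true cause-effect structure and the best prediction you can make after cutting the system into independently modeled parts.

```python
import numpy as np

# Toy proxy for "integration": how much does modeling a system as two
# independent halves change its cause-effect structure?  This is NOT the
# IIT phi calculation -- just a simplified KL-divergence illustration.

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Two binary units A and B; state index = 2*A + B.
# "Integrated" system: each unit copies the OTHER unit's previous state,
# so the parts causally constrain one another.
T_true = np.zeros((4, 4))
for a in (0, 1):
    for b in (0, 1):
        T_true[2*a + b, 2*b + a] = 1.0   # (a, b) -> (b, a)

def partitioned(T):
    # Model each unit as if it depended only on itself (a crude stand-in
    # for the minimum-partition comparison), then recombine independently.
    pa, pb = np.zeros((2, 2)), np.zeros((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            row = T[2*a + b]
            for a2 in (0, 1):
                for b2 in (0, 1):
                    pa[a, a2] += row[2*a2 + b2] / 2
                    pb[b, b2] += row[2*a2 + b2] / 2
    Tp = np.zeros_like(T)
    for a in (0, 1):
        for b in (0, 1):
            for a2 in (0, 1):
                for b2 in (0, 1):
                    Tp[2*a + b, 2*a2 + b2] = pa[a, a2] * pb[b, b2]
    return Tp

# Integration proxy: average divergence between the true next-state
# distribution and the partitioned prediction, over all current states.
Tp = partitioned(T_true)
phi_proxy = np.mean([kl(T_true[s], Tp[s]) for s in range(4)])
print(f"integration proxy: {phi_proxy:.3f}")   # > 0: the parts constrain each other
```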

Notice how this information integration axiom in IIT implies that a cognitive system that is entirely feed-forward will not be conscious.  So if our brain processed incoming sensory information purely from the bottom up, with no top-down generative model feeding back down through the system, then IIT would predict that our brain couldn’t produce consciousness.  PP, on the other hand, posits a feedback system (as opposed to a feed-forward one) where the bottom-up sensory information that flows upward is met with a downward flow of top-down predictions trying to explain away that sensory information.  The brain’s predictions cause a change in the resulting prediction error, and this prediction error serves as feedback to modify the brain’s predictions.  Thus, a cognitive architecture like that suggested by PP is predicted to produce consciousness according to the most fundamental axiom of IIT.
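To make this feedback structure a bit more concrete, here is a deliberately minimal sketch (not tied to any particular published model) of the loop just described: a top-down prediction meets a bottom-up signal, and the resulting prediction error feeds back to update the prediction.

```python
import numpy as np

# A minimal sketch of the PP feedback loop: top-down prediction meets
# bottom-up sensory signal, and the error feeds back to update the model.
def predictive_loop(sensory_stream, learning_rate=0.1):
    prediction = 0.0                          # the model's current best guess
    for observation in sensory_stream:
        error = observation - prediction      # bottom-up prediction error
        prediction += learning_rate * error   # top-down model updated by the feedback
        yield prediction, error

rng = np.random.default_rng(0)
stream = 5.0 + 0.5 * rng.standard_normal(50)  # a noisy hidden cause around 5.0
for step, (pred, err) in enumerate(predictive_loop(stream)):
    if step % 10 == 0:
        print(f"step {step:2d}  prediction {pred:5.2f}  error {err:+5.2f}")
# Over time the prediction converges and the error shrinks -- the
# "explaining away" of sensory input described above.
```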

Additionally, PP posits cross-modal sensory features and functionality, where the brain integrates (especially lower-level) predictions spanning various spatio-temporal scales from different sensory modalities into a unified whole.  For example, if I am looking at and petting a black cat lying on my lap and hearing it purr, PP posits that my perceptual experience and contextual understanding of that experience are based on having integrated the visual, tactile, and auditory expectations that I’ve come to associate with the experience of a “black cat”.  Producing a unified experience is contingent on a conjunction of predictions occurring simultaneously, rather than on a barrage of millions or billions of separate causal relations (let alone ones stemming from different sensory modalities) or millions or billions of separate conscious experiences (which would seem to necessitate separate consciousnesses if they were all happening at the same time).

Evolution of Consciousness

Since IIT identifies consciousness with integrated information, it can plausibly account for why consciousness evolved in the first place.  The basic idea here is that a brain capable of integrating information is more likely to exploit and understand an environment with a complex causal structure on multiple time scales than a brain composed of informationally isolated modules.  This idea has been tested and confirmed to some degree in artificial life simulations (animats) where adaptation and integration are both simulated.  The organism in these simulations was a Braitenberg-like vehicle that had to move through a maze.  After 60,000 generations of simulated brains evolving through natural selection, it was found that there was a monotonic relationship between the animats’ ability to get through the maze and the amount of simulated information integration in their brains.

This increase in adaptation was the result of an effective increase in the number of concepts that the organism could make use of, given the limited number of elements and connections possible in its cognitive architecture.  In other words, given a limited number of connections in a causal system (such as a group of neurons), you can pack more functions per element if the level of integration with respect to those connections is high, thus giving an evolutionary advantage to those with higher integration.  Therefore, when all else is equal in terms of neural economy and resources, higher integration gives an organism the ability to take advantage of more regularities in its environment.

From a PP perspective, this makes perfect sense, because the complex causal structure of the environment is described as being modeled at many different levels of abstraction and at many different spatio-temporal scales.  All of these modeled causal relations are also described as having a hierarchical structure, with models contained within models and with many associations existing between various models.  These associations between models can be accounted for by a cognitive architecture that re-uses certain sets of neurons in multiple models, so the association is effectively instantiated by some literal degree of neuronal overlap.  And of course, these associations between multiply-leveled predictions allow the brain to exploit (and create!) as many regularities in the environment as possible.  In short, both PP and IIT make a lot of practical sense from an evolutionary perspective.

That’s All Folks!

And this concludes my post-series on the Predictive Processing (PP) framework and how I see it as being applicable to a far broader account of mentality and brain function than it is generally assumed to be.  If there are any takeaways from this post-series, I hope you can at least appreciate the parsimony and explanatory scope of predictive processing and of viewing the brain as a creative and highly capable prediction engine.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part V)

In the previous post, part 4 in this series on Predictive Processing (PP), I explored some aspects of reasoning and how different forms of reasoning can be built from a foundational bedrock of Bayesian inference (click here for parts 1, 2, or 3).  This has a lot to do with language, but I also claimed that it depends on how the brain is likely generating new models, which I think involves some kind of natural selection operating on neural networks.  The hierarchical structure of the generative models for these predictions, as described within a PP framework, also seems to fit well with the hierarchical structure that we find in the brain’s neural networks.  In this post, I’m going to talk about the relation between memory, imagination, and unconscious and conscious forms of reasoning.

Memory, Imagination, and Reasoning

Memory is of course crucial to the PP framework, whether for constructing real-time predictions of incoming sensory information (for perception) or for long-term predictions involving high-level, increasingly abstract generative models that allow us to accomplish complex future goals (like planning to go grocery shopping, or planning for retirement).  Either case requires the brain to have stored some kind of information pertaining to predicted causal relations.  Rather than memories being some kind of exact copy of past experiences (where they’d be stored like data on a computer), research has shown that memory functions more like a reconstruction of those past experiences, modified by current knowledge and context, and produced by some of the same faculties used in imagination.

This accounts for any false or erroneous aspects of our memories, where the recalled memory can differ substantially from how the original event was experienced.  It also accounts for why our memories become increasingly altered as more time passes.  Over time, we learn new things, continuing to change many of our predictive models about the world, and thus the reconstructive process becomes more involved the older the memories are.  And the context we find ourselves in when trying to recall certain memories further affects this reconstruction process, adapting our memories in some sense to better match what we find most salient and relevant in the present moment.

Conscious vs. Unconscious Processing & Intuitive Reasoning (Intuition)

Another attribute of memory is that it is primarily unconscious: we seem to have this pool of information that is kept out of consciousness until parts of it are needed (during memory recall or other conscious thought processes).  In fact, within the PP framework we can think of most of our generative models (predictions), especially those operating in the lower levels of the hierarchy, as being out of our conscious awareness as well.  However, since our memories are composed of (or reconstructed with) many higher-level predictions, and since only a limited number of them can enter our conscious awareness at any moment, this implies that most of the higher-level predictions are also being maintained or processed unconsciously.

It’s worth noting, however, that when we were first forming these memories, a lot of the information was in our consciousness (the higher-level, more abstract predictions in particular).  Within PP, consciousness plays a special role, since our attention modifies what is called the precision weight (or synaptic gain) on any prediction error that flows upward through the predictive hierarchy.  This means that the prediction errors produced from incoming sensory information, or at even higher levels of processing, are able to have a greater impact on modifying and updating the predictive models.  This makes sense from an evolutionary perspective, where we can ration our cognitive resources in a more adaptable way by allowing things that catch our attention (which may be more important to our survival prospects) to have the greatest effect on how we understand the world around us and how we need to act at any given moment.
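As a rough, hedged illustration of what precision weighting amounts to computationally (again, a toy of my own, not anyone’s official model), the same prediction error can be scaled by a precision term before it updates the model, so that attended errors move the model a lot while unattended errors barely move it at all:

```python
def precision_weighted_update(prediction, observation, precision, learning_rate=0.1):
    # the same error has more or less impact depending on its precision (gain)
    error = observation - prediction
    return prediction + learning_rate * precision * error

print(precision_weighted_update(0.0, 5.0, precision=1.0))   # attended: 0.5 (large update)
print(precision_weighted_update(0.0, 5.0, precision=0.05))  # unattended: 0.025 (tiny update)
```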

The more often we consciously encounter certain predicted causal relations, the more likely those predictions are to become automated or unconsciously processed.  And if this has happened with certain rules of inference that govern how we manipulate and process many of our predictive models, it seems reasonable to suspect that this would contribute to what we call our intuitive reasoning (or intuition).  After all, intuition seems to give people the sense of knowing something without knowing how that knowledge was acquired and without any present conscious process of reasoning.

This is similar to muscle memory or procedural memory (like learning how to ride a bike) which is consciously processed at first (thus involving many parts of the cerebral cortex), but after enough repetition it becomes a faster and more automated process that is accomplished more economically and efficiently by the basal ganglia and cerebellum, parts of the brain that are believed to handle a great deal of unconscious processing like that needed for procedural memory.  This would mean that the predictions associated with these kinds of causal relations begin to function out of our consciousness, even if the same predictive strategy is still in place.

As mentioned above, one difference between this unconscious intuition and other forms of reasoning that operate within the purview of consciousness is that our intuitions are less likely to be updated or changed based on new experiential evidence since our conscious attention isn’t involved in the updating process. This means that the precision weight of upward flowing prediction errors that encounter downward flowing predictions that are operating unconsciously will have little impact in updating those predictions.  Furthermore, the fact that the most automated predictions are often those that we’ve been using for most of our lives, means that they are also likely to have extremely high Bayesian priors, further isolating them from modification.

Some of these priors may become what are called hyperpriors, or priors over priors (many of these believed to be established early in life), where there may be nothing that can overcome them, because they describe an extremely abstract feature of the world.  An example of a possible hyperprior could be one that demands that the brain settle on one generative model even when it’s comparable to several others under consideration.  One could call this a “tie breaker” hyperprior: if the brain didn’t have this kind of predictive mechanism in place, it might never be able to settle on a model, causing it to see the world (or some aspect of it) as a superposition of equiprobable states rather than a single determinate state.  We can see the potential problem for an organism’s survival prospects if it lacked this kind of hyperprior.  Whether a hyperprior like this is a form of innate specificity or is acquired in early learning is debatable.

An obvious trade-off with intuition (or any kind of innate biases) is that it provides us with fast, automated predictions that are robust and likely to be reliable much of the time, but at the expense of not being able to adequately handle more novel or complex situations, thereby leading to fallacious inferences.  Our cognitive biases are also likely related to this kind of unconscious reasoning whereby evolution has naturally selected cognitive strategies that work well for the kind of environment we evolved in (African savanna, jungle, etc.) even at the expense of our not being able to adapt as well culturally or in very artificial situations.

Imagination vs. Perception

One large benefit of storing so much perceptual information in our memories (predictive models with different spatio-temporal scales) is our ability to re-create it offline (so to speak).  This is where imagination comes in: we are able to effectively simulate perceptions without requiring a stream of incoming sensory data that matches them.  Notice however that this is still a form of perception, because we can still see, hear, feel, taste and smell predicted causal relations that have been inferred from past sensory experiences.

The crucial difference, within a PP framework, is the role of precision weighting on the prediction error, just as we saw above in terms of trying to update intuitions.  If precision weighting is set or adjusted to be relatively low with respect to a particular set of predictive models, then prediction error will have little if any impact on the model.  During imagination, we effectively decouple the bottom-up prediction error from the top-down predictions associated with our sensory cortex (by reducing the precision weighting of the prediction error), thus allowing us to intentionally perceive things that aren’t actually in the external world.  We need not decouple the error from the predictions entirely, as we may want our imagination to somehow correlate with what we’re actually perceiving in the external world.  For example, maybe I want to watch a car driving down the street and simply imagine that it is a different color, while still seeing the rest of the scene as I normally would.  In general though, it is this decoupling “knob” that we can turn (precision weighting) that underlies our ability to produce and discriminate between normal perception and our imagination.
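Continuing with the same toy picture from earlier (and with the caveat that this is only a cartoon of the idea, not a claim about the actual mechanism), the decoupling “knob” can be thought of as setting that sensory precision near zero, so the resulting percept is driven almost entirely by the top-down model:

```python
def percept(prediction, observation, sensory_precision):
    # with precision near zero, the "percept" is driven almost entirely top-down
    error = observation - prediction
    return prediction + sensory_precision * error

# normal perception: the percept tracks the world
print(percept(prediction=3.0, observation=7.0, sensory_precision=1.0))   # 7.0
# imagination: sensory error is (mostly) decoupled, the model runs offline
print(percept(prediction=3.0, observation=7.0, sensory_precision=0.0))   # 3.0
```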

So what happens when we lose the ability to control our perception in a normal way (whether consciously or not)?  Well, this usually results in our having some kind of hallucination.  Since perception is often referred to as a form of controlled hallucination (within PP), we could better describe a pathological hallucination (such as one arising from certain psychedelic drugs or a condition like schizophrenia) as a form of uncontrolled hallucination.  In some cases, even with a perfectly normal and healthy brain, when the prediction error simply can’t be minimized enough, or when the brain keeps switching between models based on what we’re looking at, we experience perceptual illusions.

Whether it’s illusions, hallucinations, or any other kind of perceptual pathology (like not being able to recognize faces), PP offers a good explanation for why these kinds of experiences can happen to us.  Either the models themselves are poor (in their causal structure or priors), or something isn’t being controlled properly, like the delicate balance between precision weighting and prediction error, any of which could result from an imbalance in neurotransmitters or some kind of brain damage.

Imagination & Conscious Reasoning

While most people tend to define imagination as that which pertains to visual imagery, I prefer to classify all conscious experiences that do not result directly from online perception as imagination.  In other words, any part of our conscious experience that isn’t stemming from an immediate inference of incoming sensory information is what I consider to be imagination.  This is because any kind of conscious thinking is going to involve an experience that could in theory be re-created by an artificial stream of incoming sensory information (along with the top-down generative models that put that information into a particular context of understanding).  As long as the incoming sensory information were arranged in a particular way (any way that we can imagine!), even if it could never be that way in the actual external world we live in, it seems to me that it should be able to reproduce any conscious process given the right top-down predictive model.  Another way of saying this is that imagination is simply another word for any kind of offline conscious mental simulation.

This also means that I’d classify any and all kinds of conscious reasoning processes as yet another form of imagination.  Just as with more standard conceptions of imagination (within PP at least), we are simply taking particular predictive models and manipulating them in certain ways in order to simulate some result, with this process decoupled (at least in part) from actual incoming sensory information.  We may, for example, apply a rule of inference that we’ve picked up on and manipulate several predictive models of causal relations using that rule.  As mentioned in the previous post and in part 2 of this series, language is also likely to play a special role here, where we’ll likely be using it to help guide this conceptual manipulation process by organizing and further representing the causal relations in a linguistic form, and then determining the resulting inference (which will more than likely be in a linguistic form as well).  In doing so, we are able to take highly abstract properties of causal relations and apply rules to them to extract new information.

If I imagine a purple elephant trumpeting and flying in the air over my house, even though I’ve never experienced such a thing, it seems clear that I’m manipulating several different types of predicted causal relations at varying levels of abstraction and experiencing the result of that manipulation.  This involves inferred causal relations like those pertaining to visual aspects of elephants, the color purple, flying objects, motion in general, houses, the air, and inferred causal relations pertaining to auditory aspects like trumpeting sounds and so forth.

Specific instances of these kinds of experienced causal relations have led to my inferring them as an abstract probabilistically-defined property (e.g. elephantness, purpleness, flyingness, etc.) that can be reused and modified to some degree to produce an infinite number of possible recreated perceptual scenes.  These may not be physically possible perceptual scenes (since elephants don’t have wings to fly, for example) but regardless I’m able to add or subtract, mix and match, and ultimately manipulate properties in countless ways, only limited really by what is logically possible (so I can’t possibly imagine what a square circle would look like).

What if I’m performing a mathematical calculation, like “adding 9 + 9”, or some other similar problem?  This appears (upon first glance at least) to be qualitatively very different from simply imagining things that we tend to perceive in the world, like elephants, books, and music, even if they are imagined in some phantasmagorical way.  As crazy as those imagined things may be, they still contain things like shapes, colors, sounds, etc., and a mathematical calculation seems to lack this.  I think the key thing to realize here is that the fundamental process of imagination is being able to add, subtract, and otherwise manipulate abstract properties in any way that is logically possible (given our current set of predictive models).  This means that we can imagine properties or abstractions that lack all the richness of a typical visual/auditory perceptual scene.

In the case of a mathematical calculation, I would be manipulating previously acquired predicted causal relations that pertain to quantity and changes in quantity.  Once I was old enough to infer that separate objects existed in the world, I could infer an abstraction of how many objects there were in some space at some particular time.  Eventually, I could abstract the property of how many objects without applying it to any particular object at all.  Using language to associate a linguistic symbol with each specific quantity would lay the groundwork for a system of “numbers” (where numbers are just quantities pertaining to no particular object at all).  Once this was done, my brain could use the abstraction of quantity and manipulate it by following certain inferred rules of how quantities can change by adding to or subtracting from them.  After some practice and experience, I would be in a reasonable position to consciously think about “adding 9 + 9”, and either do it by following a manual iterative rule of addition that I’ve learned to apply to real or imagined visual objects (like adding up some number of apples or dots/points in a row or grid), or simply use a memorized addition table and recall the sum I’m interested in (9 + 9 = 18).
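Just to make the two strategies at the end of that paragraph concrete, here is a trivial sketch of both: the slow manual/iterative rule and the memorized lookup table (purely illustrative, of course; the brain isn’t literally running code like this).

```python
# Two toy strategies for "adding 9 + 9": a manual iterative rule versus
# a memorized lookup table.

def add_by_counting(a, b):
    # the "manual iterative rule": count up one unit at a time
    total = a
    for _ in range(b):
        total += 1
    return total

# the "memorized addition table": direct recall, no iteration
addition_table = {(i, j): i + j for i in range(10) for j in range(10)}

print(add_by_counting(9, 9))     # 18, derived step by step
print(addition_table[(9, 9)])    # 18, recalled in one step
```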

Whether we consider imagining a purple elephant, mentally adding up numbers, thinking about what I’m going to say to my wife when I see her next, or trying to explicitly apply logical rules to some set of concepts, all of these forms of conscious thought or reasoning are different sets of predictive models that I’m manipulating in mental simulations until I arrive at a perception that’s understood in the desired context and that has minimal prediction error.

Putting it all together

In summary, I think we can gain a lot of insight by looking at all the different aspects of brain function through a PP framework.  Imagination, perception, memory, intuition, and conscious reasoning fit together very well when viewed as different aspects of hierarchical predictive models that are manipulated and altered in ways that give us a much more firm grip on the world we live in and its inferred causal structure.  Not only that, but this kind of cognitive architecture also provides us with an enormous potential for creativity and intelligence.  In the next post in this series, I’m going to talk about consciousness, specifically theories of consciousness and how they may be viewed through a PP framework.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part IV)

In the previous post which was part 3 in this series (click here for parts 1 and 2) on Predictive Processing (PP), I discussed how the PP framework can be used to adequately account for traditional and scientific notions of knowledge, by treating knowledge as a subset of all the predicted causal relations currently at our brain’s disposal.  This subset of predictions that we tend to call knowledge has the special quality of especially high confidence levels (high Bayesian priors).  Within a scientific context, knowledge tends to have an even stricter definition (and even higher confidence levels) and so we end up with a smaller subset of predictions which have been further verified through comparing them with the inferred predictions of others and by testing them with external means of instrumentation and some agreed upon conventions for analysis.

However, no amount of testing or verification is going to give us direct access to any knowledge per se.  Rather, the creation or discovery of knowledge has to involve the application of some kind of reasoning to explain the causal inputs, and only after this reasoning process can the resulting predicted causal relations be validated to varying degrees by testing them (through raw sensory data, external instruments, etc.).  So getting an adequate account of reasoning within any theory or framework of overall brain function is going to be absolutely crucial, and I think the PP framework is well-suited for the job.  As has already been mentioned throughout this post-series, this framework fundamentally relies on a form of Bayesian inference (or some approximation of it), which is a type of reasoning.  It is this inferential strategy, combined with a hierarchical neurological structure for it to work upon, that allows our knowledge to be created in the first place.

Rules of Inference & Reasoning Based on Hierarchical-Bayesian Prediction Structure, Neuronal Selection, Associations, and Abstraction

While PP tends to focus on perception and action in particular, I’ve mentioned that I see the same general framework as being able to account not only for the folk psychological concepts of beliefs, desires, and emotions, but also for language and ontology, with the hierarchical predictive structure it entails helping to explain the relationship between the two.  It seems reasonable to me that the associations between all of these hierarchically structured beliefs or predicted causal relations, at varying levels of abstraction, can provide a foundation for our reasoning as well, whether in intuitive or logical forms.

To illustrate some of the importance of associations between beliefs, consider an example like the belief in object permanence (i.e. that objects persist or continue to exist even when I can no longer see them).  This belief of ours has an extremely high prior because our entire life experience has only served to support this prediction in a large number of ways.  This means that it’s become embedded or implicit in a number of other beliefs.  If I didn’t predict that object permanence was a feature of my reality, then an enormous number of everyday tasks would become difficult if not impossible to do because objects would be treated as if they are blinking into and out of existence.

We have a large number of beliefs that require object permanence (and which are thus associated with object permanence), so it is a more fundamental, lower-level prediction (though not as low level as sensory information entering the visual cortex), and we build upon this lower-level prediction in any number of higher-level predictions in the overall conceptual/predictive hierarchy.  When I put money in a bank, I expect to be able to spend it even if I can’t see it anymore (such as with a check or debit card).  This is only possible if my money continues to exist even when out of view (regardless of whether the money is in paper/coin or electronic form).  This is just one of countless everyday tasks that depend on this belief.  So it’s no surprise that this belief (this set of predictions) would have an incredibly high Bayesian prior, and therefore I would treat it as a non-negotiable fact about reality.

On the other hand, when I was a newborn infant, I didn’t have this belief of object permanence (or at best, it was a very weak belief).  Most psychologists estimate that our belief in object permanence isn’t acquired until after several months of brain development and experience.  This would translate to our having a relatively low Bayesian prior for this belief early on in our lives, and only once a person begins to form predictions based on these kinds of recognized causal relations can we begin to increase that prior and perhaps eventually reach a point that results in a subjective experience of a high degree of certainty for this particular belief.  From that point on, we are likely to simply take that belief for granted, no longer questioning it.  The most important thing to note here is that the more associations made between beliefs, the higher their effective weighting (their priors), and thus the higher our confidence in those beliefs becomes.
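To illustrate the basic arithmetic behind this claim (with completely made-up numbers, purely for illustration), repeated confirming evidence pushes the probability assigned to a belief like object permanence toward certainty:

```python
# A highly simplified Bayesian update: each confirming encounter raises the
# probability (the "prior" for the next encounter) assigned to the belief.
# The likelihood values are arbitrary illustrative choices.

def bayes_update(prior, likelihood_if_true=0.9, likelihood_if_false=0.5):
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

belief = 0.5   # a "newborn's" weak belief in object permanence
for encounter in range(1, 21):
    belief = bayes_update(belief)
    if encounter % 5 == 0:
        print(f"after {encounter:2d} confirming encounters: P = {belief:.4f}")
# the probability climbs toward 1.0, at which point the belief gets treated
# as a non-negotiable fact about reality
```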

Neural Implementation, Spontaneous or Random Neural Activity & Generative Model Selection

This all seems pretty reasonable if a neuronal implementation worked to strengthen Bayesian priors as a function of neuronal/synaptic connectivity (among other factors), where neurons that fire together are more likely to wire together.  And connectivity strength will increase the more often this happens.  On the flip side, the less often this happens (or if it isn’t happening at all), the more likely the connectivity is to be weakened or non-existent.  So if a concept (or a belief composed of many conceptual relations) is represented by some cluster of interconnected neurons and their activity, then its applicability to other concepts increases its chances of not only firing but also strengthening its wiring with those other clusters of neurons, thus plausibly increasing the Bayesian priors for the overlapping concept or belief.
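Here is a minimal sketch of that “fire together, wire together” idea, with the usual caveat that this is a toy Hebbian rule for illustration rather than a claim about the brain’s actual learning rule:

```python
import numpy as np

# Toy Hebbian rule: co-active units strengthen their connection, while
# connections that rarely get reinforced decay toward zero.

def hebbian_step(weights, activity, lr=0.05, decay=0.01):
    # outer product: weight grows where pre- and post-units are co-active
    return weights + lr * np.outer(activity, activity) - decay * weights

weights = np.zeros((3, 3))
pattern = np.array([1.0, 1.0, 0.0])   # units 0 and 1 repeatedly co-fire
for _ in range(100):
    weights = hebbian_step(weights, pattern)

print(np.round(weights, 2))
# the 0<->1 connections end up strong; connections involving unit 2 stay near zero
```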

Another likely important factor in the Bayesian inferential process, in terms of the brain forming new generative models or predictive hypotheses to test, is the role of spontaneous or random neural activity and neural cluster generation.  This random neural activity could plausibly provide a means for some randomly generated predictions or random changes in the pool of predictive models that our brain is able to select from.  Similar to the role of random mutation in gene pools which allows for differential reproductive rates and relative fitness of offspring, some amount of randomness in neural activity and the generative models that result would allow for improved models to be naturally selected based on those which best minimize prediction error.  The ability to minimize prediction error could be seen as a direct measure of the fitness of the generative model, within this evolutionary landscape.

This idea is related to the late Gerald Edelman’s Theory of Neuronal Group Selection (NGS), also known as Neural Darwinism, which I briefly explored in a post I wrote long ago.  I’ve long believed that this kind of natural selection process is applicable to a number of different domains (aside from genetics), and I think any viable version of PP is going to depend on it to at least some degree.  This random neural activity (and the naturally selected products derived from it) could be thought of as contributing a steady supply of new generative models to choose from, and thus contributing to our overall human creativity as well, whether for reasoning and problem-solving strategies or simply for artistic expression.
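As a rough sketch of what this selection process could look like in the abstract (a toy evolutionary loop of my own construction, not an implementation of NGS or of any specific PP proposal), candidate “models” can be randomly varied and then selected according to how well they minimize prediction error on the incoming data:

```python
import numpy as np

# Toy "selection of generative models": each model is just a constant guess
# about a hidden cause; fitness is its (negative) prediction error.

rng = np.random.default_rng(1)
data = 5.0 + 0.5 * rng.standard_normal(200)   # hidden cause around 5.0

def fitness(model):
    return -np.mean((data - model) ** 2)      # better models minimize error

population = rng.uniform(-10, 10, size=20)    # random initial "models"
for generation in range(30):
    # keep the best half, replace the rest with randomly mutated copies
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    mutants = [m + rng.normal(0, 0.5) for m in survivors]
    population = np.array(survivors + mutants)

best = max(population, key=fitness)
print(f"best surviving model predicts ~{best:.2f}")   # converges near 5.0
```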

Increasing Abstraction, Language, & New Rules of Inference

This kind of “use it or lose it” property of brain plasticity, combined with dynamic associations between concepts or beliefs and their underlying predictive structure, would allow the brain to accommodate learning by extracting statistical inferences (at increasing levels of abstraction) as they occur and by modifying or eliminating those inferences, changing their hierarchical associative structure as prediction error is encountered.  While some form of Bayesian inference (or an approximation to it) underlies this process, once lower-level inferences about certain causal relations have been made, I believe that new rules of inference can be derived from this basic Bayesian foundation.

To see how this might work, consider how we acquire a skill like learning how to speak and write in some particular language.  The rules of grammar, the syntactic structure and so forth which underlie any particular language are learned through use.  We begin to associate words with certain conceptual structures (see part 2 of this post-series for more details on language and ontology) and then we build up the length and complexity of our linguistic expressions by adding concepts built on higher levels of abstraction.  To maximize the productivity and specificity of our expressions, we also learn more complex rules pertaining to the order in which we speak or write various combinations of words (which varies from language to language).

These grammatical rules can be thought of as just another higher-level abstraction, another higher-level causal relation that we predict will convey more specific information to whomever we are speaking to.  If it doesn’t seem to do so, then we either modify what we have mistakenly inferred to be those grammatical rules, or, depending on the context, we may simply assume that the person we’re talking to hasn’t conformed to the language or grammar that our community seems to be using.

Just like with grammar (which provides a kind of logical structure to our language), we can begin to learn new rules of inference built on the same probabilistic predictive bedrock of Bayesian inference.  We can learn some of these rules explicitly by studying logic, induction, deduction, etc., and consciously applying those rules to infer some new piece of knowledge, or we can learn these kinds of rules implicitly based on successful predictions (pertaining to behaviors of varying complexity) that happen to result from stumbling upon this method of processing causal relations within various contexts.  As mentioned earlier, this would be accomplished in part by the natural selection of randomly-generated neural network changes that best reduce the incoming prediction error.

However, language and grammar are interesting examples of an acquired set of rules because they also happen to be the primary tool we use to learn other rules (along with anything we learn through verbal or written instruction), including (as far as I can tell) various rules of inference.  The logical structure of language (though it need not have an exclusively logical structure), its ability to be used for a number of cognitive short-cuts, and its influence on the complexity and structure of our thought mean that we are likely dependent on it during our reasoning processes as well.

When we perform any kind of conscious reasoning process, we are effectively running various mental simulations where we can intentionally manipulate our various generative models to test new predictions (new models) at varying levels of abstraction, and thus we also manipulate the linguistic structure associated with those generative models as well.  Since I have a lot more to say on reasoning as it relates to PP, including more on intuitive reasoning in particular, I’m going to expand on this further in my next post in this series, part 5.  I’ll also be exploring imagination and memory including how they relate to the processes of reasoning.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part III)

In the first post in this series I re-introduced the basic concepts underlying the Predictive Processing (PP) theory of perception, in particular how the brain uses a form of active Bayesian inference to form predictive models that effectively account for perception as well as action.  I also mentioned how the folk psychological concepts of beliefs, desires and emotions can fit within the PP framework.  In the second post in this series I expanded the domain of PP a bit to show how it relates to language and ontology, and how we perceive the world to be structured as discrete objects, objects with “fuzzy boundaries”, and other more complex concepts, all stemming from particular predicted causal relations.

It’s important to note that this PP framework differs from classical computational frameworks for brain function in a very big way, because the processing and learning steps are no longer considered separate stages within the PP framework, but rather work at the same time (learning is effectively occurring all the time).  Classical computational frameworks also treat the brain more or less like a computer, which I think is very misguided, and the PP framework offers a much better alternative that is far more creative, economical, efficient, parsimonious and pragmatic.  The PP framework holds the brain to be a probabilistic system rather than the deterministic computational system held in classical computationalist views.  Furthermore, the PP framework puts a far greater emphasis on the brain using feedback loops, whereas traditional computational approaches tend to suggest that the brain is primarily a feed-forward information processing system.

Rather than treating the brain like a generic information processing system that passively waits for new incoming data from the outside world, PP describes a brain that stays one step ahead of the game, having formed predictions about the incoming sensory data through a very active and creative learning process built up from past experiences through a form of active Bayesian inference.  Rather than utilizing some kind of serial processing scheme, this involves primarily parallel neuronal processing schemes, resulting in predictions that have a deeply embedded hierarchical structure and relationship with one another.  In this post, I’d like to explore how traditional and scientific notions of knowledge can fit within a PP framework as well.

Knowledge as a Subset of Predicted Causal Relations

It has already been mentioned that, within a PP lens, our ontology can be seen as basically composed of different sets of highly differentiated, probabilistic causal relations.  This allows us to discriminate one object or concept from another, as they are each “composed of” or understood by the brain as causes that can be “explained away” by different sets of predictions.  Beliefs are just another word for predictions, with higher-level beliefs (including those that pertain to very specific contexts) consisting of a conjunction of a number of different lower level predictions.  When we come to “know” something then, it really is just a matter of inferring some causal relation or set of causal relations about a particular aspect of our experience.

Knowledge is often described by philosophers using various forms of the Platonic definition: Knowledge = Justified True Belief.  I’ve talked about this to some degree in previous posts (here, here) and I came up with what I think is a much better working definition for knowledge, which could be stated as follows:

Knowledge consists of recognized patterns of causality that are stored into memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by empirical evidence (i.e. successful predictions made and/or goals accomplished through the use of said recalled patterns).

We should see right off the bat how this particular view of knowledge fits right in to the PP framework, where knowledge consists of predictions of causal relations, and the brain is in the very business of making predictions of causal relations.  Notice my little caveat however, where these causal relations should positively and consistently correlate with “reality” and be supported by empirical evidence.  This is because I wanted to distinguish all predicted causal relations (including those that stem from hallucinations or unreliable inferences) from the subset of predicted causal relations that we have a relatively high degree of certainty in.

In other words, if we are in the game of trying to establish a more reliable epistemology, we want to distinguish between all beliefs and the subset of beliefs that have a very high likelihood of being true.  This distinction however is only useful for organizing our thoughts and claims based on our level of confidence in their truth status.  And for all beliefs, regardless of the level of certainty, the “empirical evidence” requirement in my definition given above is still going to be met in some sense because the incoming sensory data is the empirical evidence (the causes) that support the brain’s predictions (of those causes).

Objective or Scientific Knowledge

Within a domain like science, however, where we want to increase the reliability or objectivity of our predictions pertaining to any number of inferred causal relations in the world, we need to extend this same internalized strategy of modifying the confidence levels of our predictions by comparing them to the predictions of others (third-party verification) and by testing these predictions with externally accessible instrumentation using some set of conventions or standards (including the scientific method).

Knowledge then, is largely dependent on or related to our confidence levels in our various beliefs.  And our confidence level in the truth status of a belief is just another way of saying how probable such a belief is to explain away a certain set of causal relations, which is equivalent to our brain’s Bayesian prior probabilities (our “priors”) that characterize any particular set of predictions.  This means that I would need a lot of strong sensory evidence to overcome a belief with high Bayesian priors, not least because it is likely to be associated with a large number of other beliefs.  This association between different predictions seems to me to be a crucial component not only for knowledge generally (or ontology as mentioned in the last post), but also for our reasoning processes.  In the next post of this series, I’m going to expand a bit on reasoning and how I view it through a PP framework.