Irrational Man: An Analysis (Part 3, Chapter 10: Sartre)

In the previous post from this series on William Barrett’s Irrational Man, I delved into some of Martin Heidegger’s work, particularly his philosophy as represented in his most seminal work, Being and Time.  Jean-Paul Sartre was heavily influenced by Heidegger and so it’s fortunate for us that Barrett explores him in the very next chapter.  As such, Sartre will be the person of interest in this post.

To get a better sense of Sartre’s philosophy, we should start by considering what life was like in the French Resistance from 1940-45.  Sartre’s idea of human freedom from an existentialist perspective is made clear in his The Republic of Silence where he describes the mode of human existence for the French during the Nazi German occupation of France:

“…And because of all this we were free.  Because the Nazi venom seeped into our thoughts, every accurate thought was a conquest.  Because an all-powerful police tried to force us to hold our tongues, every word took on the value of a declaration of principles.  Because we were hunted down, every one of our gestures had the weight of a solemn commitment…”

I think we can interpret this as describing the brute authenticity that the most salient features of our existence can take on in the face of severe oppression.  Living knee-deep in an oppressive atmosphere with little if any libertarian freedom, one may be forced to reevaluate what they consider most important; they may exercise far more care in deciding what to think, say, or do at any given time; and all of this may make that which is thought, said, or done far more meaningful, free of the noise and superficiality that usually accompany our day-to-day goings-on.  Putting it this way is not to diminish the heinousness of any kind of violent oppression, but merely to point out that oppression can make one consider every part of their existence with far more care and concern, taking absolutely nothing for granted while living in such a precarious position.

And when standing in the face of death day after day, not only is the radical contingency of human existence made more clear than ever, but so is that which is intrinsically most valuable to us:

“Exile, captivity, and especially death (which we usually shrink from facing at all in happier days) became for us the habitual objects of our concern…And the choice that each of us made of his life was an authentic choice because it was made face to face with death, because it could always have been expressed in these terms: “Rather death than…”

This is certainly a powerful way of defining authenticity, or at least setting its upper bound: to say that you'd rather die than abstain from some particular thought, speech, or action is to declare what's truly most meaningful to you, given your current knowledge and the conditions of your own existence.  And this is true even if the thoughts or actions in question are immoral or uninformed, for what matters here is only that they were chosen with an unparalleled degree of care and personal significance.

While all of this was going on, the French government and the bourgeoisie seemed powerless, irresolute, and incapable of any significant recourse:

“…’Les salauds’ became a potent term for Sartre in those days-the salauds, the stinkers, the stuffy and self-righteous people congealed in the insincerity of their virtues and vices.  This atmosphere of decay breathes through Sartre’s first novel, Nausea, and it is no accident that the quotation on the flyleaf is from Celine, the poet of the abyss, of the nihilism and disgust of that period.  The nausea in Sartre’s book is the nausea of existence itself; and to those who are ready to use this as an excuse for tossing out the whole of Sartrian philosophy, we may point out that it is better to encounter one’s existence in disgust than never to encounter it at all-as the salaud in his academic or bourgeois or party-leader strait jacket never does.”

The sheer lack of personal commitment and passion plaguing so many would certainly help to reinforce a feeling of disgust, or even propagate feelings of nihilism and helplessness.  And the kind of superficiality that Sartre perceived around him was (and of course, still is) in so many ways the norm; it's the general way that people conduct themselves, not only socially but even personally, in the sense that one's view of oneself and who one ought to be is heavily imposed from without by collective norms and expectations.

“The essential freedom, the ultimate and final freedom that cannot be taken from a man, is to say No.  This is the basic premise in Sartre’s view of human freedom: freedom is in its very essence negative, though this negativity is also creative.”

This is an interesting conception, thinking of the essence of freedom as having a negative character, though perhaps we can analogize it with the general concept of negative liberty (freedom from interference by others) as opposed to positive liberty (having the means and freedom to achieve one's goals).  Since there are a number of contingencies that can prevent positive liberty from being realized, negative liberty (which could include the freedom to say 'No') seems to be the more fundamental or basic of the two.  It also makes sense to think of this as the final freedom, as Sartre does, since physical force and contingency can take away anyone's ability to do all else except say 'No', or at the very least think 'No' (in the event that one is coerced into saying 'Yes').  In other words, one can have any number of freedoms taken away, but nobody can take away your freedom to desire one outcome over another, even if that desire can never be satisfied.

At this point it’s also worth mentioning that Sartre’s writings seem to reflect conflicting views on what constitutes human freedom.  His views on freedom may have changed over the course of his written works or he may just have had two different conceptions in mind.  On the one hand, in his earlier works, he seemed to refer to freedom in an ontological sense, as a property of human consciousness.  In this conception, he seemed to view freedom as the ability of a conscious agent to choose the attitude they hold and how they react toward the circumstances they find themselves in.  This would mean that everybody, even a prisoner held captive, is free insofar as they have the freedom to choose if they’ll accept or resist their situation.  Later on, Sartre began to focus on material freedom, that is, freedom from coercion.  In any case, both of these conceptions of freedom still seem to resonate with the concept of negative liberty mentioned earlier.  The only distinction would be that ontological freedom is the one form of negative liberty that is truly inalienable and final:

“Where all the avenues of action are blocked for a man, this freedom may seem a tiny and unimportant thing; but it is in fact total and absolute, and Sartre is right to insist upon it as such, for it affords man his final dignity, that of being man.”

Along with this equation of human freedom with consciousness (or at least with conscious will) comes the consideration that consciousness is the only epistemological certainty we have:

“He (Descartes) proposes to reject all beliefs so long as they can in any way be doubted, to resist all temptations to say Yes until his understanding is convinced according to its own light; so he rejects belief in the existence of an external world, of minds other than his own, of his own body, of his memories and sensations.  What he cannot doubt is his own consciousness, for to doubt is to be conscious, and therefore by doubting its existence he would affirm it.”

Sartre takes this Cartesian position to the nth degree, by constraining his interpretation of human beings with his interpretation of Cartesian skepticism:

“But before this certitude shone for him (Descartes), (and even after it, before he passed on to other truths), he was a nothingness, a negativity, existing outside of nature and history, for he had temporarily abolished all belief in a world of bodies and memories.  Thus man cannot be interpreted, Sartre says, as a solid substantial thing existing amid the plenitude of things that make up a world; he is beyond nature because in his negative capability he transcends it.  Man’s freedom is to say No, and this means that he is the being by whom nothingness comes into being.”

To conclude that one is in fact a nothingness, even within the context of Cartesian doubt is, I think, going a bit too far.  While I can agree that consciousness in many ways transcends physical objects and the kind of essence we ascribe to them, it doesn’t transcend existence itself; and what is a nothingness really other than a lack of existence?

And one can think of consciousness as having some attribute of negativity, in the sense that we can think of things and even think of ourselves as being not this or not that; we can think of our desires in terms of what we don’t want; but I think this quality of negation can only serve to differentiate our conscious will (or perhaps ego) from the rest of our experience.  Since our consciousness can’t negate itself, I don’t think that this quality can negate us such that we become an actual nothingness.

“For Sartre there is no unalterable structure of essences or values given prior to man’s own existence.  That existence has meaning, finally, only as the liberty to say No, and by saying No to create a world.  If we remove God from the picture, the liberty which reveals itself in the Cartesian doubt is total and absolute; but thereby also the more anguished, and this anguish is the irreducible destiny and dignity of man.”

And here we see that common element within existentialism: the idea that existence precedes essence, where human beings are thought to have no inherent essence but only an essence that is derived from one’s own existence and one’s own consciousness.  Going back to Sartre’s conception of human freedom, perhaps the idea of saying No to create a world is grounded on the principle of our setting boundaries or limits to the world we make for ourselves; an idea that I can certainly get behind.

However, even if we remove God from the equation, which is certainly justified from the perspective of Cartesian skepticism, not to mention the lack of empirical evidence to support such a belief, we still can't get around our having some essential human character or qualities (even if no God existed to conceive of such an essence).  I agree that of the possible lives you or I could have, it shouldn't be up to others to choose which life that should be, nor what its ultimate meaning or purpose is.  But if we agree that there are only a finite number of possible lives to choose from (individually, and collectively as a species), given our constraints as finite beings (a thoroughly existential claim at that), then I think we can say that we do have an inherent essence, one defined by these biological and physical limitations.

So I do share Sartre’s view that our lives shouldn’t be defined by others, by social norms, nor defined by qualities found in only some of the possible lives one has access to; but it needs to be said that there are still only a finite number of possible life trajectories which will necessarily exclude some of the possibilities given to other types of organisms, and which will exclude any options that are not physically or logically possible.  In other words, I think human beings do have an inherent essence that is defined to some degree by the possibility space we as a species exist within; we just tend to think of essence as something that has to be well-defined, like some short list of properties or functions (which I think is too limited a view).

“Thus Sartre ends by allotting to man the kind of freedom that Descartes has ascribed only to God.  It is, he says, the freedom Descartes secretly would have given to man had he not been limited by the theological convictions of his time and place.”

And this is certainly a valid point.  What's interesting to me about the whole Cartesian experiment is that Descartes began by doubting the external world, and only by introducing an ad hoc conception of a deity, indeed a benevolent deity, could he trust that his senses weren't deceiving him.  This is despite the fact that the belief in God would be just as likely to be a deception as any other belief (a problem Descartes attempted to solve through invalid circular reasoning).

More to the point however is the fact that we wouldn’t be able to tell the difference between a “real” external world and one that is imagined or illusory (unbeknownst to us) anyway, and so we might as well simply stick with the fewest assumptions that yield the best predictions of future experiences.  And this includes the assumption of our having some degree of human freedom that directs our conscious will and desires; a view of our having some control over what trajectory we take in life:

“He is not subordinate to a realm of essences: rather, He creates essences and causes them to be what they are.  Hence such a God transcends the laws of logic and mathematics.  As His (God’s) existence precedes all essences, so man’s existence precedes his essence: he exists, and out of the free project which his existence can be he makes himself what he is.”

It's worth considering the psychological motivations for positing and defining a God with qualities reminiscent of human creativity and teleological intentionality; that is, we often project our capacity for creation and goal-directedness onto some idealized being that accounts for our existence, in the same way that we see ourselves as accounting for the existence of any and all human constructs, be they watches, paintings, or entire cities.  We see ourselves giving purposes and functional descriptions to the things we make, and then can't help but ask what our own purpose may be; and we feel inclined to assume it must be someone other than ourselves who gives us this purpose, finally arriving at a God-concept of some kind.  And lo and behold, throughout history the gods of the world's religions have been remarkably human, with their anger, jealousy, pettiness, impulsiveness, and retributive sense of justice, illustrating that gods are made in man's image rather than the contrary.  But this kind of thinking only diminishes our own degree of personal freedom, bestowing it on a concept instead (whether on "society" or on a "deity"), and thus it abstracts away our own individuality and responsibility.

Despite the God concept permeating much of human history, we eventually challenged it, most forcefully in the wake of the Enlightenment, causing many of us to discover the freedom we had all along:

“When God dies, man takes the place of God.  Such had been the prophecy of Dostoevski and Nietzsche, and Sartre on this point is their heir.  The difference, however, is that Dostoevski and Nietzsche were frenzied prophets, whereas Sartre advances his view with all the lucidity of Cartesian reason and advances it, moreover, as a basis for humanitarian and democratic social action.  To put man in the place of God may seem, to traditionalists, an unspeakable piece of diabolism; but in Sartre’s case it is done by a thinker who, to judge from his writings, is a man of overwhelming good will and generosity.”

And this idea of human beings taking the place of God is really nothing more than the realization that we had projected qualities that were ours from the very beginning.  This may make some people uncomfortable because they think that by eliminating the God concept, humans are somehow seeing themselves as gods; but we still have the same limitations human beings have always had, with our limited knowledge and our existence in a world with many forces beyond our control.  What we gain is an added sense of freedom that further empowers us as individuals to do the most we can with our lives and to give our lives meaning on our own terms, rather than having it externally imposed on us (see the previous post on Heidegger, and the concept of Das Man or the "They" self).  Sartre's appreciation and promotion of this idea is admirable to say the least, as it seeks to value human beings over ideas, to value consciousness over abstractions, and to ensure that each person has a voice they can call their own.

1.  Being-for-itself and Being-in-itself

“Being-for-itself (coextensive with the realm of consciousness) is perpetually beyond itself.  Our thought goes beyond itself, toward tomorrow or yesterday, and toward the outer edges of the world.  Human existence is thus a perpetual self-transcendence: in existing we are always beyond ourselves.  Consequently we never possess our being as we possess a thing.”

This definitely resonates with Heidegger's conception of temporality, where our understanding of our world is constrained by our past experiences and is always projecting into the future, in our perpetual striving to become a certain kind of person (even if this vision of our future self changes over time).  And here, Sartre's conception of Being-for-itself is meant to be contrasted with his concept of Being-in-itself, where the former represents consciousness and the latter non-conscious material objects.

As mentioned in my last post, I agree that there's a temporally transcendent quality to consciousness, and I think it's best explained by the fact that our perception of the world operates under a schema of prediction: our brains are always trying to predict what will happen next in terms of our sensations, and these predictions are what we actually experience (as opposed to the raw sensory information itself), a kind of controlled hallucination that evolves its modeling in response to any prediction error encountered.  We try to make these perceptual predictions come true and reduce the prediction error either by changing our mental models of the world's causal structure to better account for the incoming sensory information, or through a series of actions in which we manipulate the world in some way or another.
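
Since this predictive schema is essentially an error-correcting loop, a toy sketch may help make it concrete.  Everything here is my own deliberately crude assumption, not a model of the brain: the "world" is a single noisy scalar, and the agent reduces prediction error either by revising its internal estimate (perception) or by acting on the world to make its prediction come true (action).

```python
# A toy sketch of prediction-error minimization ("predictive processing").
# Illustrative caricature only: the environment is one drifting number, and
# the agent shrinks prediction error in one of two ways each step.

import random

world = 10.0          # hidden state of the environment
belief = 0.0          # the agent's internal model (its current prediction)
LEARNING_RATE = 0.3   # how quickly perception revises the model
ACTION_GAIN = 0.1     # how strongly action pushes the world toward the model

for step in range(20):
    sensation = world + random.gauss(0, 0.5)  # noisy sensory input
    error = sensation - belief                # prediction error

    if abs(error) > 1.0:
        # Large error: revise the model to fit the world (perceptual inference).
        belief += LEARNING_RATE * error
    else:
        # Small error: act on the world so the prediction comes true (action).
        world -= ACTION_GAIN * error

    print(f"step {step:2d}  belief={belief:6.2f}  world={world:6.2f}  error={error:+.2f}")
```

Under these assumptions the loop converges: big surprises update the model, and once the model is roughly right, action takes over to keep the prediction true.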

“This notion of the For-itself may seem obscure, but we encounter it on the most ordinary occasions.  I have been to a party; I come away, and with a momentary pang of sadness I say, “I am not myself.”  It is necessary to take this proposition quite literally as something that only man can say of himself, because only man can say it to himself.  I have the feeling of coming to myself after having lost or mislaid my being momentarily in a social encounter that estranged me from myself.  This is the first and immediate level on which the term yields its meaning.”

And here’s where the For-itself concept shows the distinction between our individual self (what Heidegger would likely call our authentic self) and the externalized self that is more or less a product of social norms and expectations stemming from the human collective.

“But the next and deeper level of meaning occurs when the feeling of sadness leads me to think in a spirit of self-reproach that I am not myself in a still more fundamental sense: I have not realized so many of the plans or projects that make up my being; I am not myself because I do not measure up to myself. “

So aside from what society expects of us, we also have our own expectations and goals, many of which never come to fruition.  Sartre sees this as a kind of comparison in which we evaluate our current view of ourselves against the more idealized view of ourselves that we're constantly striving for.  And we can see this anytime we think or say something like "I shouldn't have done that because I'm better than that…"  We're obviously not better than ourselves, but because we're always comparing our present perspective to the ideal model we hold of ourselves, we in some sense transcend any present sense of self; and this view allows us to better understand Sartre's idea that we're always existing beyond ourselves.

Barrett describes Sartre’s view in more detail:

“Beneath this level too there is still another and deeper meaning, rooted in the very nature of my being: I am not myself, and I can never be myself, because my being stretching out beyond itself at any given moment exceeds itself.  I am always simultaneously more and less than I am.  Herein lies the fundamental uneasiness, or anxiety, of the human condition, for Sartre.  Because we are perpetually flitting beyond ourselves, or falling behind our possibilities, we seek to ground our existence, to make it more secure.”

We might even say that we conjure up a kind of essence of who we are because we’re trying to solidify our existence such that we’re more or less like the objects we interact with in our world.  And we do this through our imagination by projecting our identity into the future, effectively defining ourselves in terms of the person we want to become rather than the person we are now.  However, our imagination is both a blessing and a curse, for it is through imagination that we express our creativity, simulating new worlds for ourselves that we can choose from; but, our imagination is also a means of comparing our current state of affairs to some ideal model, in many cases showing us what we don’t have (yet want) and showing us a number of lost opportunities which can further constrain our future space of possibilities.  Whether we love it or hate it, our capacity for imagination is both the cause and the cure for our suffering; and it is a capacity that is fundamental to our existence as human beings.

“The For-itself struggles to become the In-itself, to attain the rocklike and unshakable solidity of a thing.  But this it can never do so long as it is conscious and alive.  Man is doomed to the radical insecurity and contingency of his being; for without it he would not be man but merely a thing and would not have the human capacity for transcendence of his given situation.”

It seems then that our projection into the future, equating our self-hood with some preconceived future self, not only serves as an attempt to objectify our subjectivity, but it may also be psychologically motivated by the uncertainty that we recognize in both our short-term and long-term lives.  We feel uncomfortable and anxious about the contingency of our lives and the sheer number of possible future life trajectories, and so we conceptualize one path as if it were a fixed object and simply refer to it as “me”.

“With enormous ingenuity and virtuosity Sartre interweaves these two notions-Being-in-itself and Being-for-itself-to elucidate the complexities of human psychology.”

Sartre’s conception of Being-in-itself and Being-for-itself are largely grounded on the distinction between consciousness and that which is not conscious.  On the surface, it looks like merely another way of expressing the distinction between mind and matter, but the distinction is far more nuanced and complex.  Being-for-itself is defined by its relation to Being-in-itself, where, for Sartre, Being-for-itself is, among other things, in a constant state of want for synthesis with Being-in-itself, despite the fact that Being-for-itself involves a kind of knowledge that it is not in-itself.  Being-for-itself is in a constant state of incompleteness, where a lack of some kind permeates its mode of being; it’s constantly aware of what it is not, and whatever it may be, lacking any essential structure to build off of, is created out of nothingness.  In other words, Being-for-itself creates its own being with a blank canvas.

As I mentioned above, I don't particularly agree with Sartre's conception here, that we as conscious human beings create our being or our essence on a blank canvas.  There are (to stick with the artistic analogy) a limited number of ways the canvas can be "painted" and a limited number of existential "colors" to work with; and while the future may be undetermined or unpredictable to some degree, this doesn't mean that any possible future that comes to mind can come into being.  As human beings that have evolved from other forms of life on this planet, we have a number of innate or biologically constrained predispositions, including our instincts and basic behavioral drives, the kinds of conditioning that actually work to modify our behavior, and a psychology that can only thrive within a finite range of behavioral and environmental conditions.

I see this "blank canvas" analogy as just another version of the blank slate or tabula rasa model of human knowledge within the social sciences.  But this model has been falsified by findings in behavioral genetics, psychology, and neurobiology, where various forms of evidence have shown that human beings are pre-wired for social interaction and have genetically dependent capacities such as intelligence, memory, and reason.  And even taking into account the complicated relation between genes and environment, where the expression of the former depends on the latter, we still face environmental conditions that are finite, contingent, externally imposed, and which drastically shape our behavior whether we realize it or not.

On top of this, as I’ve argued elsewhere, we don’t have a libertarian form of free will either, which means that if we could go back in time to some set of initial conditions (where we had made a conscious choice of some kind, and perhaps a significant life altering choice at that), we wouldn’t be able to choose an alternative course of action, barring any quantum randomness.  This doesn’t mean that we can’t still make use of some compatibilist conceptions of free will, nor does it mean that we shouldn’t be held accountable for our actions, as these are all important for maintaining effective behavioral conditioning; but it does mean that we don’t have the kind of blank canvas that Sartre had envisioned.

The role that nothingness plays in Sartre’s philosophy goes beyond the incompleteness resulting from a self that is in some sense defined by our projection into the future.

“The Self, indeed, is in Sartre’s treatment, as in Buddhism, a bubble, and a bubble has nothing at its center. (but not nihilism)…For Sartre, on the other hand, the nothingness of the Self is the basis for the will to action: the bubble is empty and will collapse, and so what is left us but the energy and passion to spin that bubble out?”

I can certainly see some parallels between Buddhism and Sartre's conception of the Self: for example, the fact that we lack a persistent identity, having instead a mode of being that's dynamic and based on a projection into an uncertain future, is analogous to (or at least consistent with) the Buddhist conception of the Self as a kind of illusion.  I also tend to agree with the basic idea that we don't have any ultimate control or ownership over our thoughts and feelings, even if we feel like we do from time to time, and that our experience seems to be just one infinitesimal, fleeting moment after another.  Our attention and behavior are absorbed and shaped by the world around us, and it's only when we fixate on or identify ourselves with a concept, a thought, or a feeling, or some combination of these elements, that we conjure up a seemingly objective and essential sense of Self.  And while Sartre's philosophy appears to grant us a greater level of free will and human freedom than Buddhist thought does, it's this essential self, this eternal self or identity, interrelated with the concept of a permanent soul, that both Sartre and Buddhism reject.

“Man’s existence is absurd in the midst of a cosmos that knows him not; the only meaning he can give himself is through the free project that he launches out of his own nothingness.”

This is one of the core ideas in existentialism, and in Sartre's own work, that I most appreciate: the idea that we have to give our own lives their meaning and purpose, as opposed to having it externally imposed on us by a religion, a deity (whether real or imagined), a culture, or any other source aside from ourselves, if we're to avoid submitting the core of our being to some kind of authoritarian tyranny.

Human beings were not designed for some particular purpose, aside from natural selection having “designed” us for survival, and perhaps a thermodynamic argument could be made that all life serves the “purpose” of generating more entropy than if our planet had no life at all (which means that life actually speeds up the inevitable heat death of the universe).  But, because our intelligence and the kind of consciousness we’ve gained through evolution grants us a kind of open-ended creativity and imagination, with technology and cultural or memetic evolution effectively superseding our “selfish genes”, we no longer have to live with survival as our primary directive.  We now have the power to realize that our existence transcends our evolutionary past, and we have the ability to define ourselves based on our primary goals in life, whatever they may be, given the constraints of the world we find ourselves born into or embedded within at the present moment.  And it’s this transcendent quality that we find in a lot of Sartre’s work.

“He (Sartre) misses the very root of all of Heidegger’s thinking, which is Being itself.  There is, in Sartre, Being-for-itself and Being-in-itself but there is no Being…Sartre has advanced as the fundamental thesis of his Existentialism the proposition that existence precedes essence.  This thesis is true for Heidegger as well, in the historical, social, and biographical sense that man comes into existence and makes himself to be what he is.  But for Heidegger another proposition is even more basic than this: namely, Being precedes existence.”

In contrast with Sartre then, with Heidegger we see an ontological requirement for Being itself, prior to even having the possibility for Sartre’s distinction between Being-for-itself and Being-in-itself.  That is to say, these kinds of distinctions would, for Heidegger at least, require a more basic field of Being within which one may be able to posit distinctive types of Being.  It would seem that some ultimate region of Being would be necessary in order for the transcendent quality of consciousness to manifest in any way whatsoever:

“To be sure, Sartre has gone a considerable step beyond Descartes by making the essence of human consciousness to be transcendence: that is, to be conscious is, immediately and as such, to point beyond that isolated act of consciousness and therefore to be beyond or above it…But this step forward by Sartre is not so considerable if the transcending subject has nowhere to transcend himself: if there is not an open field or region of Being in which the fateful dualism of subject and object ceases to be.”

One might also wonder how this all fits in with the question of how a conscious subject can ever truly know an object in its entirety.  For example, Kant believed that any object in itself, which he referred to more generally as the noumenon (as opposed to the phenomenon, the object as we perceive it), was inherently unknowable.  Nietzsche thought that it didn’t matter whether or not we could know the object in itself, if such an inaccessible realm of knowledge pertaining to any object even existed in the first place.  Rather, he thought the only thing that mattered was our ability to master the object, to manipulate it and use it according to our own will (the will to power).  For Sartre, the only thing that really mattered was the will to action, which was primarily a will to some form of revolutionary action, so he doesn’t really address the problem of knowledge with respect to the subject truly knowing some object or other (nor does he specify whether or not there even is any problem, at least within his seminal work Being and Nothingness).

Heidegger was less concerned with this issue of knowledge and was more concerned with grounding the possibility of there being a subject or object in the first place:

“For what Heidegger proposes is a more basic question than that of Descartes and Kant: namely, how is it possible for the subject to be? and for the object to be?  And his answer is: Because both stand out in the truth, or un-hiddenness, of Being.  This notion of truth of Being is absent from the philosophy of Sartre; indeed, nowhere in his vast Being and Nothingness does he deal with the problem of truth in a radical and existential way: so far as he understands truth at all, he takes it in the ordinary intellectualistic sense that has been traditional with non-existential philosophers.  In the end (as well as at his very beginning) Sartre turns out thus to be a Cartesian rationalist-one, to be sure, whose material is impassioned and existential, but for all that not any less a Cartesian in his ultimate dualism between the For-itself and the In-itself.”

I don't think it's fair to label Sartre a Cartesian rationalist as Barrett does here, since Sartre seems to actually depart from Descartes' form of rationalism when it comes to evaluating the overall structure of consciousness.  And there doesn't seem to be any ultimate dualism between Being-for-itself and Being-in-itself either, if the former is instantiated or experienced as a kind of negation of the latter.  Perhaps if Sartre had used the terms Being-in-itself and not-Being-in-itself, this would be clearer.  In any case, it's not as if these two instantiations of being need be separate substances, especially if, as Sartre says in Being and Nothingness:

“The ontological error of Cartesian rationalism is not to have seen that if the absolute is defined by the primacy of existence over essence, it cannot be conceived as a substance.”

So at the very least, it seems mistaken to say that Sartre is a substance dualist given his position on existence preceding essence, even if he may be adequately described as some kind of property dualist.

“But, again like every humanism, it leaves unasked the question: What is the root of man?  In this search for roots for man-a search that has, as we have seen, absorbed thinkers and caused the malaise of poets for the last hundred and fifty years-Sartre does not participate.  He leaves man rootless.”

Sartre may not have explicitly described any kind of truth for man that wasn't a truth of the intellect, unlike Heidegger with his conception of the truth of Being, but Sartre did describe a pre-reflective aspect of consciousness that didn't seem to be identified as either the subject or anything about the subject.  There's a mysterious aspect of consciousness that's simply hard to pin down, and it may be the kind of ethereal concept that one could equate with some pre-propositional realm of Being.  This may not be enough to say that Sartre, in the end, leaves us rooted (so to speak), but it seems to be a concept that implicitly pulls his philosophy in that direction.

2. Literature as a Mode of Action

Sartre places a lot of importance on literature as a means of expressing the author’s freedom in a way that depends on an interaction with the freedom of the reader.

“…the perfect example of what he believes a writer should do and what he himself tries to do in his own later fiction: that is, grapple with the problems of man in his time and milieu.”

As mentioned earlier in this post, Sartre's personal experience during the time of the French Resistance colored his written works with an imperative to assert humanity's radical freedom, and to do so in relation to the historical developments that affected the masses, the working class, and the oppressed.

“It is always to the idea, and particularly the idea as it leads to social action, that Sartre responds.  Hence he cannot do justice, either in his critical theory or in his actual practice of literary criticism, to poetry, which is precisely that form of human expression in which the poet-and the reader who would enter the poet’s world-must let Being be, to use Heidegger’s phrase, and not attempt to coerce it by the will to action or the will to intellectualization.  The absence of the poet in Sartre, as a literary man, is thus another evidence of what, on the philosophical level, leads to a deficiency in his theory of Being.”

And though Sartre may be no kind of poet, that doesn't mean his work lacks some of the subjective, emotionally charged character one finds in poetry.  Still, Barrett's point is well taken: Sartre's works don't have the aesthetic quality or the irrationality of most popular poets, nor do they make as much use of metaphor.

“In discharging his freedom man also wills to accept the responsibility of it, thus becoming heavy with his own guilt.  Conscience, Heidegger has said, is the will to be guilty-that is, to accept the guilt that we know will be ours whatever course of action we take.”

Freedom and responsibility clearly go hand in hand for Sartre (and for Heidegger as well), and this very basic idea makes sense from the perspective that if one believes themselves to be free, then they must necessarily accept that they own their actions insofar as they were done freely.  As for conscience, rather than Heidegger’s conception of the will to be guilty, I’d prefer to describe it as simply a manifestation of our perception of both responsibility and duty, although one can also aptly describe conscience as a product of evolution that tends to produce in us a more pro-social set of behaviors.

The tendency for many to try and absolve themselves of their freedom, by submitting themselves to the conception of self that society or others draw up for them is addressed in Sartre’s 1944 French play No Exit.  In this play, the three main characters find themselves in Hell, being punished in a way that is analogous to that of Dante’s Inferno, in this case taunted by a symbol representing the inauthentic or superficial way they lived their lives:

“Having practiced “bad faith” in life-which, in Sartre’s terms, is the surrendering of one’s human liberty in order to possess, or try to possess, one’s being as a thing-the three characters now have what they had sought to surrender themselves to.  Having died, they cannot change anything in their past lives, which are exactly what they are, no more and no less, just like the static being of things…But this is exactly what they long for in life-to lose their own subjective being by identifying themselves with what they were in the eyes of other people.”

By defining ourselves in any essential or fixed way, for example, by defining ourselves based on the unchangeable past (as all its features are indeed fixed), we lose the freedom we’d have if our identity had been directed toward the uncertain future and the possible goals and projects contained therein.  In this case with No Exit, the characters were defining themselves based on the desires and expectations of society or at least of several members of society that these patrons of Hell deemed high in status or social clout.

And going back to the relation between freedom and responsibility, we can see how one might try to rid themselves of responsibility by holding the attitude that they have no freedom regarding who they are or what they do, that they must do what they do (perhaps because society or others "say so", though there may be other reasons).  This can be a slippery thing to contemplate, however, when we consider different conceptions of free will.

While it's technically true that we don't have any kind of causa sui free will (since it's logically impossible to have this in a deterministic or random/indeterministic world), it's still useful to conceive of a kind of freedom of the will that distinguishes between animals or people with varying degrees of autonomy.  This is similar to the conception of free will that a court of law uses to distinguish between instances of coercion or not being of sound mind, on the one hand, and conscious intentions made with a reasonable capacity for evaluating the legality or consequences of those intentions, on the other.

It's also important to account for the fact that we can't predict all of our own actions (nor the actions of others), and so there's also a folk psychological concept of free will implied by this uncertainty.  Most of these conceptions are psychologically useful, they play a positive role in maintaining the efficacy of our systems of punishment and reward, and overall they resonate with how we live our lives as human beings.

In general then, I think that Sartre's tying of freedom and responsibility together jibes with our folk psychological conceptions of free will, and it promotes a very functional and motivating way to live one's life.

“As a writer Sartre is always the impassioned rhetorician of the idea; and the rhetorician, no matter how great and how eloquent his rhetoric, never has the full being of the artist.  If Sartre were really a poet and an artist, we would have from him a different philosophy…”

Although Barrett seems to be pigeonholing artists to some degree here (which isn't entirely fair to Sartre), it's true that Sartre's written works are not artistic in any traditional or classical sense.  Even so, I think it's fair to say that postmodern art has benefited from and been influenced by the works of many great thinkers, including Sartre.  And the task of the artist is often to inform us of our own human condition in its particular time and place, including many aspects of that condition that aren't typically talked about or explicitly present in our stream of consciousness.  Sartre's works, both written and performed, did just that: they informed us of our modern condition and of a number of interesting perspectives on our own human psychology, and they did so in a thoroughly existential way.  Sartre was an artist then, in my mind at least.

3. Existential Psychology

“Behind all Sartre’s intellectual dialectic we perceive that the In-itself is for him the archetype of nature: excessive, fruitful, blooming nature-the woman, the female.  The For-itself, by contrast, is for Sartre the masculine aspects of human psychology: it is that in virtue of which man chooses himself in his radical liberty, makes projects, and thereby gives his life what strictly human meaning it has.”

And here some of the sexist nuances, both explicit and implicit in the minds of Sartre and his contemporaries, start to become apparent.  The idea that nature has more of a female character is the more justifiable and sensible of the two, since females are the only sex that gives birth to new life and continues the cycle; but the idea that male traits subsume the human qualities of choice, freedom, and worthwhile projects is one resulting from historical dominance hierarchies, which were traditionally patriarchal or androcentric.  Modern human life has shown this to be no longer applicable to humans generally.  In any case, we can still use traditional archetypes that make use of this kind of sexual categorization, as long as we're careful to avoid taking those archetypes so seriously that they feed an essentialist view of the human sexes and what our roles in society ought to be.

“The essence of man…lies not in the Oedipus complex (as Freud held) nor in the inferiority complex (as Adler maintained); it lies rather in the radical liberty of man’s existence by which he chooses himself and so makes himself what he is.”

And this of course is just a reiteration of the Sartrean adage that existence precedes essence, but it's interesting to see that Sartre seems to dismiss much of modern psychology, evolutionary psychology, and any role for nature or innate predispositions in human beings.  One may also wonder how, in Sartre's mind, humans can be free of traditional essentialism while he simultaneously assumes roles or archetypes that are essentially male and female.  I don't think we can avoid essentialism in its entirety, even if the crux of Sartre's existential adage is valid.  And I don't think Sartre avoids it either, even when he tries to.

“Man is not to be seen as the passive plaything of unconscious forces, which determine what he is to be.  In fact, Sartre denies the existence of an unconscious mind altogether; wherever the mind manifests itself, he holds, it is conscious.”

I’m not sure if Sartre’s denial of an unconscious mind is just a play on semantics (i.e. he’s defining “mind” exclusively as the neurological activity that directly leads to consciousness) or if he truly rejected the idea that there were any forces motivating our behavior that weren’t explicit in our consciousness.  If the latter is true, then how does Sartre account for marketing strategies, cognitive biases, denial, moral intuition, and a host of other things that rely on or operate under psychological predispositions that are largely unknown to the person who possesses them?  The totality of our behavior simply can’t be accounted for without some form of neurological processing happening under the radar (so to speak), which serves some psychological purpose.

“A human personality or human life is not to be understood in terms of some hypothetical unconscious at work behind the scenes and pulling all the wires that manipulate the puppet of consciousness.  A man is his life, says Sartre; which means that he is nothing more nor less than the totality of acts that make up that life.”

This is interesting when viewed through the lens of the future, our intended projects, and the kind of person we may strive to be.  It seems that in order for the excerpt above to be correct, according to Sartre's (and perhaps Heidegger's) own reasoning, we'd have to include our intentions and goals as well; we can't simply look at the facticity of our past, but must also look at how the self transcends into the future.  And although Sartre or others may not want to identify themselves with some of the forces operating in their unconscious, the fact of the matter is that if we want to predict our own behavior and that of others as effectively as possible, then we have no choice but to make use of some concept of an unconscious mind, an unconscious psychology, and so on.  It may not be the primary way we ought to think of ourselves, but it's an important part of any viable model of human behavior.

Analogously, one may say that they don’t want to be identified with their brain (“I am not my brain!”), and yet if you asked them if they’d rather have a brain transplant or a body transplant (say, everything below the neck), they would undoubtedly choose the latter.  Why might this be so?  An obvious answer is because their entire personality, all that they value, believe, and know, is all made manifest in the structure of their brain.  Now we may see a brain on a table and have difficulty equating it with a person, and yet we can have an entire human body, only lacking a brain, and we have no person at all.  The only additional thing needed for personhood is that brain lying on the table, assuming it is a brain inherently capable of instantiating a personality, consciousness, etc.

Another aspect of Sartrean psychology is the conception of how the Other (some other conscious self) views us, given the constraints of consciousness and our inability to see another person from that person's own subjective point of view:

“This relation to the Other is one of the most sensational and best-known aspects of Sartre’s psychology.  To the other person, who looks at me from the outside, I seem an object, a thing; my subjectivity with its inner freedom escapes his gaze.  Hence his tendency is always to convert me into the object he sees.  The gaze of the Other penetrates to the depths of my existence, freezes and congeals it.  It is this, according to Sartre, that turns love and particularly sexual love into a perpetual tension and indeed warfare.  The lover wishes to possess the beloved, but the freedom of the beloved (which is his or her human essence) cannot be possessed; hence, the lover tends to reduce the beloved to an object for the sake of possessing it.”

I think this view has merit and has a lot to do with how our brains make simplistic or heuristic models of the entities and causal structure we perceive around us.  Although we can recognize that some of those other entities are subjective beings such as ourselves, living with the same kinds of intrinsic conscious experiences that we do, it’s often easier and more automatic to treat them as if they were an object.  We can’t directly experience another person’s subjectivity, which is what makes it so easy to escape our attention and consideration.  More to the point though, Sartre was claiming how this tendency to objectify another penetrates many facets of our lives, introducing a source of conflict with our relations to one another.  Ironically, this could also be described as something we do unconsciously, where we don’t explicitly say to ourselves “this person is an object,” but rather we may explicitly see them as a person and yet treat them as an object in order to serve some implicit psychological goal (e.g. to feel that you possess your significant other).

“He is right to make the liberty of choice, which is the liberty of a conscious action, total and absolute, no matter how small the area of our power: in choosing, I have to say No somewhere, and this No, which is total and totally exclusive of other alternatives, is dreadful; but only by shutting myself up in it is any resoluteness of action possible.”

Referring back to the beginning of this post, regarding the liberty of choice (in particular a kind of negative liberty) as the ultimate form of human freedom, we can also describe the choices we make in terms of the alternative choices we had to say no to first.  We can see our ability to say no as a means of creating a world for ourselves; a world with boundaries delineating what is and isn’t desired.  And the making of these choices, the effect that each choice has, and the world we create out of them are things that we and we alone own.  Understandably then, once this freedom and responsibility are recognized, they lead to our experiencing a kind of dread, anxiety, and perhaps a feeling of being overwhelmed by the sheer space of possibilities and thus overwhelmed by the number of possibilities that we have to willfully reject:

“A friend of mine, a very intelligent and sensitive man, was over a long period in the grip of a neurosis that took the form of indecision in the face of almost every occasion of life; sitting in a restaurant, he could not look at the printed menu to choose his lunch without seeing the abyss of the negative open before his eyes, on the page, and so falling into a sweat…(this story) confirms Sartre’s analysis of freedom, for only because freedom is what he says it is could this man have been frightened by it and have retreated into the anxiety of indecision.”

This story also reminded me of the different perspectives that can result depending on whether one is a maximizer or a satisficer: the maximizer being the person who tries to exhaustively analyze every decision they make, hoping to maximize the benefits brought about by their careful decision-making; and the satisficer being the person who is much more easily pleased, willing to make a decision fairly quickly, and not worried about whether their decision was the best possible so much as whether it was simply good enough.  On average, maximizers are also far less happy, despite the care they take in making decisions, simply because they fret over whether the decision they landed on was really best after all; and they tend to enjoy life less than satisficers because they miss many experiences, and the natural flow of those experiences, as a result of their constant worrying.  The maximizer concept fits in well with Sartre's conception of the anxiety of choice brought about by the void and negation implicit in decision making.
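
To make the contrast concrete, here's a minimal toy sketch of the two strategies.  The setup is entirely my own invention for illustration: options are just hidden numeric "utilities", the maximizer inspects every option before choosing, and the satisficer takes the first option that clears a "good enough" threshold.

```python
# Toy illustration of maximizer vs. satisficer decision strategies.
# Assumption: each option carries a hidden utility in [0, 1], and
# "effort" is simply the number of options evaluated before choosing.

import random

def maximizer(options):
    """Exhaustively evaluate every option and return the best one."""
    return max(options), len(options)

def satisficer(options, good_enough=0.7):
    """Return the first option that clears the 'good enough' threshold."""
    for evaluated, utility in enumerate(options, start=1):
        if utility >= good_enough:
            return utility, evaluated
    return max(options), len(options)  # nothing cleared the bar; settle for the best

menu = [random.random() for _ in range(30)]  # 30 options with hidden utilities

for strategy in (maximizer, satisficer):
    utility, effort = strategy(menu)
    print(f"{strategy.__name__:>10}: utility={utility:.2f}, options evaluated={effort}")
```

In this caricature the maximizer usually wins on utility by a small margin, but at the cost of evaluating every option; the satisficer typically stops early with something close to the best, which is the trade-off the research on decision-making satisfaction describes.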

I for one try to be a satisficer, even if I occasionally catch myself trying to maximize my decision making, and I do this for the same reason that I try to be an optimist rather than a pessimist: people live happier and more satisfying lives if they maintain a positive attitude and aren't overly concerned with every choice they make.  We don't want to make poor decisions by making them in haste or without any care, but we also don't want to invest too many of our psychological resources in making those decisions, lest we end up at the end of our lives realizing that we've missed out on a good life.

“…the example points up also where Sartre’s theory is decidedly lacking: it does not show us the kind of objects in relation to which our human subjectivity can define itself in a free choice that is meaningful and not neurotic.  This is so because Sartre’s doctrine of liberty was developed out of the experience of extreme situations: the victim says to his totalitarian oppressor, No, even if you kill me; and he shuts himself up in this No and will not be shaken from it.”

I agree with Barrett entirely here.  Much of Sartre's philosophy was developed out of an experience of oppression under the threat of violence, and this quality makes it harder to apply to an everyday life that's not embroiled in tyranny or fascism.

“…But he who shuts himself up in the No can be demoniacal, as Kierkegaard pointed out; he can say No against himself, against his own nature.  Sartre’s doctrine of freedom does not really comprehend the concrete man who is an undivided totality of body and mind, at once, and without division, both In-itself and For-itself; but rather an isolated aspect of this total condition, the aspect of man always at the margin of his existence.”

I think the biggest element missing from Sartre’s conception of freedom or from his philosophy generally is the evolutionary and biologically grounded aspects of our psychology, the facts pertaining to our innate impulses and predispositions, our cognitive biases and other unconscious processes (which he outright denied the existence of).  This makes his specific formula of existence preceding essence far less tenable and realistic.  Barrett agrees with some of this reasoning as well when he says:

“Because Sartre’s psychology recognizes only the conscious, it cannot comprehend a form of freedom that operates in that zone of the human personality where conscious and unconscious flow into each other.  Being limited to the conscious, it inevitably becomes an ego psychology; hence freedom is understood only as the resolute project of the conscious ego.”

Indeed, the human being as a whole includes both the conscious and unconscious aspects of ourselves.  The subjective experience that defines us is a product of both sides of our psyche, not to mention the interplay of the psyche as a whole with the world around us that we’re intimately connected to (think of Heidegger’s field of Being).

The final point I'd like to bring up regarding Sartre concerns his atheism:

“It has been remarked that Kierkegaard’s statement of the religious position is so severe that it has turned many people who thought themselves religious to atheism.  Analogously, Sartre’s view of atheism is so stark and bleak that it seems to turn many people toward religion.  This is exactly as it should be.  The choice must be hard either way; for man, a problematic being to his depths, cannot lay hold of his ultimate commitments with a smug and easy security.”

It's understandable that most people gravitate toward religion, given the anxiety and fear that can accompany a worldview in which one has to take on the burden of making meaning for one's own life rather than having someone else or some concept "decide" this for them.  Add to this the fear of being alone, the fear of death, and the fear of a generally precarious existence, especially given that we live in an inhospitable universe where life appears to occupy a region of space comparable to the size of a proton within a large room.

All of these reasons lend themselves to reinforcing a desire for a deity and for a narrative that defines who we ought to be, so we don’t have to decide this for ourselves.  Personally, I don’t think one can live authentically with these kinds of world views, nor is it morally responsible to live life with belief systems that inhibit the ability to distinguish between fact and fiction.  But I still understand that it’s far more difficult to reject the comforts of religion and face the radical contingencies of life and the burden of choice.  I think people can live the most fulfilling lives in the most secure way only when they have a reliable epistemology in place and only when they take the limits of human psychology seriously.  We need to understand that some ways of living, some attitudes, some ways of thinking, etc., are better than others for living sustainable, fulfilling lives.

Sartre’s atheistic philosophy is certainly bleak, but it does not represent the philosophy of atheists generally.  My atheistic philosophy of life, for example, takes into account the usefulness of multiple descriptions (including holistic descriptions) of our existence; the recognition of our desire to connect to the transcendent; the need for morality, meaning, and purpose; and the need for emotional channels of expression.  And regardless of its bleakness, Sartre’s philosophy has some useful ideas, and any philosophy that’s likely to be successful will gather what works from a number of different schools of thought and philosophies anyway.

Well, this concludes the post on Jean-Paul Sartre (Ch. 10), and ends part 3 of this series on William Barrett’s Irrational Man.  The final post in this series is on part 4, Integral vs. Rational Man, which consists of chapter 11, the final chapter in Barrett’s book, The Place of the Furies.  I will post the link here when that post is complete.

Irrational Man: An Analysis (Part 3, Chapter 8: Nietzsche)

In the last post in this series on William Barrett’s Irrational Man, we began exploring Part 3: The Existentialists, beginning with chapter 7 on Kierkegaard.  In this post, we’ll be taking a look at Nietzsche’s philosophy in more detail.

Ch. 8 – Nietzsche

Nietzsche shared many of the same concerns as Kierkegaard, given the new challenges of modernity brought about by the Enlightenment; but each of these great thinkers approached this new chapter of our history from perspectives that were, in many ways, diametrically opposed.  Kierkegaard had a grand goal of sharing with others what he thought it meant to be a Christian, and he tried to revive Christianity within a society that he found to be increasingly secularized.  He also saw that what Christianity did exist was largely organized and depersonalized, and he felt the need to stress the importance of a much more personal form of the religion.  Nietzsche, on the other hand, an atheist, rejected Christianity and stressed the importance of re-establishing our values given the fact that modernity now found itself in a godless world: a culturally Christianized world, but one without a Christ.

Despite their differences, both philosophers felt that discovering the true meaning of our lives was paramount to living an authentic life, and this new quest for meaning was largely related to the fact that our view of ourselves had changed significantly:

“By the middle of the nineteenth century, as we have seen, the problem of man had begun to dawn on certain minds in a new and more radical form: Man, it was seen, is a stranger to himself and must discover, or rediscover, who he is and what his meaning is.”

Nietzsche also understood and vividly illustrated the crucial differences between mankind and the rest of nature, most especially our self-awareness:

“It was Nietzsche who showed in its fullest sense how thoroughly problematical is the nature of man: he can never be understood as an animal species within the zoological order of nature, because he has broken free of nature and has thereby posed the question of his own meaning-and with it the meaning of nature as well-as his destiny.”

And for Nietzsche, the meaning one was to find for one’s life included first abandoning what he saw as an oppressive and disempowering Christian morality, which he believed severely inhibited our potential as human beings.  He was much more sympathetic to Greek culture and morality, and even though (perhaps ironically) Christianity itself was derived from a syncretism between Judaism and Greek Hellenism, its moral and psychologically relevant differences were too substantial to escape Nietzsche’s criticism.  He thought that a return to a more archaic form of Greek culture and mentality was the solution we needed to restore, or at least re-validate, our instincts:

“Dionysus reborn, Nietzsche thought, might become a savior-god for the whole race, which seemed everywhere to show symptoms of fatigue and decline.”

Nietzsche learned about the god Dionysus while he was studying Greek tragedy, and he gravitated toward the Dionysian symbolism, for it contained a kind of reconciliation between high culture and our primal instincts.  Dionysus was after all the patron god of the Greek tragic festivals and so was associated with beautiful works of art; but he was also the god of wine and ritual madness, bringing people together through the grape harvest and the joy of intoxication.  As Barrett describes it:

“This god thus united miraculously in himself the height of culture with the depth of instinct, bringing together the warring opposites that divided Nietzsche himself.”

It was this window into the hidden portions of our psyche that Nietzsche tried to look through, and it became an integral basis for much of his philosophy.  But the chaotic nature of our species combined with our immense power to manipulate the environment also concerned him greatly; he thought that the way modernity was repressing many of our instincts needed to be circumvented, lest we continue to build up this tension in our unconscious only for it to be released in an explosive eruption of violence:

“It is no mere matter of psychological curiosity but a question of life and death for man in our time to place himself again in contact with the archaic life of his unconscious.  Without such contact he may become the Titan who slays himself.  Man, this most dangerous of the animals, as Nietzsche called him, now holds in his hands the dangerous power of blowing himself and his planet to bits; and it is not yet even clear that this problematic and complex being is really sane.”

Indeed, our species has been removed from its natural evolutionary habitat; and we’ve built our modern lives around socio-cultural and technological constructs that have, in many ways at least, inhibited the open channel between our unconscious and conscious mind.  One could even say that the prefrontal cortex region of our brain, important as it is for conscious planning and decision making, has been effectively hijacked by human culture (memetic selection) and is now constantly running on overtime, over-suppressing the emotional centers of the brain (such as the limbic system).

While we’ve come to realize that emotional suppression is sometimes necessary in order to function well as a cooperative social species that depends on complex long-term social relationships, we also need to maintain a healthy dose of emotional expression as well.  If we can’t release this more primal form of “psychological energy” (for lack of a better term), then it shouldn’t be surprising to see it eventually manifest into some kind of destructive behavior.

We ought not forget that we’re just like other animals in terms of our brain being best adapted to a specific balance of behaviors and cognitive functions; and if this balance is disrupted through hyperactive or hypoactive use of one region or another, doesn’t it stand to reason that we may wind up with either an impairment in brain function or less healthy behavior?  If we’re not making a sufficient use of our emotional capacities, shouldn’t we expect them to diminish or atrophy in some way or other?  And if some neurological version of the competitive exclusion principle exists, then this would further reinforce the idea that our suppression of emotion by other capacities may cause those other capacities to become permanently dominant.  We can only imagine the kinds of consequences that ensue when this psychological impairment falls upon a species with our level of power.

1. Ecce Homo

“In the end one experiences only oneself,” Nietzsche observes in his Zarathustra, and elsewhere he remarks, in the same vein, that all the systems of the philosophers are just so many forms of personal confession, if we but had eyes to see it.”

Here Nietzsche is expressing one of his central ideas: we can’t separate ourselves from our thoughts, and therefore the ideas put forward by any philosopher betray some set of values they hold, whether unconsciously or explicitly, as well as their individual perspective; an unavoidable perspective stemming from subjective experience, which colors our interpretation of everything we encounter.  Perspectivism, or the view that reality and our ideas about what’s true and what we value can be mapped onto many different possible conceptual schemes, was a big part of Nietzsche’s overall philosophy, and it was interwoven into his ideas on meaning, epistemology, and ontology.

As enlightening as Nietzsche was in giving us a much needed glimpse into some of the important psychological forces that give our lives the shape they have (for better or worse), his written works show little if any psychoanalytical insight applied to himself, in terms of the darker and less pleasant side of his own psyche.

“Nietzsche’s systematic shielding of himself from the other side is relevant to his explanation of the death of God: Man killed God, he says, because he could not bear to have anyone looking at his ugliest side.”

But even if he didn’t turn his psychoanalytical eye inward toward his own unconscious, he was still very effective in shining a light on the collective unconscious of humanity as a whole.  It’s not enough that we marvel over the better angels of our nature, even though it’s important to acknowledge and understand our capacities and our potential for bettering ourselves and the world we live in; we also have to acknowledge our flaws and shortcomings and while we may try to overcome these weaknesses in one way or another, some of them are simply a part of our nature and we should accept them as such even if we may not like them.

One of the prevailing theories stemming from Western Rationalism (mentioned earlier in Barrett’s book) was that human beings were inherently rational and perfectible, and that it was within the realm of possibility to realize that potential; but this kind of wishful thinking relied on a fundamental denial of an essential part of the kind of animal we really are.  And Nietzsche did a good job of putting this fact in the limelight and explaining how we couldn’t truly realize our potential until we came to terms with this fairly ugly side of ourselves.

Barrett distinguishes between the attitudes stemming from various forms of atheism, including that of Nietzsche’s:

“The urbane atheism of Bertrand Russell, for example, presupposes the existence of believers against whom he can score points in an argument and get off some of his best quips.  The atheism of Sartre is a more somber affair, and indeed borrows some of its color from Nietzsche: Sartre relentlessly works out the atheistic conclusion that in a universe without God man is absurd, unjustified, and without reason, as Being itself is.”

In fact Nietzsche was more concerned with the atheists that hadn’t yet accepted the full ramifications of a Godless world, rather than believers themselves.  He felt that those who believed in God still had a more coherent belief system given many of their moral values and the meaning they attached to their lives, since they were largely derived from their religious beliefs (even if the religious beliefs were unwarranted or harmful).  Those that were atheists, many of them at least, hadn’t yet taken the time to build a new foundation for their previously held values and the meaning in their lives; and if they couldn’t rebuild this foundation then they’d have to change those values and what they find to be meaningful to reflect that shift in foundation.

Overall, Nietzsche even changes the game a little in terms of how the conception of God was to be understood in philosophy:

“If God is taken as a metaphysical object whose existence has to be proved, then the position held by scientifically minded philosophers like Russell must inevitably be valid: the existence of such an object can never be empirically proved.  Therefore, God must be a superstition held by primitive and childish minds.  But both these alternative views are abstract, whereas the reality of God is concrete, a thoroughly autonomous presence that takes hold of men but of which, of course, some men are more conscious than others.  Nietzsche’s atheism reveals the true meaning of God-and does so, we might add, more effectively than a good many official forms of theism.”

Even though God may not actually exist as a conscious being, the idea of God and the feelings associated with what one labels as an experience of God are as real as any other idea or experience, and so they should still be taken seriously even in a world where no gods exist.  As an atheist myself (formerly a born-again Protestant Christian), I’ve come to appreciate the real breadth of possible meaning attached to the term “God”.  Even though the God or gods described in various forms of theism do not track onto objective reality, as was pointed out by Russell, this doesn’t negate the subjective reality underlying theism.  People ascribe the label “God” to a set of certain feelings and forces that they feel are controlling their lives, their values, and their overall vision of the world.  Nietzsche had his own vision of the world as well, and so one could label this as his “god”, despite the confusion that this may cause when discussing God in metaphysical terms.

2. What Happens In “Zarathustra”; Nietzsche as Moralist

One of the greatest benefits of producing art is our ability to tap into multiple perspectives that we might otherwise never explore.  And by doing this, we stand a better chance of creating a window into our own unconscious:

“[Thus Spoke Zarathustra] was Nietzsche’s poetic work and because of this he could allow the unconscious to take over in it, to break through the restraints imposed elsewhere by the philosophic intellect. [in poetry] One loses all perception of what is imagery and simile-that is to say, the symbol itself supersedes thought, because it is richer in meaning.”

By avoiding the common philosophical trap of submitting oneself to a preconceived structure and template for processing information, poetry and other works of art can inspire new ideas and novel perspectives through the use of metaphor and allegory, emotion, and the seemingly infinite powers of our own imagination.  Not only is this true for the artist herself, but also for the spectator, who stands in a unique position to interpret what they see and to make sense of it the best they can.  And in the rarest of cases, one may experience something in a work of art that fundamentally changes them in the process.  Either way, when someone is presented with an unusual and ambiguous stimulus that they are left to interpret, they can hope to gain at least some access to their unconscious and to inspire an expression of their creativity.

In his Thus Spoke Zarathustra, Nietzsche was focusing on how human beings could ever hope to regain a feeling of completeness; for modernity had fractured the individual and placed too much emphasis on collective specialization:

“For man, says Schiller, the problem is one of forming individuals.  Modern life has departmentalized, specialized, and thereby fragmented the being of man.  We now face the problem of putting the fragments together into a whole.  In the course of his exposition, Schiller even referred back, as did Nietzsche, to the example of the Greeks, who produced real individuals and not mere learned abstract men like those of the modern age.”

A part of this reassembly process would necessarily involve reincorporating what Nietzsche thought of as the various attributes of our human nature, which many of us are in denial of (to go back to the point raised earlier in this post).  For Nietzsche, this meant taking on some characteristics that may seem inherently immoral:

“Man must incorporate his devil or, as he put it, man must become better and more evil; the tree that would grow taller must send its roots down deeper.”

Nietzsche seems to be saying that if there are aspects of our psychology that are normal albeit seemingly dark or evil, then we ought to structure our moral values along the same lines rather than in opposition to these more primal parts of our psyche:

“The whole of traditional morality, [Nietzsche] believed, had no grasp of psychological reality and was therefore dangerously one-sided and false.”

And there is some truth to this, though I wouldn’t go as far as Nietzsche does, as I believe that there are many basic and cross-cultural moral prescriptions that do in fact take many of our psychological needs into account.  Most cultures have some place for various virtues such as compassion, honesty, generosity, forgiveness, commitment, integrity, and many others; and these virtues have been shown within the fields of moral psychology and sociology to help us flourish and lead psychologically fulfilling lives.  We evolved as a social species that operates within some degree of a dominance hierarchy, and sure enough our psychological traits are what we’d expect given our evolutionary history.

However, it’s also true that we ought to avoid slipping into some form of a naturalistic fallacy; for we don’t want to simply assume that all the behaviors that were more common prior to modernity, let alone prior to civilization, were beneficial to us and our psychology.  For example, raping and stealing were far more common per capita before humans established some kind of civil state or society.  And a number of other behaviors that we’ve discovered to be good or bad for our physical or psychological health have been the result of a long process of trial and error, cultural evolution, and the cultural transmission of newfound ideas from one generation to the next.  In any case, we don’t want to assume that just because something has been a tradition or even a common instinctual behavior for many thousands or even hundreds of thousands of years, it is therefore a moral behavior.  Quite the contrary; instead we need to look closely at which behaviors lead to more fulfilling lives and overall maximal satisfaction, and then aim to maximize those behaviors over those that detract from this goal.

Nietzsche did raise a good point however, and one that’s supported by modern moral psychology; namely, the fact that what we think we want or need isn’t always in agreement with our actual wants and needs, once we dive below the surface.  Psychology has been an indispensable tool in discovering many of our unconscious desires and the best moral theories are going to be those that take both our conscious and unconscious motivations and traits into account.  Only then can we have a moral theory that is truly going to be sufficiently motivating to follow in the long term as well as most beneficial to us psychologically.  If one is basing their moral theory on wishful thinking rather than facts about their psychology, then they aren’t likely to develop a viable moral theory.  Nietzsche realized this and promoted it in his writings, and this was an important contribution as it bridged human psychology with a number of important topics in philosophy:

“On this point Nietzsche has a perfectly sober and straightforward case against all those idealists, from Plato onward, who have set universal ideas over and above the individual’s psychological needs.  Morality itself is blind to the tangle of its own psychological motives, as Nietzsche showed in one of his most powerful books, The Genealogy of Morals, which traces the source of morality back to the drives of power and resentment.”

I’m sure that drives of power and resentment have played some role in certain moral prescriptions and moral systems, but I doubt that this is true generally speaking; and even in cases where a moral prescription or system arose out of one of these unconscious drives, such as resentment, this doesn’t negate its validity nor demonstrate whether or not it’s warranted or psychologically beneficial.  The source of morality isn’t as important as whether or not the moral system itself works, though the sources of morality can be both psychologically relevant and revealing.

Our ego certainly has a lot to do with our moral behavior and whether or not we choose to face our inner demons:

“Precisely what is hardest for us to take is the devil as the personification of the pettiest, paltriest, meanest part of our personality…the one that most cruelly deflates one’s egotism.”

If one is able to accept both the brighter and darker sides of themselves, they’ll also be more likely to willingly accept Nietzsche’s idea of the Eternal Return:

“The idea of the Eternal Return thus expresses, as Unamuno has pointed out, Nietzsche’s own aspirations toward eternal and immortal life…For Nietzsche the idea of the Eternal Return becomes the supreme test of courage: If Nietzsche the man must return to life again and again, with the same burden of ill health and suffering, would it not require the greatest affirmation and love of life to say Yes to this absolutely hopeless prospect?”

Although I wouldn’t expect most people to embrace this idea, because it goes against our teleological intuitions of time and causation leading to some final state or goal, it does provide a valuable means of establishing whether or not someone is really able to accept this life for what it is, with all of its absurdities, pains, and pleasures.  If someone willingly accepts the idea, then by extension they likely accept themselves, their lives, and the rest of human history (and even the history of life on this planet, no less).  This doesn’t mean that by accepting the idea of the Eternal Return people have to put every action, event, or behavior on equal footing, for example, by condoning slavery or the death of millions as a result of countless wars, just because these events are to be repeated for eternity.  But one is still accepting that all of these events were necessary in some sense, that they couldn’t have been any other way, and this acceptance should translate into an attitude toward life marked by some kind of overarching contentment.

I think that the more common reaction to this kind of idea is slipping into some kind of nihilism, where a person no longer cares about anything, and no longer finds any meaning or purpose in their lives.  And this reaction is perfectly understandable given our teleological intuitions; most people don’t or wouldn’t want to invest time and effort into some goal if they knew that the goal would never be accomplished or knew that it would eventually be undone.  But, oddly enough, even if the Eternal Return isn’t an actual fact of our universe, we still run into the same basic situation within our own lifetimes whereby we know that there are a number of goals that will never be achieved because of our own inevitable death.

There’s also the fact that even if we did achieve a number of goals, once we die, we can no longer reap any benefits from them.  There may be other people that can benefit from our own past accomplishments, but eventually they will die as well.  Eventually all life in the universe will expire, and this is bound to happen long before the inevitable heat death of the universe transpires; and once it happens, there won’t be anybody experiencing anything at all let alone reaping the benefits of a goal from someone’s distant past.  Once this is taken into account, Nietzsche’s idea doesn’t sound all that bad, does it?  If we knew that there would always be time and some amount of conscious experience for eternity, then there will always be some form of immortality for consciousness; and if it’s eternally recurring, then we end up becoming immortal in some sense.

Perhaps the idea of the Eternal Return (or Eternal Recurrence), even though it predates Nietzsche by many hundreds if not a few thousand years, was motivated by his own fear of death.  Even though he thought of the idea as horrifying and paralyzing (not least because of all the undesirable parts of our own personal existence), since he didn’t believe in a traditional afterlife, maybe the fear of death motivated him to adopt such an idea, even though he also saw it as a physically plausible consequence of probability and the laws of physics.  Either way, beyond being far more plausible than the idea of an eternal paradise after death, it’s also a very different idea.  For one, memory and identity aren’t preserved, so one never actually experiences immortality when the “temporal loop” defining one’s lifetime starts all over again (the “movie” of one’s life is simply played over and over, unbeknownst to the person in the movie).  And further, the idea involves accepting the world the way it is for all eternity, rather than revolving around the way one wishes it were.  The Eternal Return fully engages with the existentialist reality of human life, with all its absurdities, contingencies, and the rest.

3. Power and Nihilism

Nietzsche, like Kierkegaard before him, was considered by many to be an unsystematic thinker, though Barrett points out that this view is mistaken when it comes to Nietzsche; it is a common view resulting from the largely aphoristic form his writings took.  Nietzsche himself even said that he was viewing science and philosophy through the eyes of art, but nevertheless his work eventually revealed a systematic structure:

“As thinking gradually took over the whole person, and everything else in his life being starved out, it was inevitable that this thought should tend to close itself off in a system.”

This systematization of his philosophy was revealed in the notes he was making for The Will to Power, a work he never finished but one that was more or less assembled and published posthumously by his sister Elisabeth.  The systematization was centered on the idea that a will to power was the underlying force that ultimately guided our behavior, striving to achieve the highest possible position in life as opposed to being fundamentally driven by some biological imperative to survive.  But it’s worth pointing out here that Nietzsche didn’t seem to be merely talking about a driving force behind human behavior, but rather he seemed to be referencing an even more fundamental driving force that was in some sense driving all of reality; and this more inclusive view would be analogous to Schopenhauer’s will to live.

It’s interesting to consider this perspective through the lens of biology: if we try to reconcile it with a Darwinian account of evolution in which survival is key, we could surmise that a will to power is but a mechanism for survival.  If we grant Nietzsche’s position, and that of the various anti-Darwinian thinkers who influenced him, that a will to survive (or survival more generally) is somehow secondary to a will to power, then we run into a problem: if survival is a secondary drive or goal, then we’d expect survival to be less important than power; but without survival you can’t have power, and thus it makes far more evolutionary sense that a will to survive is more basic than a will to power.

However, I am willing to grant that as soon as organisms began evolving brains along with the rest of their nervous systems, the will to power (or something analogous to it) became increasingly important; and by the time humans came on the scene with their highly complex brains, the will to power may have become manifest in our neurological drive to better predict our environment over time.  I wrote about this idea in a previous post exploring Nietzsche’s Beyond Good and Evil, where I explained what I saw as a correlation between Nietzsche’s will to power and the idea of the brain as a predictive processor:

“…I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, but with the added caveat that this autonomy serves to fulfill a desire for maximal causal power by harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try and change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try and change ourselves and our view of the world (up to and including changing our desires if we find them to be misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and then we could conceivably regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that I think that our view of the world as well as our actions and desires can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could…equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires). Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if they subjugate the other’s will to power by their own.  And one can also increase the success of their predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing our desires and actions as well.”

On the other hand, if we treat the brain’s predictive activity as a kind of mechanistic description of how the brain works rather than a type of drive per se, and if we treat our underlying drives as something that merely falls under this umbrella of description, then we might want to consider whether or not any of the drives that are typically posited in psychological theory (such as power, sex, or a will to live) are actually more basic than any other or typically dominant over the others.  Barrett suggests an alternative, saying that we ought to look at these elements more holistically:

“What if the human psyche cannot be carved up into compartments and one compartment wedged in under another as being more basic?  What if such dichotomizing really overlooks the organic unity of the human psyche, which is such that a single impulse can be just as much an impulse toward love on the one hand as it is toward power on the other?”

I agree with Barrett at least in the sense that these drives seem to be operating together much of the time, and when they aren’t in unison, they often seem to be operating on the same level at least.  Another way to put this could be to say that as a social species, we’ve evolved a number of different modular behavioral strategies that are typically best suited for particular social circumstances, though some circumstances may be such that multiple strategies will work and thus multiple strategies may be used simultaneously without working against each other.

But I also think that our conscious attention plays a role as well where the primary drive(s) used to produce the subsequent behavior may be affected by thinking about certain things or in a certain way, such as how much you love someone, what you think you can gain from them, etc.  And this would mean that the various underlying drives for our behavior are individually selected or prioritized based on one’s current experiences and where their attention is directed at any particular moment.  It may still be the case that some drives are typically dominant over others, such as a will to power, even if certain circumstances can lead to another drive temporarily taking over, however short-lived that may be.

The will to power, even if it’s not primary, would still help to explain some of the enormous advancements we’ve made in our scientific and technological progress, while also explaining (at least to some degree) the apparent disparity between our standard of living and overall physio-psychological health on the one hand, and our immense power to manipulate the environment on the other:

“Technology in the twentieth century has taken such enormous strides beyond that of the nineteenth that it now bulks larger as an instrument of naked power than as an instrument for human well-being.”

We could fully grasp Barrett’s point here by thinking about the opening scene in Stanley Kubrick’s 2001: A Space Odyssey, where “The Dawn of Man” was essentially made manifest with the discovery of tools, specifically weapons.  Once this seemingly insignificant discovery was made, it completely changed the game for our species.  While it gave us a new means of defending ourselves and an ability to hunt other animals, thus providing us with the requisite surplus of protein and calories needed for brain development and enhanced intelligence, it also catalyzed an arms race.  And with the advent of human civilization, our historical trajectory around the globe became largely dominated by our differential means of destruction; whoever had the biggest stick, the largest army, the best projectiles, and ultimately the most concentrated form of energy, would likely win the battle, forever changing the fate of the parties involved.

Indeed, war has been such a core part of human history that it’s no wonder Nietzsche stumbled upon such an idea; and even if he hadn’t, someone else likely would have, even if without the same level of creativity and intellectual rigor.  If our history has been so colored with war, which is a physical instantiation of a will to power in order to dominate another group, then we should expect many to see it as a fundamental part of who we are.  Personally, I don’t think we’re fundamentally war-driven creatures; rather, war results from our ability to engage in it combined with a perceived lack of, and desire for, resources, freedom, and stability in our lives.  But many of these goods, freedom in particular, imply a particular level of power over oneself and one’s environment, so even a fight for freedom is in one way or another a kind of will to power as well.

And what are we to make of our quest for power in terms of the big picture?  Theoretically, there is an upper limit to the amount of power any individual or group can possibly attain, and if one gets to a point where they are seeking power for power’s sake, and using their newly acquired power to harness even more power, then what will happen when we eventually hit that ceiling?  If our values become centered around power, and we lose any anchor we once had to provide meaning for our lives, then it seems we would be on a direct path toward nihilism:

“For Nietzsche, the problem of nihilism arose out of the discovery that “God is dead.” “God” here means the historical God of the Christian faith.  But in a wider philosophical sense it means also the whole realm of supersensible reality-Platonic Ideas, the Absolute, or what not-that philosophy has traditionally posited beyond the sensible realm, and in which it has located man’s highest values.  Now that this other, higher, eternal realm is gone, Nietzsche declared, man’s highest values lose their value…The only value Nietzsche can set up to take the place of these highest values that have lost their value for contemporary man is: Power.”

As Barrett explains, Nietzsche seemed to think that power was the only thing that could replace the eternal realm that we valued so much; but of course this does nothing to alleviate the problem of nihilism in the long term since the infinite void still awaits those at the end of the finite road to maximal power.

“If this moment in Western history is but the fateful outcome of the fundamental ways of thought that lie at the very basis of our civilization-and particularly of that way of thought that sunders man from nature, sees nature as a realm of objects to be mastered and conquered, and can therefore end only with the exaltation of the will to power-then we have to find out how this one-sided and ultimately nihilistic emphasis upon the power over things may be corrected.”

I think the answer to this problem lies in a combination of strategies, but with the overarching goal of maximizing life fulfillment.  We need to reprogram our general attitude toward power such that it is truly instrumental rather than perceived as intrinsically valuable; and this means that our basic goal becomes accumulating power such that we can maximize our personal satisfaction and life fulfillment, and nothing more.  Among other things, the power we acquire should be centered around a power over ourselves and our own psychology; finding ways of living and thinking which are likely to include the fostering of a more respectful relationship with the nature that created us in the first place.

And now that we’re knee deep in the Information Age, we will be able to harness the potential of genetic engineering, artificial intelligence, and artificial reality; and within these avenues of research we will be able to change our own biology such that we can quickly adapt our psychology to the ever-changing cultural environment of the modern world, and we’ll be able to free up our time to create any world we want.  We just need to keep the ultimate goal in mind, fulfillment and contentment, and direct our increasing power towards this goal in particular instead of merely chipping away at it in the background as some secondary priority.  If we don’t prioritize it, then we may simply perpetuate and amplify the meaningless sources of distraction in our lives, eventually getting lost in the chaos, and forgetting about our ultimate potential and the imperatives needed to realize it.

I’ll be posting a link here to part 9 of this post-series, exploring Heidegger’s philosophy, once it has been completed.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part V)

In the previous post, part 4 in this series on Predictive Processing (PP), I explored some aspects of reasoning and how different forms of reasoning can be built from a foundational bedrock of Bayesian inference (click here for parts 1, 2, or 3).  This has a lot to do with language, but I also claimed that it depends on how the brain generates new models, which I think likely involves some kind of natural selection operating on neural networks.  The hierarchical structure of the generative models for these predictions, as described within a PP framework, also seems to fit well with the hierarchical structure that we find in the brain’s neural networks.  In this post, I’m going to talk about the relation between memory, imagination, and unconscious and conscious forms of reasoning.

Memory, Imagination, and Reasoning

Memory is of course crucial to the PP framework whether for constructing real-time predictions of incoming sensory information (for perception) or for long-term predictions involving high-level, increasingly abstract generative models that allow us to accomplish complex future goals (like planning to go grocery shopping, or planning for retirement).  Either case requires the brain to have stored some kind of information pertaining to predicted causal relations.  Rather than memories being some kind of exact copy of past experiences (where they’d be stored like data on a computer), research has shown that memory functions more like a reconstruction of those past experiences which are modified by current knowledge and context, and produced by some of the same faculties used in imagination.

This accounts for any false or erroneous aspects of our memories, where the recalled memory can differ substantially from how the original event was experienced.  It also accounts for why our memories become increasingly altered as more time passes.  Over time, we learn new things, continuing to change many of our predictive models about the world, and thus the reconstructive process becomes more involved the older the memories are.  And the context we find ourselves in when trying to recall certain memories further affects this reconstruction process, adapting our memories in some sense to better match what we find most salient and relevant in the present moment.

Conscious vs. Unconscious Processing & Intuitive Reasoning (Intuition)

Another attribute of memory is that it is primarily unconscious, where we seem to have this pool of information that is kept out of consciousness until parts of it are needed (during memory recall or other conscious thought processes).  In fact, within the PP framework we can think of most of our generative models (predictions), especially those operating in the lower levels of the hierarchy, as being out of our conscious awareness as well.  However, since our memories are composed of (or reconstructed with) many higher-level predictions, and since only a limited number of them can enter our conscious awareness at any moment, this implies that most of the higher-level predictions are also being maintained or processed unconsciously.

It’s worth noting however that when we were first forming these memories, a lot of the information was in our consciousness (the higher-level, more abstract predictions in particular).  Within PP, consciousness plays a special role since our attention modifies what is called the precision weight (or synaptic gain) on any prediction error that flows upward through the predictive hierarchy.  This means that the prediction errors produced from the incoming sensory information or at even higher levels of processing are able to have a greater impact on modifying and updating the predictive models.  This makes sense from an evolutionary perspective, where we can ration our cognitive resources in a more adaptable way, by allowing things that catch our attention (which may be more important to our survival prospects) to have the greatest effect on how we understand the world around us and how we need to act at any given moment.
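To make the role of precision weighting a little more concrete, here’s a minimal toy sketch in Python; this is my own illustrative simplification, not a standard PP implementation, and the function name, scalar state, and learning rate are all assumptions chosen purely for illustration.  The idea is simply that a prediction error gets scaled by its precision before it’s allowed to update the model, so attended (high-precision) errors revise the model far more than unattended ones:

```python
def update_prediction(prediction, observation, precision, learning_rate=0.1):
    # Prediction error: mismatch between incoming input and the model's guess.
    prediction_error = observation - prediction
    # Precision acts as a gain on the error; attention would raise it and
    # inattention would lower it, gating how much the model is revised.
    return prediction + learning_rate * precision * prediction_error

model = 0.0
attended   = update_prediction(model, observation=1.0, precision=1.0)   # 0.10
unattended = update_prediction(model, observation=1.0, precision=0.05)  # 0.005
print(attended, unattended)  # the attended error updates the model 20x more
```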

The more often we encounter certain predicted causal relations in a conscious fashion, the more likely those predictions are to become automated or unconsciously processed.  And if this has happened with certain rules of inference that govern how we manipulate and process many of our predictive models, it seems reasonable to suspect that this would contribute to what we call our intuitive reasoning (or intuition).  After all, intuition seems to give people the sense of knowing something without knowing how it was acquired and without any present conscious process of reasoning.

This is similar to muscle memory or procedural memory (like learning how to ride a bike) which is consciously processed at first (thus involving many parts of the cerebral cortex), but after enough repetition it becomes a faster and more automated process that is accomplished more economically and efficiently by the basal ganglia and cerebellum, parts of the brain that are believed to handle a great deal of unconscious processing like that needed for procedural memory.  This would mean that the predictions associated with these kinds of causal relations begin to function out of our consciousness, even if the same predictive strategy is still in place.

As mentioned above, one difference between this unconscious intuition and other forms of reasoning that operate within the purview of consciousness is that our intuitions are less likely to be updated or changed based on new experiential evidence, since our conscious attention isn’t involved in the updating process.  This means that upward-flowing prediction errors that encounter downward-flowing predictions operating unconsciously will carry little precision weight, and so will have little impact on updating those predictions.  Furthermore, the fact that the most automated predictions are often those that we’ve been using for most of our lives means that they are also likely to have extremely high Bayesian priors, further isolating them from modification.
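As a toy illustration of how extremely high priors isolate a belief from modification, consider a simple conjugate Beta-Binomial update, a textbook example of Bayesian updating; the particular numbers are my own arbitrary choices.  The same ten pieces of disconfirming evidence barely budge a belief backed by a very strong prior:

```python
def beta_posterior_mean(prior_a, prior_b, successes, failures):
    # Conjugate Beta(a, b) prior updated by binomial evidence: the
    # posterior mean is (a + successes) / (a + b + successes + failures).
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Ten pieces of purely disconfirming evidence (0 successes, 10 failures):
weak_prior   = beta_posterior_mean(2, 2, 0, 10)      # ~0.14: belief shifts a lot
strong_prior = beta_posterior_mean(500, 500, 0, 10)  # ~0.50: belief barely moves
```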

Some of these priors may become what are called hyperpriors, or priors over priors (many of these believed to be established early in life), where there may be nothing that can overcome them, because they describe an extremely abstract feature of the world.  An example of a possible hyperprior could be one that demands that the brain settle on one generative model even when it’s comparable to several others under consideration.  One could call this a “tie breaker” hyperprior: if the brain didn’t have this kind of predictive mechanism in place, it might never be able to settle on a model, causing it to see the world (or some aspect of it) as a superposition of equiprobable states rather than as one determinate state.  We can see the potential problem for an organism’s survival prospects if it didn’t have this kind of hyperprior in place.  Whether a hyperprior like this is a form of innate specificity or is acquired in early learning is debatable.

An obvious trade-off with intuition (or any kind of innate biases) is that it provides us with fast, automated predictions that are robust and likely to be reliable much of the time, but at the expense of not being able to adequately handle more novel or complex situations, thereby leading to fallacious inferences.  Our cognitive biases are also likely related to this kind of unconscious reasoning whereby evolution has naturally selected cognitive strategies that work well for the kind of environment we evolved in (African savanna, jungle, etc.) even at the expense of our not being able to adapt as well culturally or in very artificial situations.

Imagination vs. Perception

One large benefit of storing so much perceptual information in our memories (predictive models with different spatio-temporal scales) is our ability to re-create it offline (so to speak).  This is where imagination comes in, where we are able to effectively simulate perceptions without requiring a stream of incoming sensory data that matches it.  Notice however that this is still a form of perception, because we can still see, hear, feel, taste and smell predicted causal relations that have been inferred from past sensory experiences.

The crucial difference, within a PP framework, is the role of precision weighting on the prediction error, just as we saw above in terms of trying to update intuitions.  If precision weighting is set or adjusted to be relatively low with respect to a particular set of predictive models, then prediction error will have little if any impact on the model.  During imagination, we effectively decouple the bottom-up prediction error from the top-down predictions associated with our sensory cortex (by reducing the precision weighting of the prediction error), thus allowing us to intentionally perceive things that aren’t actually in the external world.  We need not decouple the error from the predictions entirely, as we may want our imagination to somehow correlate with what we’re actually perceiving in the external world.  For example, maybe I want to watch a car driving down the street and simply imagine that it is a different color, while still seeing the rest of the scene as I normally would.  In general though, it is this decoupling “knob” that we can turn (precision weighting) that underlies our ability to produce and discriminate between normal perception and our imagination.
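Continuing with the toy update rule sketched earlier (again, my own simplified illustration rather than an actual PP model), this decoupling “knob” can be pictured as nothing more than lowering the precision on the sensory prediction error, letting the internal model free-run regardless of what the senses report:

```python
def perceive(prediction, sensory_input, precision, learning_rate=0.5):
    # One step of toy perception: correct the top-down prediction by the
    # precision-weighted bottom-up prediction error.
    return prediction + learning_rate * precision * (sensory_input - prediction)

sensory_stream = [1.0, 1.0, 1.0, 1.0]  # the world keeps saying "1.0"

# Normal perception: high precision couples the model to the input.
model = 0.0
for s in sensory_stream:
    model = perceive(model, s, precision=1.0)
# model is pulled toward 1.0 (about 0.94 after four steps)

# Imagination: precision turned way down decouples the model, so an
# internally generated state persists despite contradictory input.
imagined = 42.0  # stand-in for some internally simulated content
for s in sensory_stream:
    imagined = perceive(imagined, s, precision=0.01)
# imagined stays near 42.0, barely nudged by the sensory stream
```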

So what happens when we lose the ability to control our perception in a normal way (whether consciously or not)?  Well, this usually results in our having some kind of hallucination.  Since perception is often referred to as a form of controlled hallucination (within PP), we could better describe a pathological hallucination (such as that arising from certain psychedelic drugs or a condition like schizophrenia) as a form of uncontrolled hallucination.  In some cases, even with a perfectly normal, healthy brain, when the prediction error simply can’t be minimized enough, or the brain continuously switches between models based on what we’re looking at, we experience perceptual illusions.

Whether it’s illusions, hallucinations, or any other kind of perceptual pathology (like not being able to recognize faces), PP offers a good explanation for why these kinds of experiences can happen to us.  It’s either because the models are poor (in their causal structure or priors) or because something isn’t being controlled properly, like the delicate balance between precision weighting and prediction error, any of which could result from an imbalance in neurotransmitters or some kind of brain damage.

Imagination & Conscious Reasoning

While most people would tend to define imagination as that which pertains to visual imagery, I prefer to classify all conscious experiences that are not directly resulting from online perception as imagination.  In other words, any part of our conscious experience that isn’t stemming from an immediate inference of incoming sensory information is what I consider to be imagination.  This is because any kind of conscious thinking is going to involve an experience that could in theory be re-created by an artificial stream of incoming sensory information (along with our top-down generative models that put that information into a particular context of understanding).  As long as the incoming sensory information was a particular way (any way that we can imagine!), even if it could never be that way in the actual external world we live in, it seems to me that it should be able to reproduce any conscious process given the right top-down predictive model.  Another way of saying this is that imagination is simply another word to describe any kind of offline conscious mental simulation.

This also means that I’d classify any and all kinds of conscious reasoning processes as yet another form of imagination.  Just as is the case with more standard conceptions of imagination (within PP at least), we are simply taking particular predictive models, manipulating them in certain ways in order to simulate some result with this process decoupled (at least in part) from actual incoming sensory information.  We may for example, apply a rule of inference that we’ve picked up on and manipulate several predictive models of causal relations using that rule.  As mentioned in the previous post and in the post from part 2 of this series, language is also likely to play a special role here where we’ll likely be using it to help guide this conceptual manipulation process by organizing and further representing the causal relations in a linguistic form, and then determining the resulting inference (which will more than likely be in a linguistic form as well).  In doing so, we are able to take highly abstract properties of causal relations and apply rules to them to extract new information.

If I imagine a purple elephant trumpeting and flying in the air over my house, even though I’ve never experienced such a thing, it seems clear that I’m manipulating several different types of predicted causal relations at varying levels of abstraction and experiencing the result of that manipulation.  This involves inferred causal relations like those pertaining to visual aspects of elephants, the color purple, flying objects, motion in general, houses, the air, and inferred causal relations pertaining to auditory aspects like trumpeting sounds and so forth.

Specific instances of these kinds of experienced causal relations have led to my inferring them as an abstract probabilistically-defined property (e.g. elephantness, purpleness, flyingness, etc.) that can be reused and modified to some degree to produce an infinite number of possible recreated perceptual scenes.  These may not be physically possible perceptual scenes (since elephants don’t have wings to fly, for example) but regardless I’m able to add or subtract, mix and match, and ultimately manipulate properties in countless ways, only limited really by what is logically possible (so I can’t possibly imagine what a square circle would look like).

What if I’m performing a mathematical calculation, like “adding 9 + 9”, or some other similar problem?  This appears (upon first glance at least) to be very qualitatively different than simply imagining things that we tend to perceive in the world like elephants, books, music, and other things, even if they are imagined in some phantasmagorical way.  As crazy as those imagined things may be, they still contain things like shapes, colors, sounds, etc., and a mathematical calculation seems to lack this.  I think the key thing to realize here is the fundamental process of imagination as being able to add or subtract and manipulate abstract properties in any way that is logically possible (given our current set of predictive models).  This means that we can imagine properties or abstractions that lack all the richness of a typical visual/auditory perceptual scene.

In the case of a mathematical calculation, I would be manipulating previously acquired predicted causal relations that pertain to quantity and changes in quantity.  Once I was old enough to infer that separate objects existed in the world, then I could infer an abstraction of how many objects there were in some space at some particular time.  Eventually, I could abstract the property of how many objects without applying it to any particular object at all.  Using language to associate a linguistic symbol for each and every specific quantity would lay the groundwork for a system of “numbers” (where numbers are just quantities pertaining to no particular object at all).  Once this was done, then my brain could use the abstraction of quantity and manipulate it by following certain inferred rules of how quantities can change by adding to or subtracting from them.  After some practice and experience I would now be in a reasonable position to consciously think about “adding 9 + 9”, and either do it by following a manual iterative rule of addition that I’ve learned to do with real or imagined visual objects (like adding up some number of apples or dots/points in a row or grid), or I can simply use a memorized addition table and search/recall the sum I’m interested in (9 + 9 = 18).
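As a loose analogy, and nothing more than a toy sketch of the two strategies just described, the “manual iterative rule” and the “memorized addition table” can be contrasted like this in Python:

```python
def add_by_counting(a, b):
    # The manual iterative rule: repeatedly take the successor,
    # like counting out apples or dots one at a time.
    total = a
    for _ in range(b):
        total += 1
    return total

# The memorized table: a direct lookup with no step-by-step procedure,
# analogous to automated recall of "9 + 9 = 18".
addition_table = {(i, j): i + j for i in range(10) for j in range(10)}

print(add_by_counting(9, 9))   # 18, derived by applying the rule
print(addition_table[(9, 9)])  # 18, recalled from the table
```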

Whether we consider imagining a purple elephant, mentally adding up numbers, thinking about what I’m going to say to my wife when I see her next, or trying to explicitly apply logical rules to some set of concepts, all of these forms of conscious thought or reasoning are simply different sets of predictive models that I’m manipulating in mental simulations until I arrive at a perception that’s understood in the desired context and that has minimal prediction error.

Putting it all together

In summary, I think we can gain a lot of insight by looking at all the different aspects of brain function through a PP framework.  Imagination, perception, memory, intuition, and conscious reasoning fit together very well when viewed as different aspects of hierarchical predictive models that are manipulated and altered in ways that give us a much more firm grip on the world we live in and its inferred causal structure.  Not only that, but this kind of cognitive architecture also provides us with an enormous potential for creativity and intelligence.  In the next post in this series, I’m going to talk about consciousness, specifically theories of consciousness and how they may be viewed through a PP framework.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 3 of 3)

This is part 3 of 3 of this post.  Click here to read part 2.

Reactive Devaluation

Studies have shown that if a claim is believed to have come from a friend or ally, the person receiving it will be more likely to think that it has merit and is truthful.  Likewise, if it is believed to have come from an enemy or some member of the opposition, it is significantly devalued.  This bias is known as reactive devaluation.  A few psychologists actually determined that the likeability of the source of the message is one of the key factors involved with this effect, and people will actually value information coming from a likeable source more than a source they don’t like, with the same order of magnitude that they would value information coming from an expert versus a non-expert.  This is quite troubling.  We’ve often heard about how political campaigns and certain debates are seen more as popularity contests than anything else, and this bias is likely a primary reason why this is often the case.

Unfortunately, a large part of society has painted a nasty picture of atheism, skepticism, and the like.  As it turns out, people who do not believe in a god (mainly the Christian god) are the least trusted minority in America, according to a sociological study performed a few years ago (Johanna Olexy and Lee Herring, “Atheists Are Distrusted: Atheists Identified as America’s Most Distrusted Minority, According to Sociological Study,” American Sociological Association News, May 3, 2006).  Regarding the fight against dogmatism, the rational thinker needs to carefully consider how to improve their likeability in the eyes of the recipients of their message, if nothing else by going above and beyond to remain kind and respectful and to maintain a positive and uplifting attitude while presenting their arguments.  In any case, the person arguing will always be up against whatever societal expectations and preferences reduce their likeability, so there’s also a need to remind others as well as ourselves about what matters most when discussing an issue: the message rather than the messenger.

The Backfire Effect & Psychological Reactance

Several of the cognitive biases already mentioned interfere with people’s abilities to accept new evidence that contradicts their beliefs.  To make matters worse, psychologists have discovered that people often react to disconfirming evidence by actually strengthening their beliefs.  This is referred to as the backfire effect, and it can render refutations futile much of the time; fortunately, there are some strategies that help to combat it.  The bias is largely exacerbated by a person’s emotional involvement and by the fact that people don’t want to appear unintelligent or incorrect in front of others.  Letting tempers cool before resuming a debate can therefore be quite helpful on the emotional front.  Additionally, if a person can show their opponent how accepting the disconfirming evidence will benefit them, the opponent will be more likely to accept it.  There are times when some of the disconfirming evidence is at least partially absorbed by the person, and they just need more time in a less tense environment to think things over.  If the situation gets too heated, or if a person risks losing face with peers around them, the ability to persuade that person only decreases.

Another similar cognitive bias is referred to as reactance, which is basically a motivational reaction to perceived infringements of one’s behavioral freedoms.  That is, if a person feels that someone is taking away their choices or limiting the number of them, such as when they are being heavily pressured into accepting a different point of view, they will often respond defensively, since it is only natural for a person to defend whatever freedoms they see themselves having or potentially losing.  Whenever one uses reverse psychology, one is in fact playing on at least an implicit knowledge of this reactance effect.  Does this mean that rational thinkers should use reverse psychology to persuade dogmatists to accept reason and evidence?  I personally don’t think this is the right approach, because it could also backfire, and I would prefer to be honest and straightforward even if that results in less efficacy.  At the very least, however, one needs to be aware of this bias and be careful with how one phrases arguments and presents evidence, maintaining a calm and respectful attitude during these discussions so that the other person doesn’t feel the need to defend themselves at the cost of ignoring evidence.

Pareidolia & Other Perceptual Illusions

Have you ever seen faces of animals or people while looking at clouds, shadows, or other ambiguous stimuli?  The brain has many pattern recognition modules and will often respond erroneously to vague or even random stimuli, such that we perceive something familiar even when it isn’t actually there.  This is called pareidolia, and it is a type of apophenia (i.e. seeing patterns in random data).  Not surprisingly, many people have reported seeing various religious imagery, most notably the faces of prominent religious figures, in ordinary phenomena, and others have erroneously heard “hidden messages” in records played in reverse, among similar illusory perceptions.  This results from the fact that the brain often fills in gaps, and if one expects to see a pattern (even if that expectation is unconsciously driven by some emotional significance), the brain’s pattern recognition modules can “over-detect” and produce false positives through excessive sensitivity, conflating unconscious imagery with one’s actual sensory experiences and thereby distorting perception.  It goes without saying that this cognitive bias is perfect for reinforcing superstitious or supernatural beliefs, because what people tend to see or experience from this effect are those things that are most emotionally significant to them.  After it occurs, people believe that what they saw or heard must have been real and therefore significant.

Most people are aware that hallucinations occur in some people from time to time, under certain neurological conditions (generally involving an imbalance of neurotransmitters).  A bias like pareidolia can effectively produce similar experiences, but without requiring the more specific neurological conditions that a textbook hallucination requires.  That is, the brain doesn’t need to be in any drug-induced state, nor does it need to be physically abnormal in any way, for this to occur, making it much more common than hallucinations.  Since pareidolia is compounded with one’s confirmation bias, and since the brain is constantly implementing cognitive dissonance reduction mechanisms in the background (as mentioned earlier), this can also result in groups of people having the same illusory experience, specifically if the group shares the same unconscious emotional motivations for “seeing” or experiencing something in particular.  These circumstances, along with the power of suggestion from even just one member of the group, can lead to what are known as collective hallucinations.  Since collective hallucinations like these inevitably lead to a feeling of corroboration and confirmation between the various people experiencing them (say, seeing a “spirit” or “ghost” of someone significant), they can reinforce one another’s beliefs, despite the fact that the experience was based on a cognitive illusion largely induced by powerful human emotions.  It’s likely no coincidence that this type of group phenomenon seems to occur only with members of a religion or cult, specifically those that are in a highly emotional and synchronized psychological state.  There have been many cases of large groups of people from various religions around the world claiming to see some prominent religious figure or other supernatural being, illustrating just how universal this phenomenon is within a highly emotional group context.

Hyperactive Agency Detection

There is a cognitive bias that is somewhat related to the aforementioned perceptual illusion biases which is known as hyperactive agency detection.  This bias refers to our tendency to erroneously assign agency to events that happen without an agent causing them.  Human beings all have a theory of mind that is based in part on detecting agency, where we assume that other people are causal agents in the sense that they have intentions and goals just like we do.  From an evolutionary perspective, we can see that our ability to recognize that others are intentional agents allows us to better predict another person’s behavior which is extremely beneficial to our survival (notably in cases where we suspect others are intending to harm us).

Unfortunately, our brain’s ability to detect agency is hyperactive in that it errs on the side of caution by over-detecting possible agency rather than under-detecting it (which could negatively affect our survival prospects).  As a result, people will often ascribe agency to things and events in nature that have no underlying intentional agent.  For example, people often anthropomorphize machines (implicitly ascribing agency to them) and get angry when they don’t work properly (such as an automobile that breaks down while driving to work), often with the feeling that the machine is “out to get us”, is “evil”, etc.  Most of the time, if we have feelings like this, they are immediately checked and balanced by our rational minds reminding us that “it’s only a car”, “it’s not conscious”, etc.  Similarly, when natural disasters or other such events occur, our hyperactive agency detection snaps into gear, trying to find “someone to blame”.  This cognitive bias explains quite well, at least as a contributing factor, why so many ancient cultures assumed that there were gods that caused earthquakes, lightning, thunder, volcanoes, hurricanes, etc.  Quite simply, they needed to ascribe the events as having been caused by someone.

Likewise, if we are walking in a forest and hear strange sounds in the bushes, we may often assume that some intentional agent is present (whether another person or some other animal), even though it may simply be the wind causing the noise.  Once again, we can see the evolutionary benefit of this agency detection even with the occasional “false positive”: the noise in the bushes could very well be a predator or another source of danger, so it is usually better for our brains to assume that the noise is coming from an intentional agent.  It’s also entirely possible that even in cases where the source of a sound was determined to be the wind, early cultures ascribed an intentional stance to the wind itself (e.g. a “wind” god, etc.).  In all of these cases, we can see how our hyperactive agency detection can lead to irrational, theistic, and other supernatural belief systems, and we need to be aware of such a bias when evaluating our intuitions of how the world works.
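The asymmetry between the two kinds of error can be made explicit with a back-of-the-envelope expected-cost calculation.  The probabilities and costs below are purely illustrative assumptions, but they show why a hair-trigger detector wins whenever a miss is vastly more costly than a false alarm:

```python
P_AGENT = 0.05          # assumed prior that a rustle is really an agent
COST_FALSE_ALARM = 1    # startled by the wind: cheap
COST_MISS = 1000        # ignoring a real predator: potentially fatal

def expected_cost(assume_agent):
    if assume_agent:
        return (1 - P_AGENT) * COST_FALSE_ALARM  # wrong only on false alarms
    return P_AGENT * COST_MISS                   # wrong only on misses

print(expected_cost(True))   # 0.95 -> assuming agency is the cheaper error
print(expected_cost(False))  # 50.0
```

So long as that cost asymmetry holds, “assume an agent” remains the cheaper policy even when agents are rare, which is precisely the over-detection described above.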

Risk Compensation

It is only natural that the more a person feels protected from various harms, the less carefully they tend to behave, and vice versa.  This tendency, sometimes referred to as risk compensation, is often harmless because, generally speaking, one who is well protected from harm really does have less to worry about than one who is not, and can take greater relative risks as a result of that larger cushion of safety.  Where it tends to cause problems, however, is in cases where the perceived level of protection is far higher than the actual level.  The worst case scenario that comes to mind is a person who believes in a supernatural source of protection, perhaps even one they believe provides the greatest protection possible, and who therefore thinks that no harm can come to them or that it doesn’t matter what happens to them.  In this scenario, their risk compensation is skewed to an extreme, and they will be more likely to behave irresponsibly because they lack realistic expectations of the consequences of their actions.  In a post-enlightenment society, we’ve acquired a higher responsibility to heed the knowledge gained from scientific discoveries so that we can behave more rationally as we learn more about the consequences of our actions.
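Here’s a toy way to see that failure mode (the formula and numbers are my own illustrative assumptions, not an established model): behavior scales with perceived protection, while the harm that actually lands depends on actual protection.

```python
def expected_harm(perceived_protection, actual_protection, base_risk=0.1):
    # Behavior scales with *perceived* protection...
    risk_taken = base_risk * (1 + perceived_protection)
    # ...but the harm that lands depends on *actual* protection.
    return risk_taken * (1 - actual_protection)

print(expected_harm(0.9, 0.9))  # ~0.019: well-calibrated confidence
print(expected_harm(0.9, 0.0))  # ~0.19:  illusory protection, ten times the harm
```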

It’s not at all difficult to see how the effects of various religious dogmas on one’s level of risk compensation can inhibit a society from making rational, responsible decisions.  We have several serious issues plaguing modern society, including those related to environmental sustainability, climate change, pollution, war, etc.  If believers in the supernatural take the attitude that everything is “in God’s hands” or that “everything will be okay” because of their belief in a protective god, they are far less likely to take these kinds of issues seriously, because they fail to acknowledge the obvious harm to everyone in society.  What’s worse, dogmatists are not only reducing their own safety, but also the safety of their children and of everyone else in society who is trying to behave rationally with realistic expectations.  If we had two planets, one for rational thinkers and one for dogmatists, the consequences of this bias would fall only on those who carry it.  The reality is that we share one planet, and thus we all suffer the consequences of any one person’s irresponsible actions.  If we want to make more rational decisions, especially those related to the safety of our planet’s environment, society, and posterity, people need to adjust their risk compensation so that it is based on empirical evidence and a rational, proven scientific methodology.  This clearly illustrates the need to phase out supernatural beliefs, since false beliefs that don’t correspond with reality can be quite dangerous to everybody, regardless of who holds them.  In any case, when one is faced with the choice between knowledge and ignorance, history has demonstrated which choice is best.

Final thoughts

The fact that we are living in an information age with increasing global connectivity, and in a society that is becoming increasingly reliant on technology, will likely help to combat the ill effects of these cognitive biases.  These factors are making it increasingly difficult to hide oneself from the evidence and from the proven efficacy of using the scientific method to form accurate beliefs about the world around us.  Societal expectations are also changing as a result, and it is becoming less and less socially acceptable to be irrational and dogmatic, or to not take scientific methodology seriously.  In all of these cases, having knowledge about these cognitive biases will help people to combat them.  Even if we can’t eliminate the biases, we can safeguard ourselves by preparing for any potential ill effects they may produce.  This is where reason and science come into play, providing us with the means to counter our cognitive biases by applying rational skepticism to all of our beliefs, and by using a logical and reliable methodology based on empirical evidence to arrive at beliefs that we can be justifiably confident (though never certain) in.  As Darwin once said, “Ignorance more frequently begets confidence than does knowledge; it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.”  Science has steadily ratcheted us toward a better understanding of the universe we live in, despite the fact that its results are often at odds with our intuitions about how the world is.  Only relatively recently has science started to provide us with information regarding why our intuitions are so often at odds with the scientific evidence.

We’ve made huge leaps within neuroscience, cognitive science, and psychology, and these discoveries are giving us the antidote that we need in order to reduce or eliminate irrational and dogmatic thinking.  Furthermore, those who study logic have an additional advantage in that they are more likely to spot fallacious arguments used to support this or that belief, and thus are less likely to be tricked or persuaded by arguments that appear sound and strong but actually aren’t.  People fall prey to fallacious arguments all the time, mostly because they haven’t learned how to spot them (something that courses in logic and other reasoning can help to mitigate).  Thus, overall I believe it is imperative that we integrate knowledge concerning our cognitive biases into our children’s educational curriculum, starting at a young age (in a format that is understandable and engaging, of course).  My hope is that if we start to integrate cognitive education (and complementary courses in logic and reasoning) as a part of our foundational education curriculum, we will see a cascade of positive benefits that will guide our society more safely into the future.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 2 of 3)

This is part 2 of 3 of this post.  Click here to read part 1.

The Feeling of Certainty & The Overconfidence Effect

There are a lot of factors that affect how certain we feel that one belief or another is in fact true.  One factor that affects this feeling of certainty is what I like to call the time equals truth fallacy.  With this cognitive bias, we tend to feel more certain about our beliefs as time progresses, despite not gaining any new evidence to support them.  Since time spent holding a particular belief is the only factor involved here, the effect is strongest in those who are relatively isolated or sheltered from new information.  So if a person has been indoctrinated with a particular set of beliefs (let alone irrational beliefs), and they are effectively “cut off” from any sources of information that could serve to refute those beliefs, it will become increasingly difficult to persuade them later on.  This situation sounds all too familiar: dogmatic groups will often isolate themselves and shelter their members from outside influences for this very reason.  If the group can isolate its members for long enough (or the members actively isolate themselves), the dogma effectively gets burned into them, and the brainwashing is far more successful.  That this cognitive bias has been exploited by various dogmatic and religious leaders throughout history is hardly surprising considering how effective it is.

Though I haven’t researched or come across any proposed evolutionary reasons behind the development of this cognitive bias, I do have at least one hypothesis pertaining to its origin.  I’d say that this bias may have resulted from the fact that the beliefs that matter most (from an evolutionary perspective) are those related to our survival goals (e.g. finding food sources or evading a predator).  So naturally, any belief that persisted for a long time without causing noticeable harm to an organism was, all else being equal, more likely to be a workable one (i.e. keep using whatever works and don’t fix what ain’t broke).  However, once we started adding many more beliefs to our repertoire, specifically those that didn’t directly affect our survival (including many beliefs in the supernatural), the same cognitive rules and heuristics were still being applied as before, even though these new types of beliefs (when false) haven’t been naturally selected against, because they haven’t been detrimental enough to our survival (at least not yet, or not enough to force a significant evolutionary change).  So once again, evolution may have produced this bias for very advantageous reasons, but it is a sub-optimal heuristic (as always), and one that has become a significant liability since we started evolving culturally as well.

Another related cognitive bias, and one that is quite well established, is the overconfidence effect, whereby a person’s subjective feeling of confidence in their judgements or beliefs is predictably higher than the actual objective accuracy of those judgements and beliefs.  In a nutshell, people tend to have far more confidence in their beliefs and decisions than is warranted.  A common example cited to illustrate the intensity of this bias involves people taking certain quizzes, claiming to be “99% certain” about their answers, and then finding out afterwards that they were wrong about half of the time.  In other cases, this overconfidence effect can be even worse.
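A simple way to see the effect is to compare stated confidence against the observed hit rate.  The quiz data below are made up purely for illustration:

```python
from collections import defaultdict

# (stated confidence, answered correctly?) for a hypothetical quiz-taker
answers = [(0.99, True), (0.99, False), (0.99, True), (0.99, False),
           (0.70, True), (0.70, True), (0.70, False)]

buckets = defaultdict(list)
for confidence, correct in answers:
    buckets[confidence].append(correct)

for confidence, results in sorted(buckets.items()):
    accuracy = sum(results) / len(results)
    print(f"claimed {confidence:.0%}, actually right {accuracy:.0%}")
# claimed 70%, actually right 67%
# claimed 99%, actually right 50%  <- the overconfidence gap
```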

In a sense, the feeling of certainty is like an emotion, which, like our other emotions, occurs as a natural reflex, regardless of the cause, and independently of reason.  Just like other emotions such as anger, pleasure, or fear, the feeling of certainty can be produced by a seizure, by certain drugs, and even by electrical stimulation of certain regions of the brain.  In all of these cases, even when no particular beliefs are being thought about, the brain can nevertheless produce a feeling of certainty.  Research has shown that the feeling of knowing or certainty can also be induced through various brainwashing and trance-inducing techniques, such as high repetition of words, rhythmic music, sleep deprivation (or fasting), and other types of social and emotional manipulation.  It is hardly a coincidence that many religions employ a number of these techniques within their repertoire.

The most important point to take away from learning about these confidence and certainty biases is that the feeling of certainty is not a reliable way of determining the accuracy of one’s beliefs.  Furthermore, one must realize that no matter how certain we may feel about a particular belief, we could be wrong, and oftentimes are.  Dogmatic belief systems such as those found in many religions are often propagated by a misleading and mistaken feeling of certainty, even if that feeling is more potent than any experienced before.  Often, when people ascribing to these dogmatic belief systems are questioned about the lack of empirical evidence supporting their beliefs, even after all of their arguments have been refuted, they simply reply, “I just know.”  Irrational and entirely unjustified responses like these illustrate the dire need for people to become aware of just how fallible their feeling of certainty can be.

Escalation of Commitment

If a person has invested their whole life in some belief system, then even if they encounter undeniable evidence that their beliefs were wrong, they are more likely to ignore it or rationalize it away than to modify their beliefs, and thus they will likely continue investing more time and energy in those false beliefs.  This is due to an effect known as escalation of commitment.  Basically, the higher the cumulative investment in a particular course of action, the more likely someone will feel justified in continuing to increase that investment, despite new evidence showing them that they’d be better off abandoning it and cutting their losses.  When it comes to trying to “convert” a dogmatic believer into a more rational free thinker, this irrational tendency severely impedes any chance of success, all the more so when that person has been invested in the dogma for a longer period of time, since they ultimately have a lot more to lose.  To put it another way, a person’s religion is often a huge part of their personal identity, so in order to accept evidence that refutes their beliefs, however undeniable, they would have to abandon a large part of themselves, which obviously makes that acceptance and intellectual honesty quite difficult.  Furthermore, if a person’s family or friends have all invested in the same false beliefs, then even if that person discovers that those beliefs are wrong, they risk being rejected by their family and friends and forever losing the relationships dearest to them.  We can also see how this escalation of commitment is further reinforced by the time equals truth fallacy mentioned earlier.

Negativity Bias

When I’ve heard various Christians proselytizing to me or others, one tactic that I’ve seen used over and over again is fear-mongering with theologically based threats of eternal punishment and torture.  If we don’t convert, we’re told, we’re doomed to burn in hell for eternity.  Their incessant use of this tactic suggests that it was likely effective on them, contributing to their own conversion (it was in fact one of the reasons for my former conversion to Christianity).  Similar tactics have been used in some political campaigns to persuade voters by deliberately scaring them into taking one position over another.  Though this strategy is more effective on some than others, there is an underlying cognitive bias in all of us that contributes to its efficacy.  This is known as the negativity bias.  With this bias, information or experiences of a more negative nature tend to have a greater effect on our psychological states and resulting behavior than positive information or experiences of equal intensity.

People will remember threats to their well-being far more than they remember pleasurable experiences.  This looks like another example of a simple survival strategy implemented in our brains.  Similar to my earlier hypothesis regarding the time equals truth heuristic, it is far more important to remember and avoid dangerous or life-threatening experiences than it is to remember and seek out pleasurable experiences, all else being equal.  It only takes one bad experience to end a person’s life, whereas missing out on any single pleasurable experience is rarely critical.  Therefore, it makes sense as an evolutionary strategy to allocate more cognitive resources and memory to avoiding dangerous and negative experiences, and a negativity bias helps us to accomplish that.

Unfortunately, just as with the time equals truth bias, since our cultural evolution has involved us adopting certain beliefs that no longer pertain directly to our survival, the heuristic is often being executed improperly or in the wrong context.  This increases the chances that we will fall prey to adopting irrational, dogmatic belief systems when they are presented to us in a way that utilizes fear-mongering and various forms of threats to our well-being.  When it comes to conceiving of an overtly negative threat, can anyone imagine one more significant than the threat of eternal torture?  It is, by definition, supposed to be the worst scenario imaginable, and thus it is the most effective kind of threat to play on our negativity bias, and lead to irrational beliefs.  If the dogmatic believer is also convinced that their god can hear their thoughts, they’re also far less likely to think about their dogmatic beliefs critically, for they have no mental privacy to do so.

This bias in particular reminds me of Blaise Pascal’s famous Wager, which basically asserts that it is better to believe in God than to risk the consequences of not doing so.  If God doesn’t exist, we suffer “only” a finite loss (according to Pascal).  If God does exist, then one’s belief leads to an eternal reward, and one’s disbelief leads to an eternal punishment.  Therefore, Pascal asserts, it is only logical to believe in God, since it is the safest position to adopt.  Unfortunately, Pascal’s premises are not sound, and therefore the argument fails.  For one, is belief in God sufficient to avoid the supposed eternal punishment, or does it require more than that, such as other religious tenets, declarations, or rituals?  Second, which god should one believe in?  Thousands of gods have been proposed by various believers over several millennia, and there were obviously many more that we have no historical record of, so Pascal’s Wager merely narrows the choice down to several thousand known gods.  Third, even setting aside the problem of choosing the correct god, if it turned out that there was no god, would a life spent with that belief not have carried a significant cost?  If the belief also involved a host of dogmatic moral prescriptions, rituals, and other specific ways to live one’s life, perhaps including the requirement to abstain from many pleasurable human experiences, this cost could be quite great.  Furthermore, if one applies Pascal’s Wager to any theoretical possibility that posits an infinite loss over a finite one, one would be inclined to apply the same principle to all such possibilities, which is obviously irrational.  It is clear that Pascal hadn’t thought this one through very well, and I wouldn’t doubt that his judgement during this apologetic formulation was highly clouded by his own negativity bias (since the fear of punishment seems to be the primary focus of his “wager”).  As was mentioned earlier, we can see how effective this bias has been on a number of religious converts, and we need to be diligent about watching out for information that is presented to us with threatening strings attached, because it can easily cloud our judgement and lead to the adoption of irrational beliefs and dogma.
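To see why the wager proves too much, it helps to write the expected-value reasoning out explicitly.  This is just a sketch of the decision-theoretic form of the argument; the probability used is an arbitrary illustration:

```python
def expected_value(p_god, reward_if_right, cost_if_wrong):
    return p_god * reward_if_right - (1 - p_god) * cost_if_wrong

# With infinite stakes, any nonzero probability makes "believe" dominate:
print(expected_value(0.001, float('inf'), 1))   # inf

# ...but the same move works for any infinite-stakes claim, including
# mutually exclusive gods, so it licenses contradictory conclusions:
for deity in ["god_A", "god_B_who_punishes_belief_in_god_A"]:
    print(deity, expected_value(0.001, float('inf'), 1))
```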

Belief Bias & Argument Proximity Effects

When we analyze arguments, we often judge their strength based on how plausible their conclusion is, rather than on how well they actually support that conclusion.  In other words, people tend to focus their attention on the conclusion of an argument, and if they think that the conclusion is likely to be true (based on their prior beliefs), this colors their perception of how strong or weak the argument itself appears to be.  This is obviously an incorrect way to analyze arguments: in logic, it is the premises and the form of the argument that determine whether the conclusion follows, not the other way around.  This implies that what is intuitive to us is often incorrect, and thus our cognitive biases are constantly at odds with logic and rationality.  This is why it takes a lot of practice to learn how to apply logic effectively in one’s thinking.  It just doesn’t come naturally, even though we often think it does.
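The distinction can even be made mechanical: an argument form is valid just in case every truth assignment that satisfies the premises also satisfies the conclusion, regardless of how plausible that conclusion happens to sound.  Here’s a small sketch of my own to illustrate:

```python
from itertools import product

def valid(premises, conclusion, n_vars=2):
    """True iff every assignment satisfying all premises satisfies the conclusion."""
    return all(conclusion(*vals)
               for vals in product([True, False], repeat=n_vars)
               if all(p(*vals) for p in premises))

implies = lambda p, q: (not p) or q

# Affirming the consequent (invalid, no matter how plausible the conclusion):
#   premises: p -> q, q ; conclusion: p
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))   # False

# Modus ponens (valid regardless of content):
#   premises: p -> q, p ; conclusion: q
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))   # True
```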

Another cognitive deficit regarding how people analyze arguments irrationally is what I like to call the argument proximity effect.  Basically, when strong arguments are presented alongside weak arguments supporting the same conclusion, the strong arguments will often appear weaker or less persuasive because of their proximity or association with the weak ones.  This is partly because if a person thinks they can defeat the moderate or weak arguments, they will often believe that they could also defeat the stronger argument, if only they were given enough time to do so.  It is as if the strong arguments become “tainted” by the weaker ones, even though the arguments are logically independent of one another.  Indeed, when a person has a number of arguments to support their position, a combination of strong and weak arguments is always better than the strong arguments alone, because there are simply more arguments that need to be addressed and rebutted.  Another reason for this effect is that the presence of weak arguments weakens the credibility of the person presenting them, so the stronger arguments aren’t taken as seriously by the recipient.  Just as with our belief bias, this cognitive deficit is ultimately caused by not employing logic properly, if at all.  Making people aware of this cognitive flaw is only half the battle; once again, we also need to learn about logic and how to apply it effectively in our thinking.

To read part 3 of 3, click here.

Learning About Our Cognitive Biases: An Antidote to Irrational & Dogmatic Thinking (Part 1 of 3)

It is often surprising to think about the large number of people that still ascribe to dogmatic beliefs, despite our living in a post-enlightenment age of science, reason, and skeptical inquiry.  We have a plethora of scientific evidence and an enormous amount of acquired data illustrating just how fallacious various dogmatic belief systems are, as well as how dogmatism in general is utterly useless for gaining knowledge or making responsible decisions throughout one’s life.  We also have ample means of distributing this valuable information to the public through academic institutions, educational television programming, online scientific resources, books, etc.  Yet we still have an incredibly large number of people preferentially believing various dogmatic claims (and with a feeling of “certainty”) over those supported by empirical evidence and reason.  Furthermore, when some of these people are presented with the scientific evidence that refutes their dogmatic belief, they outright deny that the evidence exists or they simply rationalize it away.  This is a very troubling problem in our society, as this kind of muddled thinking often prevents many people from making responsible, rational decisions.  Irrational thinking in general has caused many people to vote for political candidates for all the wrong reasons.  It has caused many people to raise their own children in ways that promote intolerance and prejudice, and that undervalue, if not entirely abhor, invaluable intellectual virtues such as rational skepticism and critical thought.

Some may reasonably assume that (at least) many of these irrational belief systems and behaviors are simply a result of low self-esteem, inadequate education, and/or low intelligence, and it turns out that several dozen sociological studies have found evidence supporting this line of reasoning.  That is, a person with lower self-esteem, less education, and/or lower intelligence is more likely to be religious and dogmatic.  However, there is clearly much more to it than that, especially since plenty of people who hold these irrational belief systems are also well educated and intelligent.  Furthermore, every human being is quite often irrational in their thinking.  So what else could be contributing to this irrationality (and dogmatism)?  Well, the answer seems to lie in the realms of cognitive science and psychology.  I’ve decided to split this post into three parts, because there is a lot of information I’d like to cover.  Here’s the link to part 2 of 3, which can also be found at the end of this post.

Cognitive Biases

Human beings are very intelligent relative to most other species on this planet; however, we are still riddled with various flaws in our brains that drastically reduce our ability to think logically or rationally.  This isn’t surprising once one recognizes that our brains are the product of evolution, and thus they weren’t “designed” in any way at all, let alone to operate logically or rationally.  Instead, what we see in human beings is a highly capable and adaptable brain, yet one with a number of cognitive biases and shortcomings.  Though a large number (if not all) of these biases developed as a sub-optimal yet fairly useful survival strategy, specifically within the context of our evolutionary past, most of them have become a liability in our modern civilization, and they often impede our intellectual development as individuals and as a society.  A lot of these cognitive biases serve (or once served) as an effective way of making certain decisions rather quickly; that is, they have effectively served as heuristics for simplifying the decision-making process for many everyday problems.  While heuristics are valuable and often make our decision-making faculties more efficient, they are often far from optimal, because the increased efficiency is typically bought with a reduction in accuracy.  Again, this is exactly what we expect to find in products of evolution: a suboptimal strategy for solving some problem or accomplishing some goal, but one that has worked well enough to give the organism (in this case, human beings) an advantage over the competition within some environmental niche.

So what kinds of cognitive biases do we have exactly, and how many of them have cognitive scientists discovered?  A good list of them can be found here.  I’m going to mention a few of them in this post, specifically those that promote or reinforce dogmatic thinking, those that promote or reinforce an appeal to a supernatural world view, and ultimately those that seem to most hinder overall intellectual development and progress.  To begin, I’d like to briefly discuss Cognitive Dissonance Theory and how it pertains to the automated management of our beliefs.

Cognitive Dissonance Theory

The human mind doesn’t tolerate internal conflicts very well, so when we hold beliefs that contradict one another, or when we are exposed to new information that conflicts with an existing belief, some degree of cognitive dissonance results and we are effectively pushed out of our comfort zone.  The mind attempts to rid itself of this cognitive dissonance through a few different methods: a person may alter the importance of the original belief or of the new information, they may change the original belief, or they may seek evidence that is critical of the new information.  Generally, the easiest paths for the mind to take are the first and last methods, as it is far more difficult for a person to change their existing beliefs.  This is partly because a person’s existing set of beliefs is their only frame of reference when encountering new information, and there is an innate drive to maintain a feeling of familiarity and certainty with respect to our beliefs.

The primary problem with these automated cognitive dissonance reduction methods is that they tend to cause people to defend their beliefs in one way or another rather than to question them and try to analyze them objectively.  Unfortunately, this means that we often fail to consider new evidence and information using a rational approach, and this in turn can cause people to acquire quite a large number of irrational, false beliefs, despite having a feeling of certainty regarding the truth of those beliefs.  It is also important to note that many of the cognitive biases that I’m about to mention result from these cognitive dissonance reduction methods, and they can become compounded to create severe lapses in judgement.

Confirmation Bias, Semmelweis Reflex, and The Frequency Illusion

One of the most significant cognitive biases we have is what is commonly referred to as confirmation bias.  Basically, this bias refers to our tendency to seek out, interpret, or remember information in ways that confirm our existing beliefs.  To put it more simply, we often see only what we want to see.  It is a method our brain uses to sift through large amounts of information and piece it together with what we already believe into a meaningful storyline or explanation.  Just as we’d expect, it ultimately optimizes for efficiency over accuracy, and therein lies the problem.  Even though this bias applies to everyone, the dogmatist in particular has a larger cognitive barrier to overcome, because they’re also using a fallacious epistemological methodology right from the start (i.e. appealing to some authority rather than to reason and evidence).  So while everyone is affected by this bias to some degree, the dogmatic believer has an epistemological flaw that compounds the issue and makes matters worse.  They will, to a much higher degree, live their life unconsciously ignoring any evidence they encounter that refutes their beliefs (or fallaciously reinterpreting it in their favor), and they will rarely if ever actively seek out such evidence to try to disprove their beliefs.  A more rational thinker, on the other hand, has a belief system primarily based on a reliable epistemological methodology supported by empirical evidence, one that has been proven to work and to provide increasingly accurate knowledge better than any other method (i.e. the scientific method).  Nevertheless, everyone is affected by this bias, and because it operates outside the conscious mind, we all fail to notice it as it actively modifies our perception of reality.

One prominent example in our modern society, illustrating how this cognitive bias can reinforce dogmatic thinking (even among large groups of people), is the ongoing debate between Young Earth Creationists and the scientific consensus regarding evolutionary theory.  Even though the theory of evolution is a scientific fact (much like many other scientific theories, such as the theory of gravity or the germ theory of disease), and even though there are several hundred thousand scientists who can attest to its validity, along with a plethora of supporting evidence from a large number of scientific fields, Creationists seem to be in complete and utter denial of this reality.  Not only do they ignore the undeniable wealth of evidence presented to them, but they also misinterpret or cherry-pick evidence to suit their position.  This problem is compounded by the fact that their beliefs are supported only by fallacious tautologies [e.g. “It’s true because (my interpretation of) a book called the Bible says so…”] and other irrational arguments based on nothing more than a particular religious dogma and a lot of intellectual dishonesty.

I’ve had numerous debates with these kinds of people (and I used to BE one of them many years ago, so I understand how many of them arrived at these beliefs), and I’ll often quote several scientific sources and various logical arguments to support my claims (or to refute theirs).  It is often mind-boggling to witness their response, with the new information seemingly going in one ear and out the other without undergoing any mental processing or critical consideration.  It is as if they didn’t hear a single argument that was pointed out to them and merely executed some kind of rehearsed reflex.  The futility of the communication is often confirmed when one finds oneself having to repeat the same arguments and refutations over and over again, seemingly falling on deaf ears, with no logical rebuttal of the points made.  On a side note, this reflex-like response, where a person quickly rejects new evidence because it contradicts their established belief system, is known as the “Semmelweis reflex/effect”, but it appears to be just another form of confirmation bias.  Some of these cases of denial and irrationality are nothing short of ironic, since these people are living in a society that owes its technological advancements exclusively to rational, scientific methodologies, and they are patronizing and utilizing many of these benefits of science every day of their lives (even if they don’t realize it).  Yet we still find many of them hypocritically abhorring or ignoring science and reason whenever it is convenient to do so, such as when they try to defend their dogmatic beliefs.

Another prominent example of confirmation bias reinforcing dogmatic thinking relates to various other beliefs in the supernatural.  People who believe in the efficacy of prayer, for example, will tend to remember prayers that were “answered” and forget or rationalize away any prayers that weren’t, since their beliefs reinforce such a selective memory.  Similarly, if a plane crash or other disaster occurs, some people with certain religious beliefs will draw attention to the idea that their god or some supernatural force must have intervened to prevent the deaths of any survivors (if there are any), while downplaying or ignoring the many others who did not survive.  In the case of prayer, numerous studies have shown that prayer for another person (e.g. to heal them from an illness) is ineffective when that person doesn’t know they’re being prayed for; thus any efficacy of prayer has been shown to be a result of the placebo effect and nothing more.  It may be the case that prayer is one of the most effective placebos, largely due to the incredibly high level of belief in its efficacy and the fact that there is no expected time frame for it to “take effect”.  That is, since nobody knows when a prayer will be “answered”, even if the prayer isn’t “answered” for several months or even years (and time ranges like this are not unheard of among people who claim to have had prayers answered), confirmation bias will be even more effective than with a traditional medical placebo.  After all, a typical placebo pill or treatment is expected to work within a reasonable time frame comparable to other medicines or treatments taken in the past, but there’s no deadline for a prayer to be answered by.  It’s reasonable to assume that even if these people were shown the evidence that prayer is no more effective than a placebo (even if it is the most effective placebo available), they would reject it or rationalize it away, once again, because their beliefs require it to be so.  The same bias applies to numerous other purportedly paranormal phenomena, where people see significance in some event because their memory or perception is operating in ways that promote or sustain their beliefs.  As mentioned earlier, we often see what we want to see.

There is also a closely related bias known as congruence bias, which occurs because people rely too heavily on directly testing a given hypothesis and often neglect to test it indirectly.  For example, suppose that you were introduced to a new lighting device with two buttons on it, a green button and a red button.  You are told that the device will only light up by pressing the green button.  A direct test of this hypothesis would be to press the green button and see if it lights up.  An indirect test would be to press the red button and see if it doesn’t light up.  Congruence bias illustrates that we tend to avoid the indirect testing of hypotheses, and thus we can start to form irrational beliefs by mistaking correlation for causation, or by forming incomplete explanations of causation.  In the case of the aforementioned lighting device, it could be that both buttons cause it to light up, as the sketch below illustrates.  Think about how this congruence bias affects our general decision-making: when combined with our confirmation bias, we are inclined not only to reaffirm our beliefs but also to avoid trying to disprove them (since we tend to avoid indirect testing of those beliefs).  This attitude and predisposition reinforces dogmatism by assuming the truth of one’s beliefs and never trying to verify them in any rational, critical way.
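Here’s that hypothetical device in code, wired (for the sake of the example) so that both buttons light it up; the direct test alone “confirms” the green-button hypothesis, while the indirect test exposes it:

```python
def device_lights_up(button):
    return button in ("green", "red")   # hidden wiring: both buttons work

# Direct test: press green, see if it lights up.
print(device_lights_up("green"))  # True -> hypothesis looks confirmed

# Indirect test: press red, expecting no light if "only green works" is true.
print(device_lights_up("red"))    # True -> the hypothesis was wrong all along
```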

An interesting cognitive bias that often works in conjunction with a person’s confirmation bias is something referred to as the frequency illusion, whereby some detail of an event or some specific object may enter a person’s thoughts or attention, and then suddenly it seems that they are experiencing the object or event at a higher than normal frequency.  For example, if a person thinks about a certain animal, say a turtle, they may start noticing lots of turtles around them that they would have normally overlooked.  This person may even go to a store and suddenly notice clothing or other gifts with turtle patterns or designs on them that they didn’t seem to notice before.  After all, “turtle” is on their mind, or at least in the back of their mind, so their brain is unconsciously “looking” for turtles, and the person isn’t aware of their own unconscious pattern recognition sensitivity.  As a result, they may think that this perceived higher frequency of “seeing turtles” is abnormal and that it must be more than simply a coincidence.  If this happens to a person that appeals to the supernatural, their confirmation bias may mistake this frequency illusion for a supernatural event, or something significant.  Since this happens unconsciously, people can’t control this illusion or prevent it from happening.  However, once a person is aware of the frequency illusion as a cognitive bias that exists, they can at least reassess their experiences with a larger toolbox of rational explanations, without having to appeal to the supernatural or other irrational belief systems.  So in this particular case, we can see how various cognitive biases can “stack up” with one another and cause serious problems in our reasoning abilities and negatively affect how accurately we perceive reality.

To read part 2 of 3, click here.

Neuroscience Arms Race & Our Changing World View

At least since the time of Hippocrates, people have recognized that the brain is the physical correlate of consciousness and thought.  Since then, the fields of psychology, neuroscience, and several inter-related fields have emerged.  There have been numerous advancements within neuroscience during the last decade or so, and in that same time frame there has also been an increased interest in the social, religious, philosophical, and moral implications precipitating from such a far-reaching field.  Certainly the medical knowledge we’ve obtained from the neurosciences has been the primary benefit of such research efforts, as we’ve learned quite a bit more about how the brain works and how it is structured, and ongoing research in neuropathology has led to improvements in diagnosing and treating various mental illnesses.  However, it is the other side of neuroscience that I’d like to focus on in this post: the paradigm shift relating to how we are starting to see the world around us (including ourselves), and how this is affecting our goals as well as how we achieve them.

Paradigm Shift of Our World View

Aside from the medical knowledge we are obtaining from the neurosciences, we are also gaining new perspectives on what exactly the “mind” is.  We’ve come a long way in demonstrating that “mental” or “mind” states are correlated with physical brain states, and there is an ever growing plethora of evidence which suggests that these mind states are in fact caused by these brain states.  It should come as no surprise then that all of our thoughts and behaviors are also caused by these physical brain states.  It is because of this scientific realization that society is currently undergoing an important paradigm shift in terms of our world view.

If all of our thoughts and behaviors are mediated by our physical brain states, then many everyday concepts such as thinking, learning, personality, and decision making can take on entirely new meanings.  To illustrate this point, I’d like to briefly mention the well known “nature vs. nurture” debate.  The current consensus among scientists is that people (i.e. their thoughts and behavior) are ultimately products of both their genes and their environment.

Genes & Environment

From a neuroscientific perspective, the genetic component is accounted for by noting that genes have been shown to play a very large role in directing the initial brain wiring schema of an individual during embryological development and gestation.  During this time, the brain is developing very basic instinctual behavioral “programs”, which are physically constituted by vastly complex neural networks, and the body’s developing sensory organs and systems are connected to particular groups of these neural networks.  These complex neural networks, which have presumably been naturally selected to benefit the survival of the individual, continue to be constructed after gestation and throughout the individual’s entire ontogenetic development (albeit to lesser degrees over time).

As for the environmental component, this can be further split into two parts: the internal and the external environment.  The internal environment of the brain itself, including various chemical concentration gradients partly mediated by random Brownian motion, provides some constraints on gene expression and works alongside the genetic instructions to help direct neuronal growth, migration, and connectivity.  The external environment, consisting of various sensory stimuli, seems to modify this neural construction by providing inputs that may cause the constituent neurons within these neural networks to change their signal strength, change their action potential threshold, and/or modify their connections with particular neurons (among other possible changes).
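As a loose caricature of that kind of input-driven change (the update rule and numbers here are illustrative assumptions on my part; real synaptic plasticity is far richer), repeated co-activation might strengthen a connection until its input alone crosses the firing threshold:

```python
weight = 0.1        # initial connection strength
THRESHOLD = 1.0     # action potential threshold
RATE = 0.05         # learning rate for the toy update rule

for _ in range(30):              # repeated co-activation driven by external input
    pre, post = 1.0, 1.0         # pre- and post-synaptic activity
    weight += RATE * pre * post  # Hebbian-style strengthening

print(round(weight, 2))            # 1.6
print(weight * 1.0 >= THRESHOLD)   # True: the input alone now crosses threshold
```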

Causal Constraints

This combination of genetic instructions and environmental interaction and input produces a conscious, thinking, and behaving being through a large number of ongoing and highly complex hardware changes.  It isn’t difficult to imagine why these insights from neuroscience might modify our conventional views of concepts such as thinking, learning, personality, and decision making.  Prior to these developments over the last few decades, the brain was largely seen as a sort of “black box”, with its internal milieu and functional properties remaining mysterious and inaccessible.  Before that time, and indeed for millennia, many people assumed that our thoughts and behaviors were self-caused, or causa sui.  That is, people believed that they themselves (i.e. some causally free “consciousness”, or “soul”, etc.) caused their own thoughts and behavior, as opposed to those thoughts and behaviors being ultimately caused by physical processes (e.g. neuronal activity, chemical reactions, etc.).

Neuroscience (as well as biochemistry and its underlying physics) has shed a lot of light on this long-held assumption and, as it stands, the evidence has shown it to be false.  The brain is ultimately governed by the laws of physics, since no chemical reaction or neural event that physically produces our thoughts, choices, and behaviors has ever been shown to be causally free of these physical constraints.  I will mention that quantum uncertainty or quantum “randomness” (if ontologically random) does provide some possible freedom from strict physical determinism.  However, these findings from quantum physics do not provide any support for self-caused thoughts or behaviors.  Rather, they merely show that our physically constrained thoughts and behaviors may never be completely predictable by physical laws, no matter how much data is obtained.  In other words, our thoughts and behaviors are produced by highly predictable (though not necessarily completely predictable) physical laws and constraints, along with some possible random causal factors.

As a result of these physical causal constraints, the conventional perspective of an individual having classical free will has been shattered.  Our traditional views of human attributes including morality, choices, ideology, and even individualism are continuing to change markedly.  Not surprisingly, there are many people uncomfortable with these scientific discoveries, including members of various religious and ideological groups that are largely based upon, and thus depend on, presuppositions such as classical free will and moral responsibility.  The evidence compiling from the neurosciences is in fact showing that while people are causally responsible for their thoughts, choices, and behavior (i.e. an individual’s thoughts and subsequent behavior are constituents of a causal chain of events), they are not morally responsible in the sense of having been able to choose to think or behave differently than they did, for their thoughts and behavior are ultimately governed by physically constrained neural processes.

New World View

Now I’d like to return to what I mentioned earlier and consider how these insights from neuroscience may be drastically modifying how we look at concepts such as thinking, learning, personality, and decision making.  If our brain is operating via these neural network dynamics, then conscious thought appears to be produced by a particular subset of these neural network configurations and processes.  So as we continue to learn how to more directly control or alter these neural network arrangements and processes (above and beyond simply applying electrical potentials to certain neural regions in order to bring memories or other forms of imagery into consciousness, as we’ve done in the past), we should be able to control thought generation from a more “bottom-up” approach.  Neuroscience is definitely heading in this direction, although there is a lot of work to be done before we have any considerable knowledge of and control over such processes.

Likewise, learning seems to consist of a certain type of neural network modification (involving memory), leading to changes in causal pattern recognition (among other things) that result in our ability to more easily achieve our goals over time.  We’ve typically thought of learning as the successful input, retention, and recall of new information, and we have been achieving this “learning” process through the input of environmental stimuli via our sensory organs and systems.  In the future, it may be possible, as with the aforementioned bottom-up thought generation, to physically modify our neural networks to directly implant memories and causal pattern recognition information in order to “learn” without any actual sensory input, and/or we may eventually be able to “upload” information in a way that bypasses the typical sensory pathways, potentially allowing us to catalyze the learning process in unprecedented ways.

If we are one day able to directly control the neural configurations and processes that give rise to specific thoughts and learned information, then there is no reason to think we couldn’t also modify our personalities, our decision-making abilities and “algorithms”, and so on.  In a nutshell, we may be able to modify any aspect of “who” we are in extraordinary ways (whether this is a “good” or “bad” thing is another issue entirely).  As we learn more about the genetic components of these neural processes, we may also be able to use genetic engineering techniques to assist with the neural modifications required to achieve these goals.  The bottom line is that people are products of their genes and environment, and by manipulating both of those causal constraints in more direct ways (e.g. through neuroscientific techniques), we may attain previously unattainable abilities, perhaps in a remarkably short amount of time.  It goes without saying that these methods would also significantly affect our evolutionary course as a species, allowing us to enter new landscapes through a substantially enhanced ability to adapt.  This might be realized by finding ways to improve our intelligence, memory, or other cognitive faculties, effectively giving us the ability to engineer or re-engineer our brains as desired.

Neuroscience Arms Race

We can see that increasing our knowledge and capabilities within the neurosciences has the potential to produce drastic societal changes, some of which are already starting to be realized.  The impact these fields will have on how we approach criminal, violent, or otherwise undesirable behavior cannot be overstated.  Trying to correct these problems by targeting the neural or cognitive substrates that underlie them, rather than by the less direct, external means (e.g. social engineering methods) we’ve relied on thus far, may lead to solutions that are both far less expensive and far faster to realize.

As with any scientific discovery and the technologies that follow from it, neuroscience has the power to bestow both benefits and harms.  I’m reminded of the ground-breaking work in nuclear physics several decades ago, whereby physicists not only gained precious information about subatomic particles and their binding energies but also learned how to release enormous amounts of energy through nuclear fission and fusion reactions.  It wasn’t long before those breakthroughs were used to create the first atomic bombs.  Likewise, while our increasing knowledge of neuroscience can help society by optimizing our brain function and behavior, it can also be used by various entities to manipulate the populace for unethical ends.

For example, despite the many free-market proponents who claim that the economy need not be regulated by anything other than rational consumers and their choices of goods and services, corporations have clearly increased their use of marketing strategies that exploit common human irrational tendencies, whether “buy one get one free” offers or “sales” on items whose prices were artificially raised beforehand (an item that normally sells for $40 can be relisted at $80 and then marked “50% off”, creating the illusion of a bargain at the original price).  Politicians and other leaders use similar tactics, exploiting voters’ emotional vulnerabilities on controversial issues that serve as little more than ideological distractions, thereby dulling awareness and rational analysis of the more pressing issues.

These various entities are already making research and development efforts to exploit findings from neuroscience so as to gain greater influence over people’s decisions (whether consumers’ purchases, votes, etc.).  To give one example, it is believed that MRI (magnetic resonance imaging) or fMRI (functional MRI) brain scans may eventually reveal useful details about a person’s personality and their innate or conditioned tendencies (including compulsive or addictive tendencies, preferences for certain foods or behaviors, etc.).  Such a capability (if realized) would allow marketers to maximize the dollars squeezed out of each consumer by optimizing which goods and services are offered and how they are advertised.  We have already seen how tracked internet purchases personalize the advertisements we encounter online (e.g. if you buy fishing gear online, you may subsequently notice more advertisements and pop-ups for fishing-related goods and services).  If possible, information gleaned from these types of “brain probing” methods could be applied to other areas as well, including political decision making.
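The purchase-tracking pattern just mentioned is simple enough to sketch.  The following Python toy, with entirely made-up product names and categories, ranks ads by a crude “interest profile” tallied from a hypothetical purchase log; real ad-targeting systems are vastly more sophisticated, but the principle is the same.

```python
from collections import Counter

# Hypothetical tracked purchases, reduced to product categories.
purchase_log = ["fishing", "fishing", "camping", "fishing", "books"]

# Hypothetical ad inventory, each ad tagged with a category.
ad_inventory = {
    "discount fishing rods": "fishing",
    "tent clearance sale":   "camping",
    "new thriller releases": "books",
    "kitchen gadgets":       "cooking",
}

# Crude interest profile: just count category occurrences.
affinity = Counter(purchase_log)

# Rank ads by how well they match the profile.
ranked = sorted(ad_inventory.items(),
                key=lambda ad: affinity[ad[1]],
                reverse=True)

for ad, category in ranked:
    print(f"{ad!r} (score {affinity[category]})")
# The fishing ad ranks first -- exactly the pop-up pattern noted above.
```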

While these neuroscience-derived methods may be beneficial in some cases, for instance by giving consumers more automated access to products they may need or want (which will likely be the selling point corporations use to win consumer approval of such methods), they will also exacerbate unsustainable consumption and other personally or societally destructive tendencies, and they are likely to further erode (or eliminate) whatever rational decision-making capabilities we still have left.

Final Thoughts

As we can see, neuroscience has the potential to completely change the way we look at the world, and is already starting to.  Further advancements in these fields will likely redefine many of our goals as well as how we achieve them.  They may also allow us to solve many problems we face as a species, far beyond simply curing mental illnesses and ailments.  The main question that comes to mind is:  Who will win the neuroscience arms race?  Will it be the humanitarians, scientists, and medical professionals who are striving to accumulate knowledge in order to solve the problems of individuals and societies and to increase their quality of life?  Or will it be the entities accumulating similar knowledge in order to exploit human weaknesses for the purposes of gaining wealth and power, thus exacerbating the problems we currently face?