“Primordial Bounds”

Animating forces danced in the abyss
From cosmic clouds did Helios arise
Upon the sands of Gaia shining bright
Warming seas to hold her in embrace
Effervescent stirrings in the depths below
From whence primordial bounds emerged
Tendril seeds could then begin to grow
Entropic channels out of chaos born
Spreading far and wide, were favored so
The spark of life ignited, on it goes
Destined imperfections bringing forth
Beauty and a mass of creatures flow

Senses born, interpretations formed
Rosetta stone with energetic tone
Neuronal trees to carve the world of one
Integrated symphony, the source of all divinity
Then the eye began to gaze within
Branches twisted, turned, and formed the self
Archetypal images and dreams to undergo
Ego and unconsciousness, a battle for the soul
Psyches feeding culture with the food of all the gods
Imagination, future selves, to suffer and rejoice
Cogitation flowing, individuation growing
‘Til cybernetic unity subsumes the human story

Irrational Man: An Analysis (Part 4, Chapter 11: The Place of the Furies)

In my last post in this series on William Barrett’s Irrational Man, I examined some of the work of the existential philosopher Jean-Paul Sartre, which concluded Part 3 of Barrett’s book.  The final chapter, Ch. 11: The Place of the Furies, will be briefly examined here, and this will conclude my eleven-part series.  I’ve enjoyed this very much, but it’s time to move on to new areas of interest, so let’s begin.

1. The Crystal Palace Unmanned

“The fact is that a good dose of intellectualism-genuine intellectualism-would be a very helpful thing in American life.  But the essence of the existential protest is that rationalism can pervade a whole civilization, to the point where the individuals in that civilization do less and less thinking, and perhaps wind up doing none at all.  It can bring this about by dictating the fundamental ways and routines by which life itself moves.  Technology is one material incarnation of rationalism, since it derives from science; bureaucracy is another, since it aims at the rational control and ordering of social life; and the two-technology and bureaucracy-have come more and more to rule our lives.”

Regarding the importance and need for more intellectualism in our society, I think this can be better described as the need for more critical thinking skills and the need for people to be able to discern fact from fiction, to recognize their own cognitive biases, and to respect the adage that extraordinary claims require extraordinary evidence.  At the same time, in order to appreciate the existentialist’s concerns, we ought to recognize that there are aspects of human psychology including certain psychological needs that are inherently irrational, including with respect to how we structure our lives, how we express our emotions and creativity, how we maintain a balance in our psyche, etc.  But, since technology is not only a material incarnation of rationalism, but also an outlet for our creativity, there has to be a compromise here where we shouldn’t want to abandon technology, but simply to keep it in check such that it doesn’t impede our psychological health and our ability to live a fulfilling life.

“But it is not so much rationalism as abstractness that is the existentialists’ target; and the abstractness of life in this technological and bureaucratic age is now indeed something to reckon with.  The last gigantic step forward in the spread of technologism has been the development of mass art and mass media of communication: the machine no longer fabricates only material products; it also makes minds. (stereotypes, etc.).”

Sure enough, we’re living in a world where many of our occupations are but one of many layers of abstraction constituting our modern “machine of civilization”.  And the military-industrial complex that has taken over the modern world has certainly gone beyond the mass production of physical stuff to be consumed by the populace, and now includes the mass production and viral dissemination of memes as well.  Ideas can spread like viruses, and in our current globally interconnected world (which Barrett hadn’t yet seen to the same degree when writing this book), the spread of these ideas is much faster and more influential on culture than ever before.  The degree of indoctrination, and the perpetuated cycles of co-dependence between citizens and the corporatocratic, sociopolitical forces ruling our lives from above, have made our way of life and our thinking much more collective and less personal than at any other time in human history.

“Kierkegaard condemned the abstractness of his time, calling it an Age of Reflection, but what he seems chiefly to have had in mind was the abstractness of the professorial intellectual, seeing not real life but the reflection of it in his own mind.”

Aside from the increasingly abstract nature of modern living, then, there’s also the abstractness that pervades our thinking about life, which detracts from our ability to actually experience life.  Kierkegaard had a legitimate point here in noting that theory cannot be a complete substitute for practice, and that thought cannot be a complete substitute for action.  We certainly don’t want the reverse either, since action without sufficient forethought leads to foolishness and bad consequences.

I think the main point here is that we don’t want to miss out on living life by thinking about it too much.  While living a philosophically examined life is beneficial, it remains an open question exactly what balance is best for any particular individual to live the most fulfilling life.  In the meantime, we ought simply to recognize that there is the potential for an imbalance, and try our best to avoid it.

“To be rational is not the same as to be reasonable.  In my time I have heard the most hair-raising and crazy things from very rational men, advanced in a perfectly rational way; no insight or feelings had been used to check the reasoning at any point.”

If you ignore our biologically-grounded psychological traits, or ignore the fact that there’s a finite range of sociological conditions for achieving human psychological health and well-being, then you can’t develop any theory that’s supposed to apply to humans and expect it to be reasonable or tenable.  I would argue that ignoring this subjective part of ourselves when making theories that are supposed to guide our behavior in any way is irrational, at least within the greater context of aiming to have not only logically valid arguments but logically sound arguments as well.  But, if we’re going to exclude the necessity for logical soundness in our conception of rationality, then the point is well taken.  Rationality is a key asset in responsible decision making but it should be used in a way that relies on or seeks to rely on true premises, taking our psychology and sociology into account.

“The incident (making hydrogen bombs without questioning why) makes us suspect that, despite the increase in the rational ordering of life in modern times, men have not become the least bit more reasonable in the human sense of the word.  A perfect rationality might not even be incompatible with psychosis; it might, in fact, even lead to the latter.”

Again, I disagree with Barrett’s use or conception of rationality here, but semantics aside, his main point still stands.  I think that we’ve primarily been using our intelligence as a species to amplify our own power and maximize our ability to manipulate the environment in any way we see fit.  But as our technological capacity advances, our cultural evolution is getting increasingly out of sync with our biological evolution, and we haven’t come anywhere close to sufficiently taking our psychological needs and limitations into account as we continue to engineer the world of tomorrow.

What we need is rationality that relies on true or at least probable premises, as this combination should actually lead a person to what is reasonable or likely to be reasonable.  I have no doubt that without making use of a virtue such as reasonableness, rationality can become dangerous and destructive, but this is the case with every tool we use whether it’s rationality or other capacities both mental and material; tools can always be harmful when misused.

“If, as the Existentialists hold, an authentic life is not handed to us on a platter but involves our own act of self-determination (self-finitization) within our time and place, then we have got to know and face up to that time, both in its (unique) threats and its promises.”

And our use of rationality on an individual level should be used to help reach this goal of self-finitization, so that we can balance the benefits of the collective with the freedom of each individual that makes up that collective.

“I for one am personally convinced that man will not take his next great step forward until he has drained to the lees the bitter cup of his own powerlessness.”

And when a certain kind of change is perceived as something to be avoided, the only way to go through with it is to perceive the status quo as something that needs to be avoided even more, so that a foray out of our comfort zone is perceived as an actual improvement to our way of life.  But as long as we see ourselves as already all-powerful and masters over our domain, we won’t take any major leap in a new direction of self and collective improvement.  We need to come to terms with our current position in the modern world so that we can truly see what needs repair.  Whether or not we succumb to one or both of the two major existential threats facing our species, climate change and nuclear war, is contingent on whether or not we set our eyes on a new prize.

“Sartre recounts a conversation he had with an American while visiting in this country.  The American insisted that all international problems could be solved if men would just get together and be rational; Sartre disagreed and after a while discussion between them became impossible.  “I believe in the existence of evil,” says Sartre, “and he does not.” What the American has not yet become aware of is the shadow that surrounds all human Enlightenment.”

Once again, if rationality is accompanied by true premises that take our psychology into account, then international problems could be solved (or many of them at least); but Sartre is also right insofar as there are bad ideas that exist, and people that have cultivated their lives around them.  It’s not enough to have people thinking logically, nor is some kind of rational idealism up to the task of dealing with human emotion, cognitive biases, psychopathy, and the other complications in human behavior that exist.

The crux of the matter is that some ideas hurt us and other ideas help us, with our perception of these effects being the main arbiter driving our conception of what is good and evil; but there’s also a disparity between what people think is good or bad for them and what is actually good or bad for them.  I think that one could very plausibly argue that if people really knew what was good or bad for them, then applying rationality (with true or plausible premises) would likely work to solve a number of issues plaguing the world at large.

“…echoing the Enlightenment’s optimistic assumption that, since man is a rational animal, the only obstacles to his fulfillment must be objective and social ones.”

And here’s an assumption that’s certainly difficult to ground, since it’s based on a false premise, namely that humans are inherently rational.  We are unique in the animal kingdom in the sense that we are the only animal (or one of only a few) with the capacity for rational thought, foresight, and the complex level of organization that its use makes possible.  I also think that the obstacles to fulfillment are objective, since they can be described as facts pertaining to our psychology, sociology, and biology, even if our psychology (for example) is instantiated in a subjective way.  In other words, our subjectivity and conscious experiences are grounded on, or describable in, objective terms relating to how our particular brains function, how humans as a social species interact with one another, etc.  But fulfillment can never be reached, let alone maximized, without taking our psychological traits and idiosyncrasies into account, for these are the ultimate constraints on what can make us happy, satisfied, fulfilled, and so on.

“Behind the problem of politics, in the present age, lies the problem of man, and this is what makes all thinking about contemporary problems so thorny and difficult…anyone who wishes to meddle in politics today had better come to some prior conclusions as to what man is and what, in the end, human life is all about…The speeches of our politicians show no recognition of this; and yet in the hands of these men, on both sides of the Atlantic, lies the catastrophic power of atomic energy.”

And as of 2018, we’ve seen the Doomsday Clock reach two minutes to midnight, having inched one minute closer to our own destruction since 2017.  The dominance hierarchy led and reinforced by the corporatocratic plutocracy is locked into a narrow form of tunnel vision, hell-bent on maintaining if not exacerbating the wealth and power disparities that plague our country and the world as a whole, despite the fact that this is not sustainable in the long run, nor best for the fulfillment of those promoting it.

We the people do share a common goal of trying to live a good, fulfilling life; to have our basic needs met, and to have no fewer rights than anybody else in society.  You’d hardly know that this common ground exists between us when looking at the state of our political sphere, which is likely as polarized now in the U.S. (if not more so) as it was even during the Civil War.  Clearly, we have a lot of work to do to reevaluate what our goals and priorities ought to be, and we need a realistic and informed view of what it means to be human before any of these goals can be realized.

“Existentialism is the counter-Enlightenment come at last to philosophic expression; and it demonstrates beyond anything else that the ideology of the Enlightenment is thin, abstract, and therefore dangerous.”

Yes, but the Enlightenment has also been one of the main driving forces leading us out of theocracy, out of scientific illiteracy, and towards an appreciation of reason and evidence (something the U.S. at least, is in short supply of these days), and thus it has been crucial in giving us the means for increasing our standard of living, and solving many of our problems.  While the technological advancements derived from the Enlightenment have also been a large contributor to many of our problems, the current existential threats we face including climate change and nuclear war are more likely to be solved by new technologies, not an abolition of technology nor an abolition of the Enlightenment-brand of thinking that led to technological progress.  We simply need to better inform our technological goals of the actual needs and constraints of human beings, our psychology, and so on.

“The finitude of man, as established by Heidegger, is perhaps the death blow to the ideology of the Enlightenment, for to recognize this finitude is to acknowledge that man will always exist in untruth as well as truth.  Utopians who still look forward to a future when all shadows will be dispersed and mankind will dwell in a resplendent Crystal Palace will find this recognition disheartening.  But on second thought, it may not be such a bad thing to free ourselves once and for all from the worship of the idol of progress; for utopianism-whether the brand of Marx or of Nietzsche-by locating the meaning of man in the future leaves human beings here and now, as well as all mankind up to this point, without their own meaning.  If man is to be given meaning, the Existentialists have shown us, it must be here and now; and to think this insight through is to recast the whole tradition of Western thought.”

And we ought to take our cue from Heidegger, at the very least, to admit that we are finite, our knowledge is limited, and it always will be.  We will not be able to solve every problem, and we would do ourselves and the world a lot better if we admitted our own limitations.  But to avoid being overly cynical and simply damning progress altogether, we need to look for new ways of solving our existential problems.  Part of the solution that I see for humanity moving forward is going to be a combination of advancements in a few different fields.

By making better use of genetic engineering, we’ll one day have the ability to change ourselves in remarkable ways in order to become better adapted to our current world.  We will be able to re-sync our biological evolution with our cultural evolution so we no longer feel uprooted, like a fish out of water.  Continuing research in neuroscience will allow us to learn more about how our brains function and how to optimize that functioning.  Finally, the strides we make in computing and artificial intelligence should allow us to vastly improve our simulation power and arm us with greater intelligence for solving all the problems that we face.

Overall, I don’t see progress as the enemy, but rather that we have an alignment problem between our current measures of progress, and what will actually lead to maximally fulfilling lives.

“The realization that all human truth must not only shine against an enveloping darkness, but that such truth is even shot through with its own darkness may be depressing, and not only to utopians…But it has the virtue of restoring to man his sense of the primal mystery surrounding all things, a sense of mystery from which the glittering world of his technology estranges him, but without which he is not truly human.”

And if we actually use technology to change who we are as human beings, by altering the course of natural selection and our ongoing evolution (which is bound to happen with or without our influence, for better or worse), then it’s difficult to say what the future really holds for us.  There are cultural and technological forces that are leading to transhumanism, and this may mean that one day “human beings” (or whatever name is chosen for the new species that replaces us) will be inherently rational, or whatever we’d like our future species to be.  We’ve stumbled upon the power to change our very nature, and so it’s far too naive, simplistic, unimaginative, and short-sighted to say that humans will “always” or “never” be one way or another.  Even if this were true, it wouldn’t negate the fact that one day modern humans will be replaced by a superseding species which has different qualities than what we have now.

2. The Furies

“…Existentialism, as we have seen, seeks to bring the whole man-the concrete individual in the whole context of his everyday life, and in his total mystery and questionableness-into philosophy.”

This is certainly an admirable goal to combat simplistic abstractions of what it means to be human.  We shouldn’t expect to be able to abstract a certain capacity of human beings (such as rationality), consider it in isolation (no consideration of context), formulate a theory around that capacity, and then expect to get a result that is applicable to human beings as they actually exist in the world.  Furthermore, all of the uncertainties and complexities in our human experiences, no matter how difficult they may be to define or describe, should be given their due consideration in any comprehensive philosophical system.

“In modern philosophy particularly (philosophy since Descartes), man has figured almost exclusively as an epistemological subject-as an intellect that registers sense-data, makes propositions, reasons, and seeks the certainty of intellectual knowledge, but not as the man underneath all this, who is born, suffers, and dies…But the whole man is not whole without such unpleasant things as death, anxiety, guilt, fear and trembling, and despair, even though journalists and the populace have shown what they think of these things by labeling any philosophy that looks at such aspects of human life as “gloomy” or “merely a mood of despair.”  We are still so rooted in the Enlightenment-or uprooted in it-that these unpleasant aspects of life are like Furies for us: hostile forces from which we would escape (the easiest way is to deny that the Furies exist).”

My take on all of this is simply that multiple descriptions of human existence are needed to account for all of our experiences, thoughts, values, and behavior.  And it is what we value given our particular subjectivity that needs to be primary in these descriptions, and primary with respect to how we choose to engineer the world we live in.  Who we are as a species is a complex mixture of good and bad, lightness and darkness, and stability and chaos; and we shouldn’t deny any of these attributes nor repress them simply because they make us uncomfortable.  Instead, we would do much better to face who we are head on, and then try to make our lives better while taking our limitations into account.

“We are the children of an enlightenment, one which we would like to preserve; but we can do so only by making a pact with the old goddesses.  The centuries-long evolution of human reason is one of man’s greatest triumphs, but it is still in process, still incomplete, still to be.  Contrary to the rationalist tradition, we now know that it is not his reason that makes man man, but rather that reason is a consequence of that which really makes him man.  For it is man’s existence as a self-transcending self that has forged and formed reason as one of its projects.”

This is well said, although I’d prefer to couch it in different terms: it is ultimately our capacity to imagine that makes us human and able to transcend our present selves by pointing toward a future self and a future world.  We do this in part by updating our models of the world, simulating new worlds, and trying to better understand the world we live in by engaging with it, re-shaping it, and finding ways of better predicting its causal structure.  Reason and rationality have been integral to updating our models of the world, but they’ve also been hijacked to some degree by a kind of supernormal stimulus, reinforced by technology and by our living in a world that is entirely alien to our evolutionary niche, which has largely dominated our lives in a way that overlooks our individualism, emotions, and feelings.

Abandoning reason and rationality is certainly not the answer to this existential problem; rather, we just need to ensure that they’re being applied in a way that aligns with our moral imperatives and that they’re ultimately grounded on our subjective human psychology.

Irrational Man: An Analysis (Part 3, Chapter 10: Sartre)

In the previous post from this series on William Barrett’s Irrational Man, I delved into some of Martin Heidegger’s work, particularly his philosophy as represented in his most seminal work, Being and Time.  Jean-Paul Sartre was heavily influenced by Heidegger and so it’s fortunate for us that Barrett explores him in the very next chapter.  As such, Sartre will be the person of interest in this post.

To get a better sense of Sartre’s philosophy, we should start by considering what life was like in the French Resistance from 1940-45.  Sartre’s idea of human freedom from an existentialist perspective is made clear in his The Republic of Silence where he describes the mode of human existence for the French during the Nazi German occupation of France:

“…And because of all this we were free.  Because the Nazi venom seeped into our thoughts, every accurate thought was a conquest.  Because an all-powerful police tried to force us to hold our tongues, every word took on the value of a declaration of principles.  Because we were hunted down, every one of our gestures had the weight of a solemn commitment…”

I think we can interpret this as describing the brute authenticity of the most salient features of our existence that can result in the face of severe forces of oppression.  By living knee-deep in an oppressive atmosphere with little if any libertarian freedom, one may inevitably reevaluate what one considers to be most important; one may exercise a great deal more care in deciding what to think about, what to say, or what to do at any given time; and all of this may serve to make that which is thought about, said, or done far more meaningful and free of the noise and superficiality that usually accompany our day-to-day goings-on.  Putting it this way is not to diminish the heinousness of any kind of violent oppression, but merely to point out that oppression can make one consider every part of one’s existence with a lot more care and concern, taking absolutely nothing for granted while living in such a precarious position.

And when standing in the face of death day after day, not only is the radical contingency of human existence made more clear than ever, but so is that which is intrinsically most valuable to us:

“Exile, captivity, and especially death (which we usually shrink from facing at all in happier days) became for us the habitual objects of our concern…And the choice that each of us made of his life was an authentic choice because it was made face to face with death, because it could always have been expressed in these terms: ‘Rather death than…’”

This is certainly a powerful way of defining authenticity, or at least setting its upper bound; by saying that you’d rather die than abstain from some particular thought, speech, or action is to make a declaration of what’s truly the most meaningful to any individual given their current knowledge and the conditions of their own existence; and this is true even if their thoughts or actions are immoral or uninformed, for nothing matters other than the fact that one has done so with an unparalleled amount of care and personal significance.

While all of this was going on, the French government and the bourgeoisie were seemingly powerless, undetermined, and incapable of any significant recourse:

“…’Les salauds’ became a potent term for Sartre in those days-the salauds, the stinkers, the stuffy and self-righteous people congealed in the insincerity of their virtues and vices.  This atmosphere of decay breathes through Sartre’s first novel, Nausea, and it is no accident that the quotation on the flyleaf is from Celine, the poet of the abyss, of the nihilism and disgust of that period.  The nausea in Sartre’s book is the nausea of existence itself; and to those who are ready to use this as an excuse for tossing out the whole of Sartrian philosophy, we may point out that it is better to encounter one’s existence in disgust than never to encounter it at all-as the salaud in his academic or bourgeois or party-leader strait jacket never does.”

The sheer lack of personal commitment and passion plaguing so many would certainly help to reinforce a feeling of disgust or even propagate feelings of nihilism and helplessness.  And the kind of superficiality that Sartre perceived around him was (and of course, still is) in so many ways the norm; it’s the general way that people conduct themselves, and not only socially but even personally in the sense that one’s view of themselves and who they ought to be is heavily and externally imposed from collective norms and expectations.

“The essential freedom, the ultimate and final freedom that cannot be taken from a man, is to say No.  This is the basic premise in Sartre’s view of human freedom: freedom is in its very essence negative, though this negativity is also creative.”

This is an interesting conception, thinking of the essence of freedom as having a negative character, although perhaps we can compare this to or analogize this with the general concept of negative liberty (freedom from interference from others) as opposed to positive liberty (having the means and freedom to achieve one’s goals).  Since there are a number of contingencies that can prevent positive liberty from being realized, negative liberty (which could include the freedom to say ‘No’) seems to be the more fundamental or basic of the two.  It makes sense to think of this as the final freedom too, as Sartre does, since physical force and contingency can take away anyone’s ability to do all else, except to say ‘No’ or at the very least to think ‘No’ (in the event that one is coerced to say ‘Yes’).  In other words, one can have a number of freedoms taken away, but nobody can take away your freedom to desire one outcome over another, even if that desire can never be satisfied.

At this point it’s also worth mentioning that Sartre’s writings seem to reflect conflicting views on what constitutes human freedom.  His views on freedom may have changed over the course of his written works or he may just have had two different conceptions in mind.  On the one hand, in his earlier works, he seemed to refer to freedom in an ontological sense, as a property of human consciousness.  In this conception, he seemed to view freedom as the ability of a conscious agent to choose the attitude they hold and how they react toward the circumstances they find themselves in.  This would mean that everybody, even a prisoner held captive, is free insofar as they have the freedom to choose if they’ll accept or resist their situation.  Later on, Sartre began to focus on material freedom, that is, freedom from coercion.  In any case, both of these conceptions of freedom still seem to resonate with the concept of negative liberty mentioned earlier.  The only distinction would be that ontological freedom is the one form of negative liberty that is truly inalienable and final:

“Where all the avenues of action are blocked for a man, this freedom may seem a tiny and unimportant thing; but it is in fact total and absolute, and Sartre is right to insist upon it as such, for it affords man his final dignity, that of being man.”

Accompanying this equation of human freedom with consciousness (or at least with conscious will) is the consideration that consciousness is the only epistemological certainty we have:

“He (Descartes) proposes to reject all beliefs so long as they can in any way be doubted, to resist all temptations to say Yes until his understanding is convinced according to its own light; so he rejects belief in the existence of an external world, of minds other than his own, of his own body, of his memories and sensations.  What he cannot doubt is his own consciousness, for to doubt is to be conscious, and therefore by doubting its existence he would affirm it.”

Sartre takes this Cartesian position to the nth degree, by constraining his interpretation of human beings with his interpretation of Cartesian skepticism:

“But before this certitude shone for him (Descartes), (and even after it, before he passed on to other truths), he was a nothingness, a negativity, existing outside of nature and history, for he had temporarily abolished all belief in a world of bodies and memories.  Thus man cannot be interpreted, Sartre says, as a solid substantial thing existing amid the plenitude of things that make up a world; he is beyond nature because in his negative capability he transcends it.  Man’s freedom is to say No, and this means that he is the being by whom nothingness comes into being.”

To conclude that one is in fact a nothingness, even within the context of Cartesian doubt is, I think, going a bit too far.  While I can agree that consciousness in many ways transcends physical objects and the kind of essence we ascribe to them, it doesn’t transcend existence itself; and what is a nothingness really other than a lack of existence?

And one can think of consciousness as having some attribute of negativity, in the sense that we can think of things and even think of ourselves as being not this or not that; we can think of our desires in terms of what we don’t want; but I think this quality of negation can only serve to differentiate our conscious will (or perhaps ego) from the rest of our experience.  Since our consciousness can’t negate itself, I don’t think that this quality can negate us such that we become an actual nothingness.

“For Sartre there is no unalterable structure of essences or values given prior to man’s own existence.  That existence has meaning, finally, only as the liberty to say No, and by saying No to create a world.  If we remove God from the picture, the liberty which reveals itself in the Cartesian doubt is total and absolute; but thereby also the more anguished, and this anguish is the irreducible destiny and dignity of man.”

And here we see that common element within existentialism: the idea that existence precedes essence, where human beings are thought to have no inherent essence but only an essence that is derived from one’s own existence and one’s own consciousness.  Going back to Sartre’s conception of human freedom, perhaps the idea of saying No to create a world is grounded on the principle of our setting boundaries or limits to the world we make for ourselves; an idea that I can certainly get behind.

However, even if we remove God from the equation, which is certainly justified from the perspective of Cartesian skepticism, not to mention the lack of empirical evidence to support such a belief, we still can’t get around our having some essential human character or qualities (even if no God existed to conceive of such an essence).  While I agree that it shouldn’t be up to others to choose which of the possible lives you or I can have we should live, nor what its ultimate meaning or purpose is, if we agree that there are a finite number of possible lives to choose from (individually and collectively as a species), given our constraints as finite beings (a thoroughly existential claim at that), then I think we can say that we do have an inherent essence defined by these biological and physical limitations.

So I do share Sartre’s view that our lives shouldn’t be defined by others, by social norms, nor defined by qualities found in only some of the possible lives one has access to; but it needs to be said that there are still only a finite number of possible life trajectories which will necessarily exclude some of the possibilities given to other types of organisms, and which will exclude any options that are not physically or logically possible.  In other words, I think human beings do have an inherent essence that is defined to some degree by the possibility space we as a species exist within; we just tend to think of essence as something that has to be well-defined, like some short list of properties or functions (which I think is too limited a view).

“Thus Sartre ends by allotting to man the kind of freedom that Descartes has ascribed only to God.  It is, he says, the freedom Descartes secretly would have given to man had he not been limited by the theological convictions of his time and place.”

And this is certainly a valid point.  What’s interesting to me with respect to the whole Cartesian experiment was that Descartes began by doubting the external world, and only by introducing an ad hoc conception of a deity, a benevolent one at that, could he trust that his senses weren’t deceiving him.  This is despite the fact that this belief in God would be just as likely to be a deception as any other belief (and so Descartes attempted to solve this problem through invalid circular reasoning).

More to the point however is the fact that we wouldn’t be able to tell the difference between a “real” external world and one that is imagined or illusory (unbeknownst to us) anyway, and so we might as well simply stick with the fewest assumptions that yield the best predictions of future experiences.  And this includes the assumption of our having some degree of human freedom that directs our conscious will and desires; a view of our having some control over what trajectory we take in life:

“He is not subordinate to a realm of essences: rather, He creates essences and causes them to be what they are.  Hence such a God transcends the laws of logic and mathematics.  As His (God’s) existence precedes all essences, so man’s existence precedes his essence: he exists, and out of the free project which his existence can be he makes himself what he is.”

It’s worth considering the psychological motivations for positing and defining a God that has qualities reminiscent of human creativity and teleological intentionality; that is to say, we often project our human capacity for creation and goal-directedness onto some idealized conception of a being that accounts for our existence, in the same way that we see ourselves as accounting for the existence of any and all human constructs, be they watches, paintings, or entire cities.  We see ourselves giving purpose and functional descriptions to the things we make and then can’t help but ask what our own purposes may be; and in asking, we feel inclined to assume that it must be someone other than ourselves who gives us this purpose, finally arriving at a God-concept of some kind.  And lo and behold, throughout history the gods we see in all the religions have been remarkably human with their anger, jealousy, pettiness, impulsiveness, and retributive sense of justice, thus illustrating that gods are made in man’s image rather than the other way around.  But this kind of thinking only leads to our diminishing our own degree of personal freedom, bestowing it to a concept instead (whether bestowed to “society” or to a “deity”), and thus it abstracts away our own individuality and responsibility.

Despite the God concept permeating much of human history, we eventually challenged it, most forcefully in the wake of the Enlightenment, and in doing so many of us discovered the freedom we had all along:

“When God dies, man takes the place of God.  Such had been the prophecy of Dostoevski and Nietzsche, and Sartre on this point is their heir.  The difference, however, is that Dostoevski and Nietzsche were frenzied prophets, whereas Sartre advances his view with all the lucidity of Cartesian reason and advances it, moreover, as a basis for humanitarian and democratic social action.  To put man in the place of God may seem, to traditionalists, an unspeakable piece of diabolism; but in Sartre’s case it is done by a thinker who, to judge from his writings, is a man of overwhelming good will and generosity.”

And this idea of human beings taking the place of God is really nothing more than the realization of our having projected qualities that were ours from the very beginning.  This may make some people uncomfortable because they think that by eliminating the God concept we are somehow seeing ourselves as gods; but we still have the same limitations that human beings have always had, with our limited knowledge and our existing in a world with many forces out of our control.  But now we have an added sense of freedom that further empowers us as individuals to do the most we can with our lives and to give our lives meaning on our own terms, rather than having them externally imposed on us (see the previous post on Heidegger, and the concept of Das Man or the “They” self).  Sartre’s appreciation and promotion of this idea is admirable to say the least, as it seeks to preferentially value human beings over ideas, to value consciousness over abstractions, and to maximize each person having a voice that they can call their own.

1. Being-for-itself and Being-in-itself

“Being-for-itself (coextensive with the realm of consciousness) is perpetually beyond itself.  Our thought goes beyond itself, toward tomorrow or yesterday, and toward the outer edges of the world.  Human existence is thus a perpetual self-transcendence: in existing we are always beyond ourselves.  Consequently we never possess our being as we possess a thing.”

This definitely resonates with Heidegger’s conception of temporality, where our understanding of our world is constrained by our past experiences and is always projecting into the future in terms of our always wanting to become a certain kind of person (even if this vision of our future self changes over time).  And here, Sartre’s conception of Being-for-itself is meant to be contrasted with his concept of Being-in-itself; where the former represents consciousness and the latter non-conscious or unconscious material objects.

As mentioned in my last post, I agree with there being a temporally transcendent quality of consciousness, and I think it can be best explained by the fact that our perception of the world operates under a schema of prediction: our brains are always trying to predict what will happen next in terms of our sensations, and these predictions, rather than the sensory information itself, are what we actually experience; a kind of controlled hallucination that revises its modeling in response to any prediction error encountered.  We try to make these perceptual predictions come true and reduce the prediction error either by changing our mental models of the world’s causal structure to better account for the incoming sensory information or through a series of actions that manipulate the world in some way or another.

“This notion of the For-itself may seem obscure, but we encounter it on the most ordinary occasions.  I have been to a party; I come away, and with a momentary pang of sadness I say, “I am not myself.”  It is necessary to take this proposition quite literally as something that only man can say of himself, because only man can say it to himself.  I have the feeling of coming to myself after having lost or mislaid my being momentarily in a social encounter that estranged me from myself.  This is the first and immediate level on which the term yields its meaning.”

And here’s where the For-itself concept shows the distinction between our individual self (what Heidegger would likely call our authentic self) and the externalized self that is more or less a product of social norms and expectations stemming from the human collective.

“But the next and deeper level of meaning occurs when the feeling of sadness leads me to think in a spirit of self-reproach that I am not myself in a still more fundamental sense: I have not realized so many of the plans or projects that make up my being; I am not myself because I do not measure up to myself. “

So aside from what society expects of us, we also have our own expectations and goals, many of which never come to fruition.  Sartre sees this as a kind of comparison where we’re evaluating our current view of ourselves against the more idealized view of ourselves that we’re constantly striving for.  And we can see this anytime we think or say something like “I shouldn’t have done that because I’m better than that…”  We’re obviously not better than ourselves, but it’s because we’re always comparing our perspective to the ideal model we hold of ourselves that we in some sense transcend any present sense of self, and this view allows us to better understand Sartre’s idea that we’re always existing beyond ourselves.

Barrett describes Sartre’s view in more detail:

“Beneath this level too there is still another and deeper meaning, rooted in the very nature of my being: I am not myself, and I can never be myself, because my being stretching out beyond itself at any given moment exceeds itself.  I am always simultaneously more and less than I am.  Herein lies the fundamental uneasiness, or anxiety, of the human condition, for Sartre.  Because we are perpetually flitting beyond ourselves, or falling behind our possibilities, we seek to ground our existence, to make it more secure.”

We might even say that we conjure up a kind of essence of who we are because we’re trying to solidify our existence such that we’re more or less like the objects we interact with in our world.  And we do this through our imagination by projecting our identity into the future, effectively defining ourselves in terms of the person we want to become rather than the person we are now.  However, our imagination is both a blessing and a curse, for it is through imagination that we express our creativity, simulating new worlds for ourselves that we can choose from; but, our imagination is also a means of comparing our current state of affairs to some ideal model, in many cases showing us what we don’t have (yet want) and showing us a number of lost opportunities which can further constrain our future space of possibilities.  Whether we love it or hate it, our capacity for imagination is both the cause and the cure for our suffering; and it is a capacity that is fundamental to our existence as human beings.

“The For-itself struggles to become the In-itself, to attain the rocklike and unshakable solidity of a thing.  But this it can never do so long as it is conscious and alive.  Man is doomed to the radical insecurity and contingency of his being; for without it he would not be man but merely a thing and would not have the human capacity for transcendence of his given situation.”

It seems then that our projection into the future, equating our self-hood with some preconceived future self, not only serves as an attempt to objectify our subjectivity, but it may also be psychologically motivated by the uncertainty that we recognize in both our short-term and long-term lives.  We feel uncomfortable and anxious about the contingency of our lives and the sheer number of possible future life trajectories, and so we conceptualize one path as if it were a fixed object and simply refer to it as “me”.

“With enormous ingenuity and virtuosity Sartre interweaves these two notions-Being-in-itself and Being-for-itself-to elucidate the complexities of human psychology.”

Sartre’s conception of Being-in-itself and Being-for-itself are largely grounded on the distinction between consciousness and that which is not conscious.  On the surface, it looks like merely another way of expressing the distinction between mind and matter, but the distinction is far more nuanced and complex.  Being-for-itself is defined by its relation to Being-in-itself, where, for Sartre, Being-for-itself is, among other things, in a constant state of want for synthesis with Being-in-itself, despite the fact that Being-for-itself involves a kind of knowledge that it is not in-itself.  Being-for-itself is in a constant state of incompleteness, where a lack of some kind permeates its mode of being; it’s constantly aware of what it is not, and whatever it may be, lacking any essential structure to build off of, is created out of nothingness.  In other words, Being-for-itself creates its own being with a blank canvas.

As I mentioned above, I don’t particularly agree with Sartre’s conception here, that we as conscious human beings create our being or our essence with a blank canvas, as there are (to stick with the artistic analogy) only a limited number of ways the canvas can be “painted” and a limited number of existential “colors” to work with; and while the future may be undetermined or unpredictable to some degree, this doesn’t mean that any possible future that comes to mind can come into being.  As human beings that have evolved from other forms of life on this planet, we have a number of innate or biologically-constrained predispositions including: our instincts and basic behavioral drives, the kinds of conditioning that actually work to modify our behavior, and a psychology that can only thrive within a finite range of behavioral and environmental conditions.

I see this “blank canvas” analogy as just another version of the blank slate or tabula rasa model of human knowledge within the social sciences.  But this model has been falsified with findings in behavioral genetics, psychology, and neurobiology, where various forms of evidence have shown that human beings are pre-wired for social interaction, and have genetically dependent capacities such as intelligence, memory, and reason.  And even taking into account the complicated relation between genes and environment, where the expression of the former is dependent on the latter, we still have environmental conditions that are finite, contingent, externally imposed, and which drastically shape our behavior whether we realize it or not.

On top of this, as I’ve argued elsewhere, we don’t have a libertarian form of free will either, which means that if we could go back in time to some set of initial conditions (where we had made a conscious choice of some kind, and perhaps a significant life altering choice at that), we wouldn’t be able to choose an alternative course of action, barring any quantum randomness.  This doesn’t mean that we can’t still make use of some compatibilist conceptions of free will, nor does it mean that we shouldn’t be held accountable for our actions, as these are all important for maintaining effective behavioral conditioning; but it does mean that we don’t have the kind of blank canvas that Sartre had envisioned.

The role that nothingness plays in Sartre’s philosophy goes beyond the incompleteness resulting from a self that is in some sense defined by our projection into the future.

“The Self, indeed, is in Sartre’s treatment, as in Buddhism, a bubble, and a bubble has nothing at its center. (but not nihilism)…For Sartre, on the other hand, the nothingness of the Self is the basis for the will to action: the bubble is empty and will collapse, and so what is left us but the energy and passion to spin that bubble out?”

I can certainly see some parallels between Buddhism and Sartre’s conception of the Self, where, for example, the fact that we lack a persistent identity, instead having a mode of being that’s dynamic and based on a projection into an uncertain future, is analogous to or at least consistent with the Buddhist conception of the Self being a kind of illusion.  I also tend to agree with the basic idea that we don’t have any ultimate control or ownership over our thoughts and feelings even if we feel like we do from time to time, and our experiences seem to be just one infinitesimal, fleeting moment after another.  Our attention and behavior are absorbed and shaped by the world around us, and it’s only when we fixate on or identify ourselves with a concept, a thought, or a feeling, or a certain combination of these elements, that we conjure up a seemingly objective and essential sense of Self.  And while Sartre’s philosophy appears to grant us a greater level of free will and human freedom than Buddhist thought does, both reject this essential self, this eternal self or identity interrelated with the concept of a permanent soul.

“Man’s existence is absurd in the midst of a cosmos that knows him not; the only meaning he can give himself is through the free project that he launches out of his own nothingness.”

This is one of the core ideas in existentialism, and also in Sartre’s own work, that I most appreciate: the idea that we (have to) give our own lives their meaning and purpose, as opposed to their being externally imposed on us from a religion, a deity (whether real or imagined), a culture, or any other source aside from ourselves, if we’re to avoid submitting the core of our being to some kind of authoritarian tyranny.

Human beings were not designed for some particular purpose, aside from natural selection having “designed” us for survival, and perhaps a thermodynamic argument could be made that all life serves the “purpose” of generating more entropy than if our planet had no life at all (which means that life actually speeds up the inevitable heat death of the universe).  But, because our intelligence and the kind of consciousness we’ve gained through evolution grants us a kind of open-ended creativity and imagination, with technology and cultural or memetic evolution effectively superseding our “selfish genes”, we no longer have to live with survival as our primary directive.  We now have the power to realize that our existence transcends our evolutionary past, and we have the ability to define ourselves based on our primary goals in life, whatever they may be, given the constraints of the world we find ourselves born into or embedded within at the present moment.  And it’s this transcendent quality that we find in a lot of Sartre’s work.

“He (Sartre) misses the very root of all of Heidegger’s thinking, which is Being itself.  There is, in Sartre, Being-for-itself and Being-in-itself but there is no Being…Sartre has advanced as the fundamental thesis of his Existentialism the proposition that existence precedes essence.  This thesis is true for Heidegger as well, in the historical, social, and biographical sense that man comes into existence and makes himself to be what he is.  But for Heidegger another proposition is even more basic than this: namely, Being precedes existence.”

In contrast with Sartre then, with Heidegger we see an ontological requirement for Being itself, prior to even having the possibility for Sartre’s distinction between Being-for-itself and Being-in-itself.  That is to say, these kinds of distinctions would, for Heidegger at least, require a more basic field of Being within which one may be able to posit distinctive types of Being.  It would seem that some ultimate region of Being would be necessary in order for the transcendent quality of consciousness to manifest in any way whatsoever:

“To be sure, Sartre has gone a considerable step beyond Descartes by making the essence of human consciousness to be transcendence: that is, to be conscious is, immediately and as such, to point beyond that isolated act of consciousness and therefore to be beyond or above it…But this step forward by Sartre is not so considerable if the transcending subject has nowhere to transcend himself: if there is not an open field or region of Being in which the fateful dualism of subject and object ceases to be.”

One might also wonder how this all fits in with the question of how a conscious subject can ever truly know an object in its entirety.  For example, Kant believed that any object in itself, which he referred to more generally as the noumenon (as opposed to the phenomenon, the object as we perceive it), was inherently unknowable.  Nietzsche thought that it didn’t matter whether or not we could know the object in itself, if such an inaccessible realm of knowledge pertaining to any object even existed in the first place.  Rather, he thought the only thing that mattered was our ability to master the object, to manipulate it and use it according to our own will (the will to power).  For Sartre, the only thing that really mattered was the will to action, which was primarily a will to some form of revolutionary action, so he doesn’t really address the problem of knowledge with respect to the subject truly knowing some object or other (nor does he specify whether or not there even is any problem, at least within his seminal work Being and Nothingness).

Heidegger was less concerned with this issue of knowledge and was more concerned with grounding the possibility of there being a subject or object in the first place:

“For what Heidegger proposes is a more basic question than that of Descartes and Kant: namely, how is it possible for the subject to be? and for the object to be?  And his answer is: Because both stand out in the truth, or un-hiddenness, of Being.  This notion of truth of Being is absent from the philosophy of Sartre; indeed, nowhere in his vast Being and Nothingness does he deal with the problem of truth in a radical and existential way: so far as he understands truth at all, he takes it in the ordinary intellectualistic sense that has been traditional with non-existential philosophers.  In the end (as well as at his very beginning) Sartre turns out thus to be a Cartesian rationalist-one, to be sure, whose material is impassioned and existential, but for all that not any less a Cartesian in his ultimate dualism between the For-itself and the In-itself.”

I don’t think it’s fair to label Sartre a Cartesian rationalist as Barrett does here, since Sartre seems to actually depart from Descartes’ form of rationalism when it comes to evaluating the overall structure of consciousness.  And there doesn’t seem to be any ultimate dualism between Being-for-itself and Being-in-itself either, if the former is instantiated or experienced as a kind of negation of the latter.  Perhaps if Sartre had used the terms Being-in-itself and not-Being-in-itself, this would be made more clear.  In any case, it’s not as if these two instantiations of being need be separate substances, especially if, as Sartre says in Being and Nothingness:

“The ontological error of Cartesian rationalism is not to have seen that if the absolute is defined by the primacy of existence over essence, it cannot be conceived as a substance.”

So at the very least, it seems mistaken to say that Sartre is a substance dualist given his position on existence preceding essence, even if he may be adequately described as some kind of property dualist.

“But, again like every humanism, it leaves unasked the question: What is the root of man?  In this search for roots for man-a search that has, as we have seen, absorbed thinkers and caused the malaise of poets for the last hundred and fifty years-Sartre does not participate.  He leaves man rootless.”

Sartre may not have explicitly described any kind of truth for man that wasn’t a truth of the intellect, unlike Heidegger who had his conception of the truth of Being, but Sartre did also describe a pre-reflective aspect of consciousness that didn’t seem to be identified as either the subject or anything about the subject.  There’s a mysterious aspect of consciousness that’s simply hard to pin down, and it may be the kind of ethereal concept that one could equate with some kind of pre-propositional realm of Being.  This may not be enough to say that Sartre, in the end, leaves us rooted (so to speak), but it seems to be a concept that implicitly pulls his philosophy in that direction.

2. Literature as a Mode of Action

Sartre places a lot of importance on literature as a means of expressing the author’s freedom in a way that depends on an interaction with the freedom of the reader.

“…the perfect example of what he believes a writer should do and what he himself tries to do in his own later fiction: that is, grapple with the problems of man in his time and milieu.”

As mentioned earlier in this post, Sartre’s personal experience during the time of the French Resistance colored his written works with an imperative to assert humanity’s radical freedom and did so in relation to the historical developments that affected the masses, the working class, and the oppressed.

“It is always to the idea, and particularly the idea as it leads to social action, that Sartre responds.  Hence he cannot do justice, either in his critical theory or in his actual practice of literary criticism, to poetry, which is precisely that form of human expression in which the poet-and the reader who would enter the poet’s world-must let Being be, to use Heidegger’s phrase, and not attempt to coerce it by the will to action or the will to intellectualization.  The absence of the poet in Sartre, as a literary man, is thus another evidence of what, on the philosophical level, leads to a deficiency in his theory of Being.”

And though Sartre may not be any kind of poet, that doesn’t mean his work doesn’t have some of the subjective, emotionally charged character that one finds in poetry.  But Barrett’s point is well taken, as Sartre’s works don’t have the kind of aesthetic quality or irrationality, nor do they make as much use of metaphor, as those of the majority of popular poets.

“In discharging his freedom man also wills to accept the responsibility of it, thus becoming heavy with his own guilt.  Conscience, Heidegger has said, is the will to be guilty-that is, to accept the guilt that we know will be ours whatever course of action we take.”

Freedom and responsibility clearly go hand in hand for Sartre (and for Heidegger as well), and this very basic idea makes sense from the perspective that if one believes themselves to be free, then they must necessarily accept that they own their actions insofar as they were done freely.  As for conscience, rather than Heidegger’s conception of the will to be guilty, I’d prefer to describe it as simply a manifestation of our perception of both responsibility and duty, although one can also aptly describe conscience as a product of evolution that tends to produce in us a more pro-social set of behaviors.

The tendency for many to try and absolve themselves of their freedom by submitting themselves to the conception of self that society or others draw up for them is addressed in Sartre’s 1944 French play No Exit.  In this play, the three main characters find themselves in Hell, punished in a way analogous to that of Dante’s Inferno, in this case taunted by a symbol representing the inauthentic or superficial way they lived their lives:

“Having practiced “bad faith” in life-which, in Sartre’s terms, is the surrendering of one’s human liberty in order to possess, or try to possess, one’s being as a thing-the three characters now have what they had sought to surrender themselves to.  Having died, they cannot change anything in their past lives, which are exactly what they are, no more and no less, just like the static being of things…But this is exactly what they long for in life-to lose their own subjective being by identifying themselves with what they were in the eyes of other people.”

By defining ourselves in any essential or fixed way, for example, by defining ourselves based on the unchangeable past (as all its features are indeed fixed), we lose the freedom we’d have if our identity had been directed toward the uncertain future and the possible goals and projects contained therein.  In this case with No Exit, the characters were defining themselves based on the desires and expectations of society or at least of several members of society that these patrons of Hell deemed high in status or social clout.

And going back to the relation between freedom and responsibility, we can see how one can try and rid themselves of responsibility by holding an attitude that they have no freedom regarding who they are or what they do, that they must do what it is they do (perhaps because society or others “say so”, or for other reasons).  This can be a slippery thing to contemplate however, when we consider different conceptions of free will.

While it’s technically true that we don’t have any kind of causa sui free will (since it’s logically impossible to have this in a deterministic or random/indeterministic world), it’s still useful to conceive of our having a kind of freedom of the will that distinguishes between animals and people with varying degrees of autonomy.  This is similar to the conception of free will that a court of law uses to distinguish between instances of coercion or not being of sound mind and that of conscious intentions made with a reasonable capacity for evaluating the legality or consequences of those intentions.

It’s also important to account for the fact that we can’t predict all of our own actions (nor the actions of others), and so there’s also a folk psychological concept of free will implied by this uncertainty.  Most of these conceptions are psychologically useful, they play a positive role in maintaining the efficacy of our punishment and reward systems, and overall they resonate with how we live our lives as human beings.

In general then, I think that Sartre’s conception of tying freedom and responsibility together jibes with our folk psychological conceptions of free will, and it serves to promote a functional and motivating way to live one’s life.

“As a writer Sartre is always the impassioned rhetorician of the idea; and the rhetorician, no matter how great and how eloquent his rhetoric, never has the full being of the artist.  If Sartre were really a poet and an artist, we would have from him a different philosophy…”

Although Barrett seems to be pigeonholing artists to some degree here (which isn’t entirely fair to Sartre), it’s true that what we find in Sartre’s written works isn’t artistic in any traditional or classical sense.  Even so, I think it’s fair to say that postmodern art has benefited from and been influenced by the works of many great thinkers, including Sartre.  And the task of the artist is often to inform us of our own human condition in its particular time and place, including many aspects of that condition that aren’t typically talked about or explicitly present in our stream of consciousness.  Sartre’s works, both written and performed, did just that; they informed us of our modern condition and of a number of interesting perspectives on our own human psychology, and they did so in a thoroughly existential way.  Sartre was an artist then, in my mind at least.

3. Existential Psychology

“Behind all Sartre’s intellectual dialectic we perceive that the In-itself is for him the archetype of nature: excessive, fruitful, blooming nature-the woman, the female.  The For-itself, by contrast, is for Sartre the masculine aspects of human psychology: it is that in virtue of which man chooses himself in his radical liberty, makes projects, and thereby gives his life what strictly human meaning it has.”

And here some of the sexist nuances, both explicit and implicit in the minds of Sartre and his contemporaries, start to become apparent.  The idea that nature has more of a female character is certainly more justifiable and sensible, since females are the only sex giving birth to new life and continuing the cycle; but the idea that male traits subsume the human qualities of choice, freedom, and worthwhile projects is one resulting from historical dominance hierarchies, which were traditionally patriarchal or androcentric.  Modern human life has shown this to be no longer applicable to humans generally.  In any case, we can still use traditional archetypes that make use of this kind of sexual categorization, as long as we’re careful to avoid taking those archetypes too seriously in terms of influencing an essentialist view of the human sexes and what our roles in society ought to be.

“The essence of man…lies not in the Oedipus complex (as Freud held) nor in the inferiority complex (as Adler maintained); it lies rather in the radical liberty of man’s existence by which he chooses himself and so makes himself what he is.”

And this of course is just a reiteration of the Sartrean “existence precedes essence” adage, but it’s interesting to see that Sartre seems to dismiss much of modern psychology, evolutionary psychology, and any role for nature or innate predispositions in human beings.  One may wonder as well how humans can be free of traditional essentialism in Sartre’s mind while at the same time he assumes some kinds of roles or archetypes that are essentially male and female.  I don’t think we can avoid essentialism in its entirety, even if the crux of Sartre’s existential adage is valid.  And I don’t think Sartre avoids it either, even if he tries to.

“Man is not to be seen as the passive plaything of unconscious forces, which determine what he is to be.  In fact, Sartre denies the existence of an unconscious mind altogether; wherever the mind manifests itself, he holds, it is conscious.”

I’m not sure if Sartre’s denial of an unconscious mind is just a play on semantics (i.e. he’s defining “mind” exclusively as the neurological activity that directly leads to consciousness) or if he truly rejected the idea that there were any forces motivating our behavior that weren’t explicit in our consciousness.  If the latter is true, then how does Sartre account for marketing strategies, cognitive biases, denial, moral intuition, and a host of other things that rely on or operate under psychological predispositions that are largely unknown to the person who possesses them?  The totality of our behavior simply can’t be accounted for without some form of neurological processing happening under the radar (so to speak), which serves some psychological purpose.

“A human personality or human life is not to be understood in terms of some hypothetical unconscious at work behind the scenes and pulling all the wires that manipulate the puppet of consciousness.  A man is his life, says Sartre; which means that he is nothing more nor less than the totality of acts that make up that life.”

This is interesting when viewed through the lens of the future, our intended projects, and the kind of person we may strive to be.  It seems that in order for the excerpt above to be correct, according to Sartre’s (and perhaps Heidegger’s) own reasoning, we’d have to include our intentions and goals as well; we can’t simply look at the facticity of our past, but must also look at how the self transcends into the future.  And although Sartre or others may not want to identify themselves with some of the forces operating in their unconscious, the fact of the matter is that if we want to predict our own behavior and that of others as effectively as possible, then we have no choice but to make use of some concept of an unconscious mind, unconscious psychology, etc.  It may not be the primary way we ought to think of ourselves, but it’s an important part of any viable model of human behavior.

Analogously, one may say that they don’t want to be identified with their brain (“I am not my brain!”), and yet if you asked them if they’d rather have a brain transplant or a body transplant (say, everything below the neck), they would undoubtedly choose the latter.  Why might this be so?  An obvious answer is because their entire personality, all that they value, believe, and know, is all made manifest in the structure of their brain.  Now we may see a brain on a table and have difficulty equating it with a person, and yet we can have an entire human body, only lacking a brain, and we have no person at all.  The only additional thing needed for personhood is that brain lying on the table, assuming it is a brain inherently capable of instantiating a personality, consciousness, etc.

Another aspect of Sartrean psychology is the conception of how the Other (some other conscious self) views us given the constraints of consciousness and our ability to see another person from that person’s own subjective point of view:

“This relation to the Other is one of the most sensational and best-known aspects of Sartre’s psychology.  To the other person, who looks at me from the outside, I seem an object, a thing; my subjectivity with its inner freedom escapes his gaze.  Hence his tendency is always to convert me into the object he sees.  The gaze of the Other penetrates to the depths of my existence, freezes and congeals it.  It is this, according to Sartre, that turns love and particularly sexual love into a perpetual tension and indeed warfare.  The lover wishes to possess the beloved, but the freedom of the beloved (which is his or her human essence) cannot be possessed; hence, the lover tends to reduce the beloved to an object for the sake of possessing it.”

I think this view has merit and has a lot to do with how our brains make simplistic or heuristic models of the entities and causal structure we perceive around us.  Although we can recognize that some of those other entities are subjective beings such as ourselves, living with the same kinds of intrinsic conscious experiences that we do, it’s often easier and more automatic to treat them as if they were objects.  We can’t directly experience another person’s subjectivity, which is what makes it so easy for it to escape our attention and consideration.  More to the point though, Sartre was claiming that this tendency to objectify another penetrates many facets of our lives, introducing a source of conflict into our relations with one another.  Ironically, this could also be described as something we do unconsciously, where we don’t explicitly say to ourselves “this person is an object,” but rather we may explicitly see them as a person and yet treat them as an object in order to serve some implicit psychological goal (e.g. to feel that you possess your significant other).

“He is right to make the liberty of choice, which is the liberty of a conscious action, total and absolute, no matter how small the area of our power: in choosing, I have to say No somewhere, and this No, which is total and totally exclusive of other alternatives, is dreadful; but only by shutting myself up in it is any resoluteness of action possible.”

Referring back to the beginning of this post, regarding the liberty of choice (in particular a kind of negative liberty) as the ultimate form of human freedom, we can also describe the choices we make in terms of the alternative choices we had to say no to first.  We can see our ability to say no as a means of creating a world for ourselves; a world with boundaries delineating what is and isn’t desired.  And the making of these choices, the effect that each choice has, and the world we create out of them are things that we and we alone own.  Understandably then, once this freedom and responsibility are recognized, they lead to our experiencing a kind of dread, anxiety, and perhaps a feeling of being overwhelmed by the sheer space of possibilities and thus overwhelmed by the number of possibilities that we have to willfully reject:

“A friend of mine, a very intelligent and sensitive man, was over a long period in the grip of a neurosis that took the form of indecision in the face of almost every occasion of life; sitting in a restaurant, he could not look at the printed menu to choose his lunch without seeing the abyss of the negative open before his eyes, on the page, and so falling into a sweat…(this story) confirms Sartre’s analysis of freedom, for only because freedom is what he says it is could this man have been frightened by it and have retreated into the anxiety of indecision.”

This story also reminded me of the different perspectives that can result depending on whether one is a maximizer or a satisficer: the maximizer being the person who tries to exhaustively analyze every decision they make, hoping to maximize the benefits brought about by their careful decision-making; and the satisficer being the person who is much more easily pleased, willing to make a decision fairly quickly, and concerned not with whether their decision was the best possible so much as with whether it was simply good enough.  On average, maximizers are also far less happy, despite the care they take in making decisions, simply because they fret over whether the decision they landed on was really best after all; and they tend to enjoy life less than satisficers because their constant worrying causes them to miss many experiences and the natural flow of those experiences.  The maximizer concept definitely fits in line with Sartre’s conception of the anxiety of choice brought about by the void and negation implicit in decision making.

I for one try to be a satisficer, even if I occasionally find myself trying to maximize my decision making, and I do this for the same reason that I try to be an optimist rather than a pessimist: people live happier and more satisfying lives if they try to maintain a positive attitude and if they aren’t overly concerned with every choice they make.  We don’t want to make poor decisions in haste or without any care, but we also don’t want to invest too many of our psychological resources in making those decisions; we may end up at the end of our lives realizing that we’ve missed out on a good life.

“…the example points up also where Sartre’s theory is decidedly lacking: it does not show us the kind of objects in relation to which our human subjectivity can define itself in a free choice that is meaningful and not neurotic.  This is so because Sartre’s doctrine of liberty was developed out of the experience of extreme situations: the victim says to his totalitarian oppressor, No, even if you kill me; and he shuts himself up in this No and will not be shaken from it.”

I agree with Barrett entirely here.  Much of Sartre’s philosophy was developed out of an experienced oppression under the threat of violence, and this quality makes it harder to apply to an everyday life that’s not embroiled with tyranny or fascism.

“…But he who shuts himself up in the No can be demoniacal, as Kierkegaard pointed out; he can say No against himself, against his own nature.  Sartre’s doctrine of freedom does not really comprehend the concrete man who is an undivided totality of body and mind, at once, and without division, both In-itself and For-itself; but rather an isolated aspect of this total condition, the aspect of man always at the margin of his existence.”

I think the biggest element missing from Sartre’s conception of freedom or from his philosophy generally is the evolutionary and biologically grounded aspects of our psychology, the facts pertaining to our innate impulses and predispositions, our cognitive biases and other unconscious processes (which he outright denied the existence of).  This makes his specific formula of existence preceding essence far less tenable and realistic.  Barrett agrees with some of this reasoning as well when he says:

“Because Sartre’s psychology recognizes only the conscious, it cannot comprehend a form of freedom that operates in that zone of the human personality where conscious and unconscious flow into each other.  Being limited to the conscious, it inevitably becomes an ego psychology; hence freedom is understood only as the resolute project of the conscious ego.”

Indeed, the human being as a whole includes both the conscious and unconscious aspects of ourselves.  The subjective experience that defines us is a product of both sides of our psyche, not to mention the interplay of the psyche as a whole with the world around us that we’re intimately connected to (think of Heidegger’s field of Being).

The final point I’d like to bring up regarding Sartre concerns his atheism:

“It has been remarked that Kierkegaard’s statement of the religious position is so severe that it has turned many people who thought themselves religious to atheism.  Analogously, Sartre’s view of atheism is so stark and bleak that it seems to turn many people toward religion.  This is exactly as it should be.  The choice must be hard either way; for man, a problematic being to his depths, cannot lay hold of his ultimate commitments with a smug and easy security.”

It’s understandable that most people gravitate toward religion because of the anxiety and fear that can accompany a worldview in which one has to take on the burden of making meaning for one’s own life rather than having someone else or some concept “decide” this for them.  There’s also the fear of being alone, the fear of death, and the fear of having a generally precarious existence, given that we live in an inhospitable universe where life appears to occupy a region of space comparable to the size of a proton within a large room.

All of these reasons lend themselves to reinforcing a desire for a deity and for a narrative that defines who we ought to be, so we don’t have to decide this for ourselves.  Personally, I don’t think one can live authentically with these kinds of world views, nor is it morally responsible to live life with belief systems that inhibit the ability to distinguish between fact and fiction.  But I still understand that it’s far more difficult to reject the comforts of religion and face the radical contingencies of life and the burden of choice.  I think people can live the most fulfilling lives in the most secure way only when they have a reliable epistemology in place and only when they take the limits of human psychology seriously.  We need to understand that some ways of living, some attitudes, some ways of thinking, etc., are better than others for living sustainable, fulfilling lives.

Sartre’s atheistic philosophy is certainly bleak, but it does not represent the philosophy of atheists generally.  My atheistic philosophy of life, for example, takes into account the usefulness of multiple descriptions (including holistic descriptions) of our existence; the recognition of our desire to connect to the transcendent; the need for morality, meaning, and purpose; and the need for emotional channels of expression.  And regardless of its bleakness, Sartre’s philosophy has some useful ideas, and any philosophy that’s likely to be successful will gather what works from a number of different schools of thought anyway.

Well, this concludes the post on Jean-Paul Sartre (Ch. 10), and ends part 3 of this series on William Barrett’s Irrational Man.  The last and final post in this series is on part 4, Integral vs. Rational Man, which consists of chapter 11, the final chapter in Barrett’s book, The Place of the Furies.  I will post the link here when that post is complete.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part VI)

This is the last post I’m going to write for this particular post-series on Predictive Processing (PP).  Here are the links to parts 1, 2, 3, 4, and 5.  I’ve already explored a bit on how a PP framework can account for folk psychological concepts like beliefs, desires, and emotions, how it accounts for action, language and ontology, knowledge, and also perception, imagination, and reasoning.  In this final post for this series, I’m going to explore consciousness itself and how some theories of consciousness fit very nicely within a PP framework.

Consciousness as Prediction (Predicting The Self)

Earlier in this post-series I explored how PP treats perception (whether online, or an offline form like imagination) as simply predictions pertaining to incoming sensory information, with varying degrees of precision weighting assigned to the resulting prediction error.  In a sense then, consciousness just is prediction.  At the very least, it is a subset of the predictions, likely those that are higher up in the predictive hierarchy.  This is all going to depend on which aspect or level of consciousness we are trying to explain, and as philosophers and cognitive scientists well know, consciousness is difficult to pin down and define in any uncontroversial way.

By consciousness, we could mean any kind of awareness at all, or we could limit this term to only apply to a system that is aware of itself.  Either way we have to be careful here if we’re looking to distinguish between consciousness generally speaking (consciousness in any form, which may be unrecognizable to us) and the unique kind of consciousness that we as human beings experience.  If an ant is conscious, it doesn’t likely have any of the richness that we have in our experience nor is it likely to have self-awareness like we do (even though a dolphin, which has a large neocortex and prefrontal cortex, is far more likely to).  So we have to keep these different levels of consciousness in mind in order to properly assess their being explained by any cognitive framework.

Looking through a PP lens, we can see that what we come to know about the world and our own bodily states is a matter of predictive models pertaining to various inferred causal relations.  These inferred causal relations ultimately stem from bottom-up sensory input.  But when this information is abstracted at higher and higher levels, eventually one can (in principle) get to a point where those higher level models begin to predict the existence of a unified prediction engine.  In other words, a subset of the highest-level predictive models may eventually predict itself as a self.  We might describe this process as the emergence of some level of self-awareness, even if higher levels of self-awareness aren’t possible unless particular kinds of higher level models have been generated.

What kinds of predictions might be involved with this kind of emergence?  Well, we might expect that predictions pertaining to our own autobiographical history, which is largely composed of episodic memories of our past experience, would contribute to this process (e.g. “I remember when I went to that amusement park with my friend Mary, and we both vomited!”).  If we begin to infer what is common or continuous between those memories of past experiences (even if only 1 second in the past), we may discover that there is a form of psychological continuity or identity present.  And if this psychological continuity (this type of causal relation) is coincident with an incredibly stable set of predictions pertaining to (especially internal) bodily states, then an embodied subject or self can plausibly emerge from it.

This emergence of an embodied self is also likely fueled by predictions pertaining to other objects that we infer to be subjects.  For instance, in order to develop a theory of mind about other people, that is, in order to predict how other people will behave, we can’t simply model their external behavior as we can for something like a rock falling down a hill.  This can work up to a point, but eventually it’s just not good enough as behaviors become more complex.  Animal behavior, most especially that of humans, is far more complex than that of inanimate objects, and as such it is going to be far more effective to infer some internal hidden causes for that behavior.  Our predictions will work better if they posit some kind of internal intentionality and goal-directedness operating within any animate object, thereby transforming that object into a subject.  This should be no less true for how we model our own behavior.

If this object that we’ve now inferred to be a subject seems to behave in ways that we see ourselves as behaving (especially if it’s another human being), then we can begin to infer some kind of equivalence despite being separate subjects.  We can begin to infer that they too have beliefs, desires, and emotions, and thus that they have an internal perspective that we can’t directly access just as they aren’t able to access ours.  And we can also see ourselves from a different perspective based on how we see those other subjects from an external perspective.  Since I can’t easily see myself from an external perspective, when I look at others and infer that we are similar kinds of beings, then I can begin to see myself as having both an internal and external side.  I can begin to infer that others see me similar to the way that I see them, thus further adding to my concept of self and the boundaries that define that self.

Multiple meta-cognitive predictions can be inferred based on all of these interactions with others, and from the introspective interactions with our brain’s own models.  Once this happens, a cognitive agent like ourselves may begin to think about thinking and think about being a thinking being, and so on and so forth.  All of these cognitive moves would seem to provide varying degrees of self-hood or self-awareness.  And these can all be thought of as the brain’s best guesses that account for its own behavior.  Either way, it seems that the level of consciousness that is intrinsic to an agent’s experience is going to be dependent on what kinds of higher level models and meta-models are operating within the agent.

Consciousness as Integrated Information

One prominent theory of consciousness is the Integrated Information Theory of Consciousness, otherwise known as IIT.  This theory, initially formulated by Giulio Tononi back in 2004 and developed ever since, posits that consciousness ultimately depends on the degree of information integration inherent in the causal properties of some system.  Another way of saying this is that the specified causal system is unified such that every part of the system must be able to affect and be affected by the rest of the system.  If you were to physically isolate one part of a system from the rest of it (and if this part were the least significant to the rest of the system), then the resulting change in the cause-effect structure of the system would quantify the degree of integration.  A large change in the cause-effect structure based on this part’s isolation from the system (that is, by having introduced what is called a minimum partition to the system) would imply a high degree of information integration, and vice versa.  And again, a high degree of integration implies a high degree of consciousness.
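To make the partitioning idea concrete, here’s a deliberately crude toy in Python: a two-node binary system whose integration we gauge by counting how many state transitions change when the connections between the nodes are severed.  All the names here are my own, and this is only an illustrative proxy, not the actual phi measure of IIT (which involves information-theoretic distances over cause-effect repertoires):

```python
from itertools import product

def transitions(update_fn):
    """Full state-transition table for a two-node binary system."""
    return {s: update_fn(s) for s in product((0, 1), repeat=2)}

def partitioned_transitions(update_fn):
    """Transition table after severing the cross-connections: each node
    is updated as if the other node's state were fixed at 0 (a crude
    stand-in for IIT's partition of the system)."""
    def cut(state):
        a, _ = update_fn((state[0], 0))
        _, b = update_fn((0, state[1]))
        return (a, b)
    return transitions(cut)

def integration_score(update_fn):
    """Count how many transitions change under the cut -- a rough proxy
    (NOT the real phi) for how integrated the system's dynamics are."""
    whole = transitions(update_fn)
    cut = partitioned_transitions(update_fn)
    return sum(whole[s] != cut[s] for s in whole)

# Integrated dynamics: each node copies the *other* node's state.
swap = lambda s: (s[1], s[0])
# Isolated dynamics: each node simply keeps its own state.
keep = lambda s: (s[0], s[1])
```

Severing the links changes most of the transitions of the integrated (swap) system but none of the isolated (keep) one, mirroring IIT’s claim that a system of informationally isolated modules has no integrated information.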

Notice how this information integration axiom in IIT posits that a cognitive system that is entirely feed-forward will not be conscious.  So if our brain processed incoming sensory information from the bottom up only, with no top-down generative model feeding back down through the system, then IIT would predict that our brain couldn’t produce consciousness.  PP, on the other hand, posits a feedback system (as opposed to a feed-forward one) where the bottom-up sensory information flowing upward is met with a downward flow of top-down predictions trying to explain away that sensory information.  The brain’s predictions cause a change in the resulting prediction error, and this prediction error serves as feedback to modify the brain’s predictions.  Thus, a cognitive architecture like that suggested by PP is predicted to produce consciousness according to the most fundamental axiom of IIT.
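The feedback loop just described can be sketched in a few lines of code.  This is only a toy, single-level illustration (real PP models involve deep hierarchies and nonlinear generative models), and the function name and simple precision-weighted update rule are my own simplifications:

```python
def predictive_coding_step(prediction, sensory_input, precision, learning_rate=0.1):
    """One step of a toy predictive coding loop: the top-down prediction
    is compared against bottom-up input, and the resulting prediction
    error, weighted by its precision (inverse variance), feeds back
    to revise the prediction."""
    prediction_error = sensory_input - prediction
    prediction += learning_rate * precision * prediction_error
    return prediction, prediction_error

# Over repeated steps the prediction converges on a stable sensory
# signal, "explaining away" the prediction error.
prediction = 0.0
for _ in range(100):
    prediction, error = predictive_coding_step(prediction, sensory_input=1.0, precision=1.0)
```

Note how a low precision weighting would slow this convergence: the brain effectively discounts prediction errors it deems noisy or unreliable.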

Additionally, PP posits cross-modal sensory features and functionality whereby the brain integrates (especially lower level) predictions spanning various spatio-temporal scales from different sensory modalities into a unified whole.  For example, if I am looking at and petting a black cat lying on my lap while hearing it purr, PP posits that my perceptual experience and contextual understanding of that experience are based on having integrated the visual, tactile, and auditory expectations that I’ve associated together to constitute such an experience of a “black cat”.  The experience is contingent on a conjunction of predictions occurring simultaneously so as to produce a unified whole, rather than a barrage of millions or billions of separate causal relations (let alone ones stemming from different sensory modalities), or millions or billions of separate conscious experiences (which would seem to necessitate separate consciousnesses if they occurred at the same time).

Evolution of Consciousness

Since IIT identifies consciousness with integrated information, it can plausibly account for why it evolved in the first place.  The basic idea here is that a brain that is capable of integrating information is more likely to exploit and understand an environment that has a complex causal structure on multiple time scales than a brain that has informationally isolated modules.  This idea has been tested and confirmed to some degree by artificial life simulations (animats) where adaptation and integration are both simulated.  The organism in these simulations was a Braitenberg-like vehicle that had to move through a maze.  After 60,000 generations of simulated brains evolving through natural selection, it was found that there was a monotonic relationship between their ability to get through the maze and the amount of simulated information integration in their brains.

This increase in adaptation was the result of an effective increase in the number of concepts that the organism could make use of given the limited number of elements and connections possible in its cognitive architecture.  In other words, given a limited number of connections in a causal system (such as a group of neurons), you can pack more functions per element if the level of integration with respect to those connections is high, thus giving an evolutionary advantage to those with higher integration.  Therefore, when all else is equal in terms of neural economy and resources, higher integration gives an organism the ability to take advantage of more regularities in their environment.

From a PP perspective, this makes perfect sense because the complex causal structure of the environment is described as being modeled at many different levels of abstraction and at many different spatio-temporal scales.  All of these modeled causal relations are also described as having a hierarchical structure, with models contained within models and many associations existing between various models.  These associations between models can be accounted for by a cognitive architecture that re-uses certain sets of neurons in multiple models, so the association is effectively instantiated by some literal degree of neuronal overlap.  And of course, these associations between multiply-leveled predictions allow the brain to exploit (and create!) as many regularities in the environment as possible.  In short, both PP and IIT make a lot of practical sense from an evolutionary perspective.

That’s All Folks!

And this concludes my post-series on the Predictive Processing (PP) framework and how I see it as being applicable to a far broader account of mentality and brain function than is generally assumed.  If there are any takeaways from this post-series, I hope you can at least appreciate the parsimony and explanatory scope of predictive processing, and of viewing the brain as a creative and highly capable prediction engine.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part IV)

In the previous post, which was part 3 in this series (click here for parts 1 and 2) on Predictive Processing (PP), I discussed how the PP framework can adequately account for traditional and scientific notions of knowledge, by treating knowledge as a subset of all the predicted causal relations currently at our brain’s disposal.  This subset of predictions that we tend to call knowledge has the special quality of especially high confidence levels (high Bayesian priors).  Within a scientific context, knowledge tends to have an even stricter definition (and even higher confidence levels), and so we end up with a smaller subset of predictions which have been further verified by comparing them with the inferred predictions of others and by testing them with external means of instrumentation, under some agreed-upon conventions for analysis.

However, no amount of testing or verification is going to give us direct access to any knowledge per se.  Rather, the creation or discovery of knowledge has to involve the application of some kind of reasoning to explain the causal inputs, and only after this reasoning process can the resulting predicted causal relations be validated to varying degrees by testing them (through raw sensory data, external instruments, etc.).  So getting an adequate account of reasoning within any theory or framework of overall brain function is going to be absolutely crucial, and I think that the PP framework is well-suited for the job.  As has already been mentioned throughout this post-series, this framework fundamentally relies on a form of Bayesian inference (or some approximation of it), which is a type of reasoning.  It is this inferential strategy, combined with a hierarchical neurological structure for it to work upon, that would allow our knowledge to be created in the first place.
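As a minimal illustration of the Bayesian updating this inferential strategy relies on (a bare application of Bayes’ rule; actual PP models approximate this over continuous hierarchies of variables), consider how a belief’s confidence level rises as consistent evidence accumulates:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where P(E) is
    marginalized over H and not-H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A tentatively held belief (prior 0.5) revised by evidence that's far
# more likely if the belief is true (0.9) than if it's false (0.2):
posterior = bayes_update(0.5, 0.9, 0.2)   # ~0.82
# A second, independent piece of consistent evidence raises it further:
posterior = bayes_update(posterior, 0.9, 0.2)   # ~0.95
```

Each round of consistent evidence pushes the posterior higher, which is how a run-of-the-mill prediction can gradually acquire the “especially high confidence level” that we then call knowledge.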

Rules of Inference & Reasoning Based on Hierarchical-Bayesian Prediction Structure, Neuronal Selection, Associations, and Abstraction

While PP tends to focus on perception and action in particular, I’ve mentioned that I see the same general framework as being able to account not only for the folk psychological concepts of beliefs, desires, and emotions, but also for language and ontology, with the hierarchical predictive structure it entails helping to explain the relationship between the two.  It seems reasonable to me that the associations between all of these hierarchically structured beliefs or predicted causal relations, at varying levels of abstraction, can provide a foundation for our reasoning as well, whether intuitive or logical in form.

To illustrate some of the importance of associations between beliefs, consider an example like the belief in object permanence (i.e. that objects persist or continue to exist even when I can no longer see them).  This belief of ours has an extremely high prior because our entire life experience has only served to support this prediction in a large number of ways.  This means that it’s become embedded or implicit in a number of other beliefs.  If I didn’t predict that object permanence was a feature of my reality, then an enormous number of everyday tasks would become difficult if not impossible to do because objects would be treated as if they are blinking into and out of existence.

We have a large number of beliefs that require object permanence (and which are thus associated with object permanence), and so it is a more fundamental, lower-level prediction (though not as low-level as sensory information entering the visual cortex), and we use this lower-level prediction to build any number of higher-level predictions in the overall conceptual/predictive hierarchy.  When I put money in a bank, I expect to be able to spend it even if I can’t see it anymore (such as with a check or debit card).  This is only possible if my money continues to exist even when out of view (regardless of whether the money is in paper/coin or electronic form).  This is just one of countless everyday tasks that depend on this belief.  So it’s no surprise that this belief (this set of predictions) would have an incredibly high Bayesian prior, and therefore I would treat it as a non-negotiable fact about reality.

On the other hand, when I was a newborn infant, I didn’t have this belief of object permanence (or at best, it was a very weak belief).  Most psychologists estimate that our belief in object permanence isn’t acquired until after several months of brain development and experience.  This would translate to our having a relatively low Bayesian prior for this belief early on in our lives, and only once a person begins to form predictions based on these kinds of recognized causal relations can that prior increase, perhaps eventually reaching a point that results in a subjective experience of a high degree of certainty for this particular belief.  From that point on, we are likely to simply take that belief for granted, no longer questioning it.  The most important thing to note here is that the more associations are made between beliefs, the higher their effective weighting (their priors), and thus the higher our confidence in those beliefs becomes.

Neural Implementation, Spontaneous or Random Neural Activity & Generative Model Selection

This all seems pretty reasonable if a neuronal implementation worked to strengthen Bayesian priors as a function of the neuronal/synaptic connectivity (among other factors), where neurons that fire together are more likely to wire together, and where connectivity strength increases the more often this happens.  On the flip side, the less often this co-activation happens (or if it isn’t happening at all), the more likely the connectivity is to be weakened or non-existent.  So if a concept (or a belief composed of many conceptual relations) is represented by some cluster of interconnected neurons and their activity, then its applicability to other concepts increases its chances of not only firing but also strengthening its wiring with those other clusters of neurons, thus plausibly increasing the Bayesian priors for the overlapping concept or belief.
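The “fire together, wire together” idea can be sketched as a simple Hebbian-style weight update.  This is my own illustration (the learning rate and decay values are arbitrary assumptions, and real synaptic plasticity is far more complicated): a connection between two concept clusters strengthens when they co-activate and slowly weakens with disuse.

```python
# Toy Hebbian-style sketch: co-activation strengthens a connection
# (bounded growth toward 1), and disuse slowly decays it.

def hebbian_step(weight, pre_active, post_active, lr=0.1, decay=0.01):
    """One plasticity step for the connection between two units."""
    if pre_active and post_active:
        weight += lr * (1.0 - weight)   # fire together -> wire together
    else:
        weight -= decay * weight        # use it or lose it
    return weight

w = 0.2  # initial connection strength between two concept clusters
# a short activity history in which the two clusters usually co-activate
activity = [(1, 1), (1, 1), (0, 1), (1, 1), (1, 0), (1, 1)]
for pre, post in activity:
    w = hebbian_step(w, pre, post)

# the connection has strengthened overall, since co-activation dominated
print(round(w, 3))
```

The asymmetry between the learning rate and the decay rate is what lets frequently co-activated clusters accumulate strength, which is the rough analogue of a rising prior for the overlapping belief.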

Another likely important factor in the Bayesian inferential process, in terms of the brain forming new generative models or predictive hypotheses to test, is the role of spontaneous or random neural activity and neural cluster generation.  This random neural activity could plausibly provide a means for some randomly generated predictions or random changes in the pool of predictive models that our brain is able to select from.  Similar to the role of random mutation in gene pools which allows for differential reproductive rates and relative fitness of offspring, some amount of randomness in neural activity and the generative models that result would allow for improved models to be naturally selected based on those which best minimize prediction error.  The ability to minimize prediction error could be seen as a direct measure of the fitness of the generative model, within this evolutionary landscape.

This idea is related to the late Gerald Edelman’s Theory of Neuronal Group Selection (NGS), also known as Neural Darwinism, which I briefly explored in a post I wrote long ago.  I’ve long believed that this kind of natural selection process is applicable to a number of different domains (aside from genetics), and I think any viable version of PP is going to depend on it to at least some degree.  This random neural activity (and the naturally selected products derived from it) could be thought of as supplying a steady stream of new generative models to choose from, thus contributing to our overall human creativity as well, whether for reasoning and problem-solving strategies or simply for artistic expression.
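The selection dynamic described above can be caricatured in a few lines of code.  This is a deliberately crude sketch of my own (nothing here is Edelman’s actual model or anything from this post): random variation proposes candidate generative models, and “fitness” is simply how well a model minimizes prediction error on incoming data.

```python
# Toy "neural Darwinism" sketch: random mutation plus selection on
# prediction error.  The "world" is a simple linear causal relation.

import random

random.seed(0)

data = [2.0 * x + 1.0 for x in range(10)]   # the hidden causal structure

def prediction_error(model):
    """Fitness measure: squared prediction error of a (slope, intercept) model."""
    slope, intercept = model
    return sum((slope * x + intercept - y) ** 2 for x, y in enumerate(data))

# an initial population of randomly generated models
population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]

for _ in range(200):  # generations
    population.sort(key=prediction_error)       # lower error = higher fitness
    survivors = population[:10]
    # random "mutations" of the survivors propose new candidate models
    offspring = [(s + random.gauss(0, 0.1), i + random.gauss(0, 0.1))
                 for s, i in survivors]
    population = survivors + offspring

best = min(population, key=prediction_error)
# the surviving model should sit near the true causal structure (slope 2, intercept 1)
```

The ratchet here is pure selection: survivors are always retained, so prediction error can only fall over generations, which is the sense in which error minimization acts as the fitness measure in this evolutionary landscape.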

Increasing Abstraction, Language, & New Rules of Inference

This kind of use-it-or-lose-it property of brain plasticity, combined with dynamic associations between concepts or beliefs and their underlying predictive structure, would allow the brain to accommodate learning by extracting statistical inferences (at increasing levels of abstraction) as they occur, and by modifying or eliminating those inferences (changing their hierarchical associative structure) as prediction error is encountered.  While some form of Bayesian inference (or an approximation to it) underlies this process, once lower-level inferences about certain causal relations have been made, I believe that new rules of inference can be derived from this basic Bayesian foundation.

To see how this might work, consider how we acquire a skill like learning how to speak and write in some particular language.  The rules of grammar, the syntactic structure and so forth which underlie any particular language are learned through use.  We begin to associate words with certain conceptual structures (see part 2 of this post-series for more details on language and ontology) and then we build up the length and complexity of our linguistic expressions by adding concepts built on higher levels of abstraction.  To maximize the productivity and specificity of our expressions, we also learn more complex rules pertaining to the order in which we speak or write various combinations of words (which varies from language to language).

These grammatical rules can be thought of as just another higher-level abstraction, another higher-level causal relation that we predict will convey more specific information to whomever we are speaking to.  If it doesn’t seem to do so, then we either modify what we have mistakenly inferred those grammatical rules to be, or, depending on the context, we may simply assume that the person we’re talking to hasn’t conformed to the language or grammar that our community seems to be using.

Just like with grammar (which provides a kind of logical structure to our language), we can begin to learn new rules of inference built on the same probabilistic predictive bedrock of Bayesian inference.  We can learn some of these rules explicitly by studying logic, induction, deduction, etc., and consciously applying those rules to infer some new piece of knowledge, or we can learn these kinds of rules implicitly based on successful predictions (pertaining to behaviors of varying complexity) that happen to result from stumbling upon this method of processing causal relations within various contexts.  As mentioned earlier, this would be accomplished in part by the natural selection of randomly-generated neural network changes that best reduce the incoming prediction error.

However, language and grammar are interesting examples of an acquired set of rules because they also happen to be the primary tool that we use to learn other rules (along with anything we learn through verbal or written instruction), including (as far as I can tell) various rules of inference.  The logical structure of language (though it need not have an exclusively logical structure), its ability to be used for a number of cognitive short-cuts, and its influence on our thought complexity and structure mean that we are likely dependent on it during our reasoning processes as well.

When we perform any kind of conscious reasoning process, we are effectively running various mental simulations where we can intentionally manipulate our various generative models to test new predictions (new models) at varying levels of abstraction, and thus we also manipulate the linguistic structure associated with those generative models as well.  Since I have a lot more to say on reasoning as it relates to PP, including more on intuitive reasoning in particular, I’m going to expand on this further in my next post in this series, part 5.  I’ll also be exploring imagination and memory including how they relate to the processes of reasoning.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)

I’ve been away from writing for a while because I’ve had some health problems relating to my neck.  A few weeks ago I had double cervical disc replacement surgery, and so I’ve been unable to write, respond to comments, and so forth for a little while.  I’m now in the second week following my surgery and have finally been able to get back to writing, which feels very good given that I’m unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing, beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I wrote one post on this topic a little over a year ago (which can be found here), as I’ve been extremely interested in this topic for the last several years now.

The Predictive Processing (PP) theory of perception shows a lot of promise in terms of finding an overarching schema that can account for everything that the brain seems to do.  While its technical application is to account for the acts of perception and active inference in particular, I think it can be used more broadly to account for other descriptions of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships as viewed through a PP lens in more depth, because I think it is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let’s begin this post-series by first looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can be best described as simply the set of particular predictions that the brain employs which encompass perception, desires, action, emotion, etc., and which are ultimately mediated and updated in order to reduce prediction errors based on incoming sensory evidence (and which approximates a Bayesian form of inference).  Perception then, which is constituted by a subset of all our beliefs (with many of them being implicit or unconscious beliefs), is more or less a form of controlled hallucination in the sense that what we consciously perceive is not the actual sensory evidence itself (not even after processing it), but rather our brain’s “best guess” of what the causes for the incoming sensory evidence are.

Desires can be best described as another subset of one’s beliefs, and a set of beliefs which has the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states and which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically just modeling some kind of causal relationship (or its negation) which is able to manifest itself into a number of highly-weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies but with a lower weight or precision on the predictions that directly corresponds to the degree of belief or disbelief (and so new sensory evidence will more easily sway such a belief).  And just like our perceptions, which are mediated by a number of low and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in either updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, excreting hormones in your body, etc.).
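The update-to-reduce-prediction-error loop described above can be sketched very roughly in code.  This is my own toy illustration, not anything from the PP literature or this post: the brain’s estimate of a hidden cause is nudged toward whatever best explains the incoming noisy sensory samples, with the step size standing in (very loosely) for precision weighting.

```python
# Bare-bones perceptual-inference sketch: nudge an internal estimate of a
# hidden cause to "explain away" the prediction error from noisy evidence.

import random

random.seed(3)

true_cause = 5.0   # the hidden cause of the sensory data
estimate = 0.0     # the brain's current "best guess"
precision = 0.1    # how strongly each error updates the estimate (assumed value)

for _ in range(100):
    sensory_sample = true_cause + random.gauss(0, 0.5)  # noisy incoming evidence
    prediction_error = sensory_sample - estimate
    estimate += precision * prediction_error            # reduce the error

# the estimate has converged near the hidden cause of the sensory data
```

The same loop also hints at the other route for reducing error mentioned above: instead of changing the estimate, an agent could act on the world so that the samples better match its prediction.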

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, with these priors changing over time in response to prediction errors arising from new incoming sensory evidence that fails to be “explained away” by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly-weighted predictions) and therefore need much more disconfirming sensory evidence to be overcome or modified than weak beliefs, which have relatively low priors (low-weighted predictions).
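A small illustration of this point (my own, with arbitrary assumed likelihoods): given the same stream of disconfirming evidence, a weakly held belief is overturned almost immediately, while a strongly held one takes several observations to budge below the point of doubt.

```python
# Toy sketch: counting how many disconfirming observations it takes to
# push a belief below 50% confidence, for a weak vs. a strong prior.

def bayes_update(prior, p_evidence_if_true=0.2, p_evidence_if_false=0.8):
    """Update P(belief) after one piece of disconfirming evidence."""
    num = p_evidence_if_true * prior
    return num / (num + p_evidence_if_false * (1.0 - prior))

def updates_until_doubt(prior, threshold=0.5):
    """Count disconfirming observations needed to drop belief below threshold."""
    count = 0
    while prior >= threshold:
        prior = bayes_update(prior)
        count += 1
    return count

weak_belief = updates_until_doubt(0.6)      # a weakly held belief
strong_belief = updates_until_doubt(0.999)  # a "non-negotiable fact"
print(weak_belief, strong_belief)           # the strong belief resists far longer
```

With these assumed likelihoods, each disconfirming observation divides the odds by four: the weak belief falls below doubt after a single observation, while the strong one survives several in a row.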

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of lower level, highly-weighted predictions such as the prediction that eating a piece of what we call an “apple” will most likely result in qualia that accompany the perception of a particular satisfying taste, the lower level prediction that doing so will also cause my perception of hunger to change, and the higher level prediction that it will “give me energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Another prediction or set of predictions is that these expectations will apply to not just one apple but a number of apples (different instances of one type of apple, or different types of apples altogether), and a host of other predictions.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple which, under a PP lens could be described as the highly weighted prediction of what it would feel like to find an apple, to reach for an apple, to grab it, to bite off a piece of it, to chew it, and to swallow it.  If I merely imagine doing such things, then the resulting predictions will necessarily carry such a small weight that they won’t be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally at least) to trick me into thinking that they are actual perceptions (by “actual”, I simply mean perceptions that correspond to incoming sensory data).  This low-weighting attribute of imagined perceptual predictions thus provides a viable way for us to have an imagination and to distinguish it from perceptions corresponding to incoming sensory data, and to distinguish it from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine that is directly influencing and being influenced by the world that its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian evidence).  In this particular example, the highly-weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, which produces some degree of prediction error until the proper actions have taken place in order to reduce said error.  The only two ways of reducing this prediction error are either to change (or eliminate) the desire so that it no longer involves eating an apple, or to perform bodily actions that result in actually eating an apple.

Perhaps if I realize that I don’t have any apples in my house, but I realize that I do have bananas, then my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically, thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, it may mean that my predictions will motivate me enough to avoid eating the banana, and instead to predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  Furthermore, it may motivate me to predict actions that lead me to earn the money such that I can purchase the apple (if I don’t already have the money to do so).  To do this, I would be employing a number of predictions having to do with performing actions that lead to me obtaining money, using money to purchase goods, etc.

This is but a taste of what PP has to offer, and of how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework) are a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and to language as well, and I’d like to explore this relationship in my next post.  This link can be found here.

Conscious Realism & The Interface Theory of Perception

A few months ago I was reading an interesting article in The Atlantic about Donald Hoffman’s Interface Theory of Perception.  As a person highly interested in consciousness studies, cognitive science, and the mind-body problem, I found the basic concepts of his theory quite fascinating.  What was most interesting to me was the counter-intuitive connection between evolution and perception that Hoffman has proposed.  Now it is certainly reasonable and intuitive to assume that evolutionary natural selection would favor perceptions that are closer to “the truth” or closer to the objective reality that exists independent of our minds, simply because perceptions that are more accurate seem more likely to lead to survival than perceptions that are not.  As an example, if I were to perceive lions as inert objects like trees, I would be far more likely to be eaten by a lion (and thus selected against) than someone who perceives lions as mobile predators that could kill them.

While this is intuitive and reasonable to some degree, what Hoffman actually shows, using evolutionary game theory, is that with respect to organisms of comparable complexity, those with perceptions that are closer to reality are never going to be selected for nearly as much as those with perceptions that are tuned to fitness instead.  What’s more, truth in this case will be driven to extinction when it is up against perceptual models that are tuned to fitness.  That is to say, evolution will select for organisms that perceive the world in a way that is less accurate (in terms of the underlying reality) as long as the perception is tuned for survival benefits.  The bottom line is that given some specific level of complexity, it is more costly to process more information (costing more time and resources), and so if a “heuristic” method for perception can evolve instead, one that “hides” all the complex information underlying reality and instead provides us with a species-specific guide to adaptive behavior, that will always be the preferred choice.

To see this point more clearly, let’s consider an example.  Let’s imagine there’s an animal that regularly eats some kind of insect, such as a beetle, but it needs to eat a particular sized beetle or else it has a relatively high probability of eating the wrong kind of beetle (and we can assume that the “wrong” kind of beetle would be deadly to eat).  Now let’s imagine two possible types of evolved perception: it could have really accurate perceptions about the various sizes of beetles that it encounters so it can distinguish many different sizes from one another (and then choose the proper size range to eat), or it could evolve less accurate perceptions such that all beetles that are either too small or too large appear as indistinguishable from one another (maybe all the wrong-sized beetles whether too large or too small look like indistinguishable red-colored blobs) and perhaps all the beetles that are in the ideal size range for eating appear as green-colored blobs (that are again, indistinguishable from one another).  So the only discrimination in this latter case of perception is between red and green colored blobs.

Both types of perception would solve the problem of which beetles to eat or not eat, but the latter type (even if much less accurate) would bestow a fitness advantage over the former, by allowing the animal to process much less information about the environment and to ignore relatively useless information (like specific beetle size).  In this case, with beetle size as the only variable under consideration for survival, evolution would select for the organism that knows less total information about beetle size, as long as it knows what is most important for distinguishing the edible beetles from the poisonous ones.  Now we can imagine that in some cases the fitness function could align with the true structure of reality, but this is not what we expect to see generically in the world.  At best we may see some kind of overlap between the two, but since there doesn’t have to be any, truth will tend to go extinct.
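The beetle example can be made concrete with a toy simulation of my own (the size ranges and numbers are invented for illustration, and this is not Hoffman’s actual game-theoretic model): a “truth” perceiver tracks exact beetle sizes, while an “interface” perceiver sees only a green blob (edible size range) or a red blob.  Both make identical eat/avoid decisions, but the interface carries vastly less information.

```python
# Toy sketch of fitness-tuned vs. truth-tuned perception: identical
# survival-relevant decisions from radically less information.

import random

random.seed(1)

EDIBLE_RANGE = (4.0, 6.0)  # the only distinction that matters for survival

def truth_percept(size):
    return size                          # full detail about the world

def interface_percept(size):
    lo, hi = EDIBLE_RANGE
    return "green" if lo <= size <= hi else "red"

beetles = [random.uniform(0.0, 10.0) for _ in range(1000)]

truth_decisions = [EDIBLE_RANGE[0] <= truth_percept(s) <= EDIBLE_RANGE[1]
                   for s in beetles]
interface_decisions = [interface_percept(s) == "green" for s in beetles]

# identical eat/avoid behavior...
assert truth_decisions == interface_decisions
# ...but the interface distinguishes only two percepts instead of ~1000
print(len({interface_percept(s) for s in beetles}))
```

Since processing information costs time and resources, the two-percept strategy gets the same survival outcome for a fraction of the cost, which is the sense in which fitness-tuned perception beats truth-tuned perception here.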

Perception is Analogous to a Desktop Computer Interface

Hoffman analogizes this concept of a “perception interface” with the desktop interface of a personal computer.  When we see icons of folders on the desktop and drag one of those icons to the trash bin, we shouldn’t take that interface literally, because there isn’t literally a folder being moved to a literal trash bin but rather it is simply an interface that hides most if not all of what is really going on in the background — all those various diodes, resistors and transistors that are manipulated in order to modify stored information that is represented in binary code.

The desktop interface ultimately provides us with an easy and intuitive way of accomplishing these various information processing tasks because trying to do so in the most “truthful” way — by literally manually manipulating every diode, resistor, and transistor to accomplish the same task — would be far more cumbersome and less effective than using the interface.  Therefore the interface, by hiding this truth from us, allows us to “navigate” through that computational world with more fitness.  In this case, having more fitness simply means being able to accomplish information processing goals more easily, with fewer resources, etc.

Hoffman goes on to say that even though we shouldn’t take the desktop interface literally, obviously we should still take it seriously, because moving that folder to the trash bin can have direct implications on our lives, by potentially destroying months worth of valuable work on a manuscript that is contained in that folder.  Likewise we should take our perceptions seriously, even if we don’t take them literally.  We know that stepping in front of a moving train will likely end our conscious experience even if it is for causal reasons that we have no epistemic access to via our perception, given the species-specific “desktop interface” that evolution has endowed us with.

Relevance to the Mind-body Problem

The crucial point with this analogy is the fact that if our knowledge was confined to the desktop interface of the computer, we’d never be able to ascertain the underlying reality of the “computer”, because all that information that we don’t need to know about that underlying reality is hidden from us.  The same would apply to our perception, where it would be epistemically isolated from the underlying objective reality that exists.  I want to add to this point that even though it appears that we have found the underlying guts of our consciousness, i.e., the findings in neuroscience, it would be mistaken to think that this approach will conclusively answer the mind-body problem because the interface that we’ve used to discover our brains’ underlying neurobiology is still the “desktop” interface.

So while we may think we’ve found the underlying guts of “the computer”, this is far from certain, given both the plausibility of and the support for this theory.  This may end up being the reason why many philosophers claim there is a “hard problem” of consciousness, one that can’t be solved.  It could be that we are simply stuck in the desktop interface, with no way to find out about the underlying reality that gives rise to that interface.  All we can do is maximize our knowledge of the interface itself, and that would be our epistemic boundary.

Predictions of the Theory

Now if this were just a fancy idea put forward by Hoffman, it would be interesting in its own right, but the fact that it is supported by evolutionary game theory and genetic algorithm simulations shows that the theory is more than plausible.  Even better, the theory is actually a scientific theory (and not just a hypothesis), because it has made falsifiable predictions as well.  It predicts that “each species has its own interface (with some similarities between phylogenetically related species), almost surely no interface performs reconstructions (read the second link for more details on this), each interface is tailored to guide adaptive behavior in the relevant niche, much of the competition between and within species exploits strengths and limitations of interfaces, and such competition can lead to arms races between interfaces that critically influence their adaptive evolution.”  The theory predicts that interfaces are essential to understanding evolution and the competition between organisms, whereas the reconstruction theory makes such understanding impossible.  Thus, evidence of interfaces should be widespread throughout nature.

In his paper, he mentions the jewel beetle as a case in point.  This beetle has a perceptual category, desirable females, which works well in its niche, and it uses it to choose larger females because they are the best mates.  According to the reconstructionist thesis, the male’s perception of desirable females should incorporate a statistical estimate of the true sizes of the most fertile females, but it doesn’t do this.  Instead, it has a category based on “bigger is better”, and although this bestows highly fit behavior on the male beetle in its evolutionary niche, if it comes into contact with a “stubbie” beer bottle, it falls into an infinite loop, drawn to this supernormal stimulus because it is smooth, brown, and extremely large.  We can see that the “bigger is better” perceptual category relies on less information about the true nature of reality and instead chooses an “informational shortcut”.  The evidence of supernormal stimuli, which have been found in many species, further supports the theory and counts against the reconstructionist claim that perceptual categories estimate the statistical structure of the world.

More on Conscious Realism (Consciousness is all there is?)

This last link provided here shows the mathematical formalism of Hoffman’s conscious realist theory as proved by Chetan Prakash.  It contains a thorough explanation of the conscious realist theory (which goes above and beyond the interface theory of perception) and it also provides answers to common objections put forward by other scientists and philosophers on this theory.

Darwin’s Big Idea May Be The Biggest Yet

Back in 1859, Charles Darwin released his famous theory of evolution by natural selection whereby inherent variations in the individual members of some population of organisms under consideration would eventually lead to speciation events due to those variations producing a differential in survival and reproductive success and thus leading to the natural selection of some subset of organisms within that population.  As Darwin explained in his On The Origin of Species:

If during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and I think this cannot be disputed; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, I think it would be a most extraordinary fact if no variation ever had occurred useful to each being’s own welfare, in the same way as so many variations have occurred useful to man. But if variations useful to any organic being do occur, assuredly individuals thus characterised will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance they will tend to produce offspring similarly characterised. This principle of preservation, I have called, for the sake of brevity, Natural Selection.

While Darwin’s big idea completely transformed biology in terms of it providing (for the first time in history) an incredibly robust explanation for the origin of the diversity of life on this planet, his idea has since inspired other theories pertaining to perhaps the three largest mysteries that humans have ever explored: the origin of life itself (not just the diversity of life after it had begun, which was the intended scope of Darwin’s theory), the origin of the universe (most notably, why the universe is the way it is and not some other way), and also the origin of consciousness.

Origin of Life

In order to solve the first mystery (the origin of life itself), geologists, biologists, and biochemists are searching for plausible models of abiogenesis, whereby the general scheme of these models would involve chemical reactions (pertaining to geology) that would have begun to incorporate certain kinds of energetically favorable organic chemistries such that organic, self-replicating molecules eventually resulted.  Now, where Darwin’s idea of natural selection comes into play with life’s origin is in regard to the origin and evolution of these self-replicating molecules.  First of all, for any molecule at all to build up in concentration, conditions must be such that the reaction leading to the production of the molecule in question is more favorable than the reverse reaction, in which the product transforms back into the initial starting materials.  If merely one chemical reaction (out of the countless number of reactions occurring on the early earth) led to a self-replicating product, this would increasingly favor the production of that product, and thus self-replicating molecules themselves would be naturally selected for.  Once one of them was produced, there would have been a cascade effect of exponential growth, at least up to the limit set by the availability of the starting materials and energy sources present.

Now if we assume that at least some subset of these self-replicating molecules (if not all of them) had an imperfect copying fidelity (which is highly likely), and/or underwent even slight changes after replication by reacting with neighboring molecules (also likely), this would provide them with a means of mutation.  Mutations would inevitably make some molecules more effective self-replicators than others, and then evolution through natural selection would take off, eventually leading to modern RNA and DNA.  So not only does Darwin’s big idea account for the evolution of the diversity of life on this planet; the underlying principle of natural selection would also account for the origin of self-replicating molecules in the first place, and subsequently the origin of RNA and DNA.
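To make this selection dynamic concrete, here is a deliberately toy simulation, a numerical sketch rather than a chemical model; the replication rates, carrying capacity, and mutation fraction are invented purely for illustration:

```python
def simulate(rates, steps=300, capacity=10_000.0, mut=0.01):
    """Deterministic toy model of molecular selection: variant i replicates
    at rate rates[i], limited by a shared pool of 'starting materials';
    a small fraction `mut` of offspring 'mutates' into adjacent variants."""
    counts = [1.0] * len(rates)
    for _ in range(steps):
        total = sum(counts)
        limit = max(0.0, 1.0 - total / capacity)  # resource limitation
        births = [c * r * limit for c, r in zip(counts, rates)]
        for i, b in enumerate(births):
            counts[i] += b * (1.0 - mut)   # faithful copies
            for j in (i - 1, i + 1):       # imperfect copies become neighbors
                if 0 <= j < len(rates):
                    counts[j] += b * mut / 2
    return counts

# The fastest replicator comes to dominate the pool,
# while total growth saturates at the resource limit.
final = simulate([0.05, 0.10, 0.20])
```

Even with mutation constantly leaking offspring between variants, the variant with the highest replication rate ends up as the overwhelming majority, which is the "natural selection of self-replicators" described above in miniature.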

Origin of the Universe

Another grand idea gaining heavy traction in cosmology is inflationary cosmology, which posits that the early universe underwent a period of extremely rapid expansion.  Due to quantum mechanical fluctuations in the microscopically sized inflationary region, seed universes would have resulted, each with slightly different properties, one of which expanded into the universe we live in.  Inflationary cosmology is currently well supported because it has made a number of predictions, many of which have already been confirmed by observation (it explains many large-scale features of our universe, such as its homogeneity, isotropy, and flatness).  What I find most interesting about inflationary theory is that it predicts the existence of a multiverse, whereby ours is but one of an extremely large number of universes (predicted to be on the order of 10^500, if not an infinite number), each with slightly different constants and so forth.

Once again, Darwin’s big idea, when applied to inflationary cosmology, leads to the conclusion that our universe is the way it is because it was, in a sense, naturally selected to be that way.  The fact that its constants fall within the very narrow range that allows matter to form makes perfect sense: even if an infinite number of universes exist with different constants, we could only expect to find ourselves in one whose constants lie within the range necessary for matter, let alone life, to exist.  Any universe that harbors matter, let alone life, would be naturally selected for over all the universes that lacked the right properties, including, for example, universes with too high or too low a cosmological constant (those that would have instantly collapsed into a Big Crunch, or expanded toward heat death far too quickly for any matter or life to form), or universes without a strong nuclear force sufficient to hold atomic nuclei together, or any number of other combinations that wouldn’t work.  So any universe containing intelligent life capable of even asking about its origins must necessarily have its properties within the required range (this is often referred to as the anthropic principle).

After our universe formed, the same principle would also apply to each galaxy and to each solar system within those galaxies: because variations exist across galaxies and across their constituent solar systems (differential properties analogous to different genes in a gene pool), only those with an acceptable range of conditions are capable of harboring life.  With over 10^22 stars in the observable universe (an unfathomably large number), and billions of years to evolve different conditions within each solar system surrounding those stars, it isn’t surprising that the temperature and other conditions would eventually become suitable for liquid water and organic chemistries in many of those solar systems.  Even if there were only one life-permitting planet per galaxy (on average), that would add up to over 100 billion life-permitting planets in the observable universe alone (with many orders of magnitude more in the non-observable universe).  So given enough time, and given some mechanism of variation (in this case, differences in star composition and dynamics), natural selection in a sense can also account for the evolution of solar systems that do in fact have life-permitting conditions in a universe such as our own.
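The back-of-envelope arithmetic behind these figures is easy to lay out explicitly; all the numbers below are rough order-of-magnitude assumptions, not measurements:

```python
# Order-of-magnitude estimates, for illustration only
stars_observable    = 1e22   # ~10^22 stars in the observable universe
galaxies_observable = 2e11   # roughly 100-200 billion galaxies
planets_per_galaxy  = 1      # pessimistic: one life-permitting planet per galaxy

# Even under the pessimistic assumption, the count is enormous
life_permitting  = galaxies_observable * planets_per_galaxy
stars_per_galaxy = stars_observable / galaxies_observable

print(f"~{life_permitting:.0e} life-permitting planets")  # well over 100 billion
print(f"~{stars_per_galaxy:.0e} stars per galaxy")
```

The point is simply that multiplying any non-zero per-galaxy probability by a number this large yields a vast absolute count of candidate worlds.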

Origin of Consciousness

The last significant mystery I’d like to discuss involves the origin of consciousness.  While there are many current theories pertaining to different aspects of consciousness, and while much research has been performed in the neurosciences, cognitive sciences, psychology, etc., on how the brain works and how it correlates with various aspects of the mind, the brain sciences (neuroscience in particular) are in their relative infancy, and so many questions remain unanswered.  One promising theory that has already been shown to account for many aspects of consciousness is Gerald Edelman’s theory of neuronal group selection (NGS), otherwise known as neural Darwinism (ND), which is a large-scale theory of brain function.  As one might expect from the name, the mechanism of natural selection is integral to this theory.  As summarized on Wikipedia, the basic idea of ND consists of three parts:

  1. Anatomical connectivity in the brain occurs via selective mechanochemical events that take place epigenetically during development.  This creates a diverse primary neurological repertoire by differential reproduction.
  2. Once structural diversity is established anatomically, a second selective process occurs during postnatal behavioral experience through epigenetic modifications in the strength of synaptic connections between neuronal groups.  This creates a diverse secondary repertoire by differential amplification.
  3. Re-entrant signaling between neuronal groups allows for spatiotemporal continuity in response to real-world interactions.  Edelman argues that thalamocortical and corticocortical re-entrant signaling are critical to generating and maintaining conscious states in mammals.

In a nutshell, the basic differentiated structure of the brain that forms in early development is accomplished through cellular proliferation, migration, distribution, and branching processes, with selection operating on random differences in the adhesion molecules these processes use to bind one neuronal cell to another.  These crude selection processes result in a rough initial configuration that is for the most part fixed.  However, because there is a diverse set of hierarchical arrangements of neurons across various neuronal groups, there are bound to be functionally equivalent groups of neurons that are not equivalent in structure, yet are all capable of responding to the same types of sensory input.  Because some of these groups should in theory be better than others at responding to a particular type of sensory stimulus, this creates a form of neuronal/synaptic competition in the brain, whereby the groups of neurons that happen to have the best synaptic efficiency for the stimuli in question are naturally selected over the others.  This in turn increases the probability that the same network will respond to similar or identical signals in the future.  Each time this occurs, synaptic strengths increase in the most efficient networks for each particular type of stimulus, and this would account for a relatively quick level of neural plasticity in the brain.
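This competition-and-reinforcement loop can be caricatured in a few lines of code.  The following is a toy winner-take-all sketch, not Edelman's actual model; the efficiencies, learning rate, and trial count are invented for illustration:

```python
def compete(efficiencies, trials=500, lr=0.05):
    """Toy winner-take-all model of synaptic competition: several
    functionally equivalent neuronal groups respond to the same stimulus;
    on each trial the group with the strongest response has its synaptic
    weight reinforced, making it more likely to win again."""
    weights = [1.0] * len(efficiencies)
    for _ in range(trials):
        # response = intrinsic synaptic efficiency * current weight
        responses = [e * w for e, w in zip(efficiencies, weights)]
        winner = responses.index(max(responses))
        weights[winner] *= (1.0 + lr)  # strengthen the winning group's synapses
    return weights
```

Starting from equal weights, the group with the best intrinsic efficiency wins the first trial and then keeps winning, so only its synapses are strengthened, which mirrors the "increased probability of responding to similar signals in the future" described above.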

The last aspect of the theory involves what Edelman called re-entrant signaling, whereby simultaneous sampling of stimuli across functionally different groups of neurons leads to a form of self-organizing intelligence.  This would help explain how we experience spatiotemporal consistency across our sensory streams.  Basically, functionally different parts of the brain, such as visual maps pertaining to color versus others pertaining to orientation or shape, would become linked such that the previously segregated regions function in parallel and correlate with one another, integrating the different types of neural maps.  Once this re-entrant signaling is accomplished between higher-order or higher-complexity maps in the brain, such as those pertaining to value-dependent memory storage centers, language centers, and perhaps back to various sensory cortical regions, this would create an even richer level of synchronization, possibly leading to consciousness (according to the theory).  In all aspects of the theory, the natural selection of differentiated neuronal structures, of synaptic connections and strengths, and eventually of larger re-entrant connections would be responsible for creating the parallel and correlated processes in the brain believed to be required for consciousness.  There has been an increasing amount of evidence accumulating in support of this theory.  In any case, it is a brilliant idea, and one with a lot of promise in potentially explaining one of the most fundamental aspects of our existence.

Darwin’s Big Idea May Be the Biggest Yet

In my opinion, Darwin’s theory of evolution through natural selection was perhaps the most profound theory ever discovered.  I’d even say that it beats Einstein’s theory of relativity because of its massive explanatory scope and carryover into other disciplines, such as cosmology, neuroscience, and even immunology (see Edelman’s Nobel Prize-winning work on the immune system, which supported the view that antibody diversity arises through a selection process rather than some type of re-programming/learning).  Based on the basic idea of natural selection, we have been able to provide a number of robust explanations pertaining to why the universe is likely to be the way it is, how life likely began, and how it evolved afterward, and it may possibly be the answer to how life eventually evolved brains capable of being conscious.  It is truly one of the most fascinating principles I’ve ever learned about, and I’m honestly awestruck by its beauty, simplicity, and explanatory power.

The Origin and Evolution of Life: Part II

Even though life appears to be favorable in terms of the second law of thermodynamics (as explained in part one of this post), there have still been very important questions left unanswered regarding the origin of life including what mechanisms or platforms it could have used to get itself going initially.  This can be summarized by a “which came first, the chicken or the egg” dilemma, where biologists have wondered whether metabolism came first or if instead it was self-replicating molecules like RNA that came first.

On the one hand, some have argued that since metabolism depends on proteins, enzymes, and the cell membrane itself, it would require either RNA or DNA to code for those proteins, implying that RNA or DNA would have to originate before metabolism could begin.  On the other hand, even the generation and replication of RNA or DNA requires a catalytic substrate of some kind, usually proteins along with metabolic driving forces to accomplish those polymerization reactions, which would seem to imply that metabolism along with some enzymes would be needed to drive the polymerization of RNA or DNA.  So biologists were left with quite a conundrum.  This was partially resolved several decades ago, when it was realized that RNA can not only store genetic information just like DNA, but can also catalyze chemical reactions just as protein enzymes do (such catalytic RNAs are now called ribozymes).  Thus, it is feasible that RNA could act as both an information-storage molecule and an enzyme.  While this helps to solve the problem of how RNA could self-replicate and evolve over time, the problem remains of how the first molecules of RNA formed, because it seems that some kind of non-RNA metabolic catalyst would be needed to drive this initial polymerization, which brings us back to needing some kind of catalytic metabolism to drive these initial reactions.

These RNA polymerization reactions may have spontaneously formed on their own (or evolved from earlier self-replicating molecules that predated RNA), but current models of the pre-biotic earth of around four billion years ago suggest that there would have been too many destructive chemical reactions for any RNA to accumulate, and these would likely have suppressed other self-replicating molecules as well.  What seems to be needed, then, is some kind of catalyst that could create such molecules quickly enough for them to accumulate in spite of any destructive reactions present, and/or some kind of physical barrier (like a cell wall) that protects the RNA or other self-replicating polymers from those destructive processes.

One possible solution to this puzzle, which has been developing over the last several years, involves alkaline hydrothermal vents.  We didn’t even know that these kinds of vents existed until the year 2000, when they were discovered on a National Science Foundation expedition in the mid-Atlantic (the Lost City hydrothermal field), and a few years later they were studied more closely to see what kinds of chemistries they involved.  Unlike the more well-known “black smoker” vents (discovered in the 1970s), these alkaline hydrothermal vents have several properties that would have been hospitable to the emergence of life during the Hadean eon (between 4.6 and 4 billion years ago).

The ocean water during the Hadean eon would have been much more acidic due to the higher concentration of dissolved carbon dioxide (forming carbonic acid), and this acidic ocean water would have mixed with the hydrogen-rich alkaline water found within the vents, forming a natural proton gradient across the naturally formed pores of these rocks.  Electron transfer would also likely have occurred when the hydrogen- and methane-rich vent fluid contacted the carbon dioxide-rich ocean water, generating an electrical gradient.  This is already very intriguing, because all living cells ultimately derive their metabolic driving forces from proton gradients, or more generally from the flow of some kind of positive charge carrier and/or electrons.  Since the rock found in these vents undergoes a process called serpentinization, which spreads the rock apart into various small channels and pockets, many different kinds of pores form in the rocks, and some of them would have had very thin walls separating the acidic ocean water from the alkaline, hydrogen-rich vent fluid.  This would have provided the kind of semi-permeable barrier that modern cells have, and which we expect the earliest proto-cells to have had, along with the necessary source of energy to power various chemical reactions.
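To get a rough sense of how much energy such a gradient offers, we can evaluate the standard chemiosmotic formula for the free energy released per mole of protons flowing down a pH gradient.  The pH values and temperature below are hypothetical Hadean numbers chosen for illustration, not measured figures:

```python
R = 8.314     # J/(mol*K), gas constant
T = 298.0     # K; assumed for illustration (vent fluids were warmer)
F = 96485.0   # C/mol, Faraday constant

def proton_gradient_energy(pH_outside, pH_inside, delta_psi=0.0):
    """Free energy (J per mole of protons) available from a proton gradient:
    dG = 2.303*R*T*dpH + F*dpsi, where dpsi is any electrical potential
    across the barrier (in volts); here we consider the pH term alone."""
    delta_pH = pH_inside - pH_outside
    return 2.303 * R * T * delta_pH + F * delta_psi

# Hypothetical values: acidic ocean ~pH 6 outside, alkaline vent fluid ~pH 10 inside
dG = proton_gradient_energy(pH_outside=6.0, pH_inside=10.0)
print(f"~{dG / 1000:.1f} kJ per mole of protons")  # roughly 22.8 kJ/mol
```

A four-unit pH difference thus yields on the order of tens of kJ per mole of protons, a biologically meaningful amount of energy, which is why these natural gradients are considered such a plausible power source for proto-metabolism.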

Additionally, these vents would have provided a source of minerals (namely green rust and molybdenum) that likely would have behaved like enzymes, catalyzing reactions as various chemicals came into contact with them.  The green rust could have allowed the proton gradient to be used to generate phosphate-containing molecules that stored the energy produced from the gradient, similar to how all known living systems store their energy in ATP (adenosine triphosphate).  The molybdenum, on the other hand, would have assisted in electron transfer through those membranes.

So this theory provides a very plausible way for catalytic metabolism as well as proto-cellular membrane formation to have resulted from natural geological processes.  These proto-cells would then likely have begun concentrating simple organic molecules formed from the reaction of CO2 and H2 with all the enzyme-like minerals that were present.  These molecules could then react with one another to polymerize and form larger and more complex molecules, eventually including nucleotides and amino acids.  One promising clue supporting this theory is the fact that every living system on earth is known to share a common metabolic pathway, the citric acid cycle (or Krebs cycle), which operates in the forward direction in aerobic organisms and in the reverse direction in anaerobic organisms.  Since this cycle consists of only 11 molecules, and since all biological components and molecules that we know of in any species are made from some number or combination of these 11 fundamental building blocks, scientists are trying to test (among other things) whether mimicking these alkaline hydrothermal vent conditions, along with the acidic ocean water that would have been present in the Hadean eon, will precipitate some or all of these molecules.  If it does, it will show that this theory is more than plausible as an account of the origin of life.

Once these basic organic molecules were generated, proteins would eventually have been able to form, some of which could have made their way to the membrane surface of the pores and acted as pumps directing the natural proton gradient to do useful work.  Once those proteins evolved further, it would have been possible and advantageous for the membranes to become less permeable, so that the gradient could be focused on the pump channels in the membranes of these proto-cells.  The membrane could have begun to change into one made from lipids produced by the metabolic reactions, and we already know that lipids readily form micelles (small closed spherical structures) when they aggregate in aqueous conditions.  As this occurred, the proto-cells would no longer have been trapped in the porous rock, but could eventually have migrated away from the vents altogether, eventually forming the phospholipid bilayer cell membranes that we see in modern cells.  Once this got started, self-replicating molecules and the rest of the cell’s evolution would have undergone natural selection as per the Darwinian evolution that most of us are familiar with.

As per the earlier discussion regarding life serving as entropy engines and energy dissipation channels, this self-replication would have been favored thermodynamically as well, because replicating those entropy engines and energy dissipation channels only makes them more effective at dissipating energy.  Thus, we can tie this all together: natural geological processes would have provided the required metabolism, powering organic molecular synthesis and polymerization, with all of these processes serving to increase entropy and maximize energy dissipation.  All that was needed to initiate this was a planet with common minerals, water, and CO2; natural geological processes could do the rest of the work.  These kinds of planets actually seem to be fairly common in our galaxy, with estimates ranging in the billions, each potentially harboring life (or where it may be just a matter of time before life initiates and evolves, if it hasn’t already).  While there is still a lot of work to be done to confirm the validity of these models and to find ways of testing them rigorously, we are getting relatively close to solving the puzzle of how life originated, why it is the way it is, and how we can better search for it in other parts of the universe.

A Scientific Perspective of the Arts

Science and the arts have long been regarded as mutually exclusive domains, where many see artistic expression as something that science can’t explain or reduce in any way, or as something that just shouldn’t be explored by any kind of scientific inquiry.  To put it another way, many people have thought it impossible for there to ever be any kind of a “science of the arts”.  The way I see it, science isn’t something that can be excluded from any domain at all, because we apply science in a very general way every time we learn or conceive of new ideas, experiment with them, and observe the results to determine if we should modify our beliefs based on those experiences.  Whenever we pose a question about anything we experience, in the attempt to learn something new and gain a better understanding about those experiences, a scientific approach (based on reason and the senses) is the only demonstrably reliable way we’ve ever been able to arrive at any kind of meaningful answer.  The arts are no exception to this, and in fact, many questions that have been asked about the arts and aesthetics in general have not only been answered by an application of the aforementioned general scientific reasoning that we use every day, but have in fact also been answered within many specific well-established branches of science.

Technology & The Scientific Method

It seems to me that the sciences, and the various rewards we’ve reaped from them, have influenced art in a number of ways and even facilitated new forms of artistic expression.  For example, science has been applied to create the very technologies used in producing art.  The various technologies created through the application of science have been used to produce new sounds (and new combinations thereof), new colors (and new color gradients), new shapes, and various other novel visual effects.  We’ve even used them to produce new tastes and smells (in the culinary arts, for example).  They’ve also been used to create entirely new media through which art is exemplified.  So in a large number of ways, art has depended on science in some way or another, even in the simple sense of the artist applying the scientific method: hypothesizing a way to express something, perhaps through a new medium or technique; experimenting with that medium or technique to see if it is satisfactory; and modifying the hypothesis as needed until the desired result is obtained (whether through simple trial and error or otherwise).

Evolutionary Factors Influencing Aesthetic Preferences

Then we have the questions pertaining to whether aesthetic preferences are solely subjective and individualistic, or whether they are also objective in some ways.  Some of these questions have in fact been explored within evolutionary biology and psychology (and within psychology in general), where it is well known that humans find certain types of perceptions pleasurable, such as environments and objects that are conducive to our survival.  For example, the majority of people enjoy visually perceiving an abundance of food, fresh water and lush vegetation, healthy social relationships (including sex), and various emotions.  There are also various sounds, smells, tastes, and even tactile sensations that we’ve evolved to find pleasurable, such as the sound of laughter, flowing water, or rain, the taste of salt, fat, and sugar, the smell of various foods and plants, or the tactile sensation of sexual stimulation (to give but a few examples).  So it’s not surprising that many forms of art can appeal to the majority of people by employing these kinds of objects and environments, especially in cases where these sources of pleasurable sensation are artificially amplified into supernormal stimuli, producing levels of pleasure not previously attainable in the natural environment our senses evolved within.

Additionally, there are certain emotions that we’ve evolved to express as well as understand simply because they increase our chances of survival within our evolutionary niche, and thus artistic representations of these types of universal human emotions will also likely play a substantial role in our aesthetic preferences.  Even the evolved traits of empathy and sympathy, which are quite advantageous to a social species such as our own (due to them reinforcing cooperation and reciprocal altruism among other benefits), are employed by those that are perceiving and appreciating these artistic expressions.

Another possible evolutionary component of our appreciation of art has to do with sexual selection.  Oftentimes, particular works of art are appreciated not only because of the emotions they evoke in the person perceiving them, but also because they include clever uses of metaphor, allegory, poetry, and other devices that demonstrate significant intelligence or brilliance in the artist who produced them.  In terms of our evolutionary history, such skills and displays of intelligence would be attractive to prospective sexual mates for a number of reasons, including the fact that they demonstrate a surplus of mental capacity for solving problems far beyond those typically encountered day to day.  So this can provide a rather unique way of demonstrating particular aspects of one’s fitness to survive, as well as one’s ability to protect any future offspring.

Artistic expression (as well as other displays of intelligence and surplus mental capacity) can be seen as analogous to the male peacock’s large and vibrant tail.  Even though this type of tail increases its chances of being caught by a predator, if it has survived to reproductive age and beyond, it shows the females that the male has a very high fitness despite these odds being stacked against him.  It also shows that the male is fit enough to possess a surplus of resources from its food intake that are continually donated to maintaining that tail.  Beyond this, a higher degree of symmetry in the tail (the visual patterns within each feather, the morphology of each feather, and the uniformity of the feathers as a whole set) demonstrates a lower number of mutations in its genome, thus providing better genes for any future offspring.  Because of all these factors, the female has evolved to find these male attributes attractive.

Similarly, for human beings (both male and female), an intelligent brain that is able to produce brilliant expressions of art (among other feats of intelligence) suggests that the individual’s genome is likely to have fewer mutations in it.  This is especially apparent once we realize that the genes pertaining to our brain’s development and function account for a full 50% of our total genome.  So if someone is intelligent, since their highly functional brain depended on having few mutations in the portion of the genome pertaining to the brain, the rest of their genome is also far less likely to carry harmful mutations (and thus fewer mutations passed on to future offspring).  Art aside, this kind of sexual selection is actually one prominent theory within evolutionary biology to explain why our brains grew as quickly, and as large, as they did.  Quite simply, if larger brains were something that both males and females found sexually attractive (through the feats of intelligence they could produce), they would be sexually selected for, leading to higher survival rates for offspring and a runaway effect of unprecedented brain growth.  These aesthetic preferences would then likely carry over to general displays of artistic ability, no longer pertaining exclusively to the search for prospective sexual mates, but also to simply enjoying the feats of intelligence themselves, regardless of the source.  So there are many interesting facets pertaining to the evolutionary factors that likely influenced the origin of artistic expression (or at least the origin of our mental capacity for it).

Neuroscience & The Arts

One final aspect I’d like to discuss, pertaining to the arts within the context of the sciences, lies in the realm of neuroscience.  As neuroscientists progress in mapping the brain’s structure and activity, they are becoming better able to determine what kinds of neurological conditions are correlated with various aspects of our conscious experience, our personality, and our behavior in general.  As for how this relates to the arts, we should also eventually be able to determine why we have the aesthetic preferences we do, whether they are based on various neurological predispositions, the emotional tagging of past experiences via the amygdala (and how the memory of those emotionally tagged experiences changes over time), possible differences in individual sensitivities to particular stimuli, etc.

Once we get to this level of understanding of the brain itself, when we combine it with the conjoined efforts of other scientific disciplines such as anthropology, archaeology, evolutionary biology and psychology, etc., and if we collaborate with experts in the arts and humanities themselves, we should definitely be able to answer a plethora of questions relating to the origin of art, how and why it has evolved over time as it has (and how it will likely continue to evolve given that our brains as well as our culture are continually evolving in parallel), how and why the arts affect us as they do, etc.  With this kind of knowledge developing in these fields, we may even one day see artists producing art by utilizing this knowledge in very specific and articulate ways, in order to produce expressions that are the most aesthetically pleasing, the most intellectually stimulating, and the most emotionally powerful that we’ve ever experienced, by design.  I think that by putting all of this knowledge together, we would effectively have a true science of the arts.

The arts have no doubt been a fundamental facet of the human condition, and I’m excited to see us beginning to learn the answers to these truly remarkable questions.  I’m hoping that the arts and the sciences can better collaborate with one another, rather than remain relatively alienated from one another, so that we can maximize the knowledge we gain in order to answer these big questions more effectively.  We may begin to see some truly remarkable changes in how the arts are performed and produced based on this knowledge, and this should only enhance the pleasure and enjoyment that they already bring to us.