“Black Mirror” Reflections: U.S.S. Callister (S4, E1)

This Black Mirror reflection will explore season 4, episode 1, titled “U.S.S. Callister”.  You can click here to read my last Black Mirror reflection (season 3, episode 2: “Playtest”).  In U.S.S. Callister, we’re pulled into the life of Robert Daly (Jesse Plemons), the Chief Technical Officer at Callister Inc., a game development company that has produced a multiplayer simulated-reality game called Infinity.  Within this game, users control a starship (an obvious homage to Star Trek), although Daly, the brilliant programmer behind this revolutionary game, has his own offline version, modded to look like his favorite TV show, Space Fleet, with Daly as the Captain of the ship.

We quickly learn that most of the employees at Callister Inc. don’t treat Daly very kindly, including his company’s co-founder James Walton (Jimmi Simpson).  Daly appears to be an overly passive, shy introvert.  During one of Daly’s offline gaming sessions at home, we come to find out that the Space Fleet characters aboard the starship look just like his fellow employees, and as Captain of his simulated crew, he indulges in berating them all.  Because Daly is Captain and effectively controls the game, he is rendered nearly omnipotent, able to force his crew to perpetually bend to his will, lest they suffer immensely.

It turns out that these characters in the game are actually conscious, created from Daly having surreptitiously acquired his co-workers’ DNA and somehow replicated their consciousness and memories and uploaded them into the game (and any biologists or neurologists out there, let’s just forget for the moment that DNA isn’t complex enough to store this kind of neurological information).  At some point, a wrench is thrown into Daly’s deviant exploits when a new co-worker, programmer Nanette Cole (Cristin Milioti), is added to his game and manages to turn the crew against him.  Once the crew finds a backdoor means of communicating with the world outside the game, Daly’s world is turned upside down with a relatively satisfying ending chock-full of poetic justice, as his digitized, enslaved crew members manage to escape while he becomes trapped inside his own game as it’s being shut down and destroyed.


This episode is rife with a number of moral issues that build on one another, all deeply coupled with the ability to engineer a simulated reality (perceptual augmentation).  Virtual worlds carry a level of freedom that just isn’t possible in the real world, where one can behave in countless ways with little or no consequence, whether acting with beneficence or utter malice.  One can violate physical laws as well as prescriptive laws, opening up a new world of possibilities that are free to evolve without the feedback of social norms, legal statutes, and law enforcement.

People have long known about various ways of escaping social and legal norms through fiction and game playing, where one can imagine they are somebody else, living in a different time and place, and behave in ways they’d never even think of doing in the real world.

But what happens when they’re finished with the game and go back to the real world with all its consequences, social norms and expectations?  Doesn’t it seem likely that at least some of the behaviors cultivated in the virtual world will begin to rear their ugly heads in the real world?  One can plausibly argue that violent game playing is simply a form of psychological sublimation, where we release many of our irrational and violent impulses in a way that’s more or less socially acceptable.  But there’s also bound to be a difference between playing a very abstract game involving violence or murder, such as the classic board-game Clue, and playing a virtual reality game where your perceptions are as realistic as can be and you choose to murder some other character in cold blood.

Clearly in this episode, Daly was using the simulated reality as a means of releasing his anger and frustration, by taking it out on reproductions of his own co-workers.  And while a simulated experiential alternative could be healthy in some cases, in terms of its therapeutic benefit and better control over the consequences of the simulated interaction, we can see that Daly took advantage of his creative freedom, and wielded it to effectively fashion a parallel universe where he was free to become a psychopath.

It would already be troubling enough if Daly behaved as he did to virtual characters that were not actually conscious, because it would still show Daly pretending that they are conscious; a man who wants them to be conscious.  But the fact that Daly knows they are conscious makes him that much more sadistic.  He is effectively in the position of a god, given his powers over the simulated world and every conscious being trapped within it, and he has used these powers to generate a living hell (thereby also illustrating the technology’s potential to, perhaps one day, generate a living heaven).  But unlike the hell we hear about in myths and religious fables, this is an actual hell, where a person can suffer the worst fates imaginable (it is in fact only limited by the programmer’s imagination) such as experiencing the feeling of suffocation, yet unable to die and thus with no end in sight.  And since time is relative, in this case based on the ratio of real time to simulated time (or the ratio between one simulated time and another), a character consciously suffering in the game could feel as if they’ve been suffering for months, when the god-like player has only felt several seconds pass.  We’ve never had to morally evaluate these kinds of situations before, and we’re getting to a point where it’ll be imperative for us to do so.
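To make the time-dilation point concrete, here’s a minimal sketch (the numbers are my own and purely illustrative, not anything specified in the episode) of how subjective time inside a simulation scales with the ratio of simulated clock speed to real clock speed:

```python
# A minimal sketch of simulated time dilation: subjective time experienced
# inside the game scales with the ratio of simulated seconds per real second.
# The specific ratio below is a made-up, illustrative assumption.

def subjective_time(real_seconds: float, time_ratio: float) -> float:
    """Simulated seconds experienced, given real seconds elapsed.

    time_ratio: how many simulated seconds pass per real second.
    """
    return real_seconds * time_ratio

# If the simulation runs a million times faster than real time, ten real
# seconds of the player's indifference feel like months to the crew inside:
sim_seconds = subjective_time(real_seconds=10, time_ratio=1_000_000)
months = sim_seconds / (60 * 60 * 24 * 30)
print(f"{months:.1f} subjective months")  # ~3.9 months
```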

Someday, it’s very likely that we’ll be able to create an artificial form of intelligence that is conscious, and it’s up to us to initiate and maintain a public conversation that addresses how our ethical and moral systems will need to accommodate new forms of conscious moral agents.

Jude Law, Haley Joel Osment, Brendan Gleeson, and Brian Turk in A.I. Artificial Intelligence (2001)

We’ll also need to figure out how to incorporate a potentially superhuman level of consciousness into our moral frameworks, since these frameworks often have an internal hierarchy that is largely based on the degree or level of consciousness that we ascribe to other individuals and to other animals.  If we give moral preference to a dog over a worm, and preference to a human over a dog (for example), then where would a being with superhuman consciousness fit within that framework?  Most people certainly wouldn’t want these beings to be treated as gods, but we shouldn’t want to treat them like slaves either.  If nothing else, they’ll need to be treated like people.

Technologies will almost always have this dual potential: they can be used to achieve truly admirable goals and to enhance human well-being, or to dominate others and exacerbate human suffering.  So we need to ask ourselves, given a future world where we have the capacity to make any simulated reality we desire, what kind of world do we want?  What kind of world should we want?  And what kind of person do you want to be in that world?  Answering these questions should say a lot about our moral qualities, serving as a kind of window into the soul of each and every one of us.


Irrational Man: An Analysis (Part 3, Chapter 8: Nietzsche)

In the last post in this series on William Barrett’s Irrational Man, we began exploring Part 3: The Existentialists, beginning with chapter 7 on Kierkegaard.  In this post, we’ll be taking a look at Nietzsche’s philosophy in more detail.

Ch. 8 – Nietzsche

Nietzsche shared many of the same concerns as Kierkegaard, given the new challenges of modernity brought about by the Enlightenment; but each of these great thinkers approached this new chapter of our history from perspectives that were, in many ways, diametrically opposed.  Kierkegaard had a grand goal of sharing with others what he thought it meant to be a Christian, and he tried to revive Christianity within a society that he found to be increasingly secularized.  He also saw that what Christianity did exist was largely organized and depersonalized, and he felt the need to stress the importance of a much more personal form of the religion.  Nietzsche, on the other hand, an atheist, rejected Christianity and stressed the importance of re-establishing our values, given that modernity now found itself in a godless world; a culturally Christianized world, but one that was now without a Christ.

Despite their differences, both philosophers felt that discovering the true meaning of our lives was paramount to living an authentic life, and this new quest for meaning was largely related to the fact that our view of ourselves had changed significantly:

“By the middle of the nineteenth century, as we have seen, the problem of man had begun to dawn on certain minds in a new and more radical form: Man, it was seen, is a stranger to himself and must discover, or rediscover, who he is and what his meaning is.”

Nietzsche also understood and vividly illustrated the crucial differences between mankind and the rest of nature, most especially our self-awareness:

“It was Nietzsche who showed in its fullest sense how thoroughly problematical is the nature of man: he can never be understood as an animal species within the zoological order of nature, because he has broken free of nature and has thereby posed the question of his own meaning—and with it the meaning of nature as well—as his destiny.”

And for Nietzsche, the meaning one was to find for one’s life included first abandoning what he saw as an oppressive and disempowering Christian morality, which he believed severely inhibited our potential as human beings.  He was much more sympathetic to Greek culture and morality, and even though (perhaps ironically) Christianity itself was derived from a syncretism between Judaism and Greek Hellenism, its moral and psychological differences were too substantial to escape Nietzsche’s criticism.  He thought that a return to a more archaic form of Greek culture and mentality was the solution we needed to restore, or at least re-validate, our instincts:

“Dionysus reborn, Nietzsche thought, might become a savior-god for the whole race, which seemed everywhere to show symptoms of fatigue and decline.”

Nietzsche learned about the god Dionysus while he was studying Greek tragedy, and he gravitated toward the Dionysian symbolism, for it contained a kind of reconciliation between high culture and our primal instincts.  Dionysus was after all the patron god of the Greek tragic festivals and so was associated with beautiful works of art; but he was also the god of wine and ritual madness, bringing people together through the grape harvest and the joy of intoxication.  As Barrett describes it:

“This god thus united miraculously in himself the height of culture with the depth of instinct, bringing together the warring opposites that divided Nietzsche himself.”

It was this window into the hidden portions of our psyche that Nietzsche tried to look through, and it became an integral basis for much of his philosophy.  But the chaotic nature of our species, combined with our immense power to manipulate the environment, also concerned him greatly; he thought that the way modernity represses many of our instincts needed to be circumvented, lest we continue building up this tension in our unconscious only to have it erupt in an explosion of violence:

“It is no mere matter of psychological curiosity but a question of life and death for man in our time to place himself again in contact with the archaic life of his unconscious.  Without such contact he may become the Titan who slays himself.  Man, this most dangerous of the animals, as Nietzsche called him, now holds in his hands the dangerous power of blowing himself and his planet to bits; and it is not yet even clear that this problematic and complex being is really sane.”

Indeed our species has been removed from its natural evolutionary habitat; and we’ve built our modern lives around socio-cultural and technological constructs that have, in many ways at least, inhibited the open channel between our unconscious and conscious mind.  One could even say that the prefrontal cortex, important as it is for conscious planning and decision making, has been effectively hijacked by human culture (memetic selection) and is now constantly running overtime, over-suppressing the emotional centers of the brain (such as the limbic system).

While we’ve come to realize that emotional suppression is sometimes necessary in order to function well as a cooperative social species that depends on complex long-term social relationships, we also need to maintain a healthy dose of emotional expression as well.  If we can’t release this more primal form of “psychological energy” (for lack of a better term), then it shouldn’t be surprising to see it eventually manifest into some kind of destructive behavior.

We ought not forget that we’re just like other animals in terms of our brain being best adapted to a specific balance of behaviors and cognitive functions; and if this balance is disrupted through hyperactive or hypoactive use of one region or another, doesn’t it stand to reason that we may wind up with either an impairment in brain function or less healthy behavior?  If we’re not making a sufficient use of our emotional capacities, shouldn’t we expect them to diminish or atrophy in some way or other?  And if some neurological version of the competitive exclusion principle exists, then this would further reinforce the idea that our suppression of emotion by other capacities may cause those other capacities to become permanently dominant.  We can only imagine the kinds of consequences that ensue when this psychological impairment falls upon a species with our level of power.

1.  Ecce Homo

“In the end one experiences only oneself,” Nietzsche observes in his Zarathustra, and elsewhere he remarks, in the same vein, that all the systems of the philosophers are just so many forms of personal confession, if we but had eyes to see it.”

Here Nietzsche is expressing one of his central ideas: we can’t separate ourselves from our thoughts, and therefore the ideas put forward by any philosopher betray some set of values they hold, whether unconsciously or explicitly, as well as their individual perspective; an unavoidable perspective stemming from subjective experience, which colors their interpretation of those experiences.  Perspectivism, or the view that reality and our ideas about what’s true and what we value can be mapped onto many different possible conceptual schemes, was a big part of Nietzsche’s overall philosophy, and it was interwoven into his ideas on meaning, epistemology, and ontology.

As enlightening as Nietzsche was in giving us a much needed glimpse into some of the important psychological forces that give our lives the shape they have (for better or worse), his written works show little if any psychoanalytical insight applied to himself, in terms of the darker and less pleasant side of his own psyche.

“Nietzsche’s systematic shielding of himself from the other side is relevant to his explanation of the death of God: Man killed God, he says, because he could not bear to have anyone looking at his ugliest side.”

But even if he didn’t turn his psychoanalytical eye inward toward his own unconscious, he was still very effective in shining a light on the collective unconscious of humanity as a whole.  It’s not enough to marvel over the better angels of our nature, important as it is to acknowledge our capacities and our potential for bettering ourselves and the world we live in; we also have to acknowledge our flaws and shortcomings.  And while we may try to overcome these weaknesses in one way or another, some of them are simply a part of our nature, and we should accept them as such even if we don’t like them.

One of the prevailing theories stemming from Western Rationalism (mentioned earlier in Barrett’s book) was that human beings are inherently rational and perfectible, and that realizing this potential was within the realm of possibility; but this kind of wishful thinking relied on a fundamental denial of an essential part of the kind of animal we really are.  Nietzsche did a good job of bringing this fact into the limelight, and of explaining how we couldn’t truly realize our potential until we came to terms with this fairly ugly side of ourselves.

Barrett distinguishes between the attitudes stemming from various forms of atheism, including Nietzsche’s:

“The urbane atheism of Bertrand Russell, for example, presupposes the existence of believers against whom he can score points in an argument and get off some of his best quips.  The atheism of Sartre is a more somber affair, and indeed borrows some of its color from Nietzsche: Sartre relentlessly works out the atheistic conclusion that in a universe without God man is absurd, unjustified, and without reason, as Being itself is.”

In fact Nietzsche was more concerned with the atheists who hadn’t yet accepted the full ramifications of a godless world than with believers themselves.  He felt that those who believed in God still had a more coherent belief system, since many of their moral values and the meaning they attached to their lives were largely derived from their religious beliefs (even if those beliefs were unwarranted or harmful).  Many atheists, on the other hand, hadn’t yet taken the time to build a new foundation for their previously held values and the meaning in their lives; and if they couldn’t rebuild this foundation, then they’d have to change those values, and what they find meaningful, to reflect that shift in foundation.

Overall, Nietzsche even changes the game a little in terms of how the conception of God was to be understood in philosophy:

“If God is taken as a metaphysical object whose existence has to be proved, then the position held by scientifically minded philosophers like Russell must inevitably be valid: the existence of such an object can never be empirically proved.  Therefore, God must be a superstition held by primitive and childish minds.  But both these alternative views are abstract, whereas the reality of God is concrete, a thoroughly autonomous presence that takes hold of men but of which, of course, some men are more conscious than others.  Nietzsche’s atheism reveals the true meaning of God—and does so, we might add, more effectively than a good many official forms of theism.”

Even though God may not actually exist as a conscious being, the idea of God, and the feelings associated with what one labels as an experience of God, are as real as any other idea or experience, and so they should still be taken seriously even in a world where no gods exist.  As an atheist myself (formerly a born-again Protestant Christian), I’ve come to appreciate the real breadth of possible meaning attached to the term “God”.  Even though the God or gods described in various forms of theism do not track onto objective reality, as was pointed out by Russell, this doesn’t negate the subjective reality underlying theism.  People ascribe the label “God” to a set of feelings and forces that they feel are controlling their lives, their values, and their overall vision of the world.  Nietzsche had his own vision of the world as well, and so one could label this as his “god”, despite the confusion this may cause when discussing God in metaphysical terms.

2. What Happens In “Zarathustra”; Nietzsche as Moralist

One of the greatest benefits of producing art is our ability to tap into multiple perspectives that we might otherwise never explore.  And by doing this, we stand a better chance of creating a window into our own unconscious:

“[Thus Spoke Zarathustra] was Nietzsche’s poetic work and because of this he could allow the unconscious to take over in it, to break through the restraints imposed elsewhere by the philosophic intellect. [in poetry] One loses all perception of what is imagery and simile—that is to say, the symbol itself supersedes thought, because it is richer in meaning.”

By avoiding the common philosophical trap of submitting oneself to a preconceived structure and template for processing information, poetry and other works of art can inspire new ideas and novel perspectives through the use of metaphor and allegory, emotion, and the seemingly infinite powers of our own imagination.  Not only is this true for the artist herself, but also for the spectator, who stands in a unique position to interpret what they see and to make sense of it as best they can.  And in the rarest of cases, one may experience something in a work of art that fundamentally changes them in the process.  Either way, when someone is presented with an unusual and ambiguous stimulus that they are left to interpret, they can hope to gain at least some access to their unconscious and some inspiration for expressing their creativity.

In his Thus Spoke Zarathustra, Nietzsche was focusing on how human beings could ever hope to regain a feeling of completeness; for modernity had fractured the individual and placed too much emphasis on collective specialization:

“For man, says Schiller, the problem is one of forming individuals.  Modern life has departmentalized, specialized, and thereby fragmented the being of man.  We now face the problem of putting the fragments together into a whole.  In the course of his exposition, Schiller even referred back, as did Nietzsche, to the example of the Greeks, who produced real individuals and not mere learned abstract men like those of the modern age.”

A part of this reassembly process would necessarily involve reincorporating what Nietzsche thought of as the various attributes of our human nature, which many of us are in denial of (to go back to the point raised earlier in this post).  For Nietzsche, this meant taking on some characteristics that may seem inherently immoral:

“Man must incorporate his devil or, as he put it, man must become better and more evil; the tree that would grow taller must send its roots down deeper.”

Nietzsche seems to be saying that if there are aspects of our psychology that are normal albeit seemingly dark or evil, then we ought to structure our moral values along the same lines rather than in opposition to these more primal parts of our psyche:

“(Nietzsche) The whole of traditional morality, he believed, had no grasp of psychological reality and was therefore dangerously one-sided and false.”

And there is some truth to this, though I wouldn’t go as far as Nietzsche does, as I believe there are many basic and cross-cultural moral prescriptions that do in fact take many of our psychological needs into account.  Most cultures have some place for virtues such as compassion, honesty, generosity, forgiveness, commitment, and integrity, among many others; and these virtues have been shown within the fields of moral psychology and sociology to help us flourish and lead psychologically fulfilling lives.  We evolved as a social species that operates within some degree of a dominance hierarchy, and sure enough our psychological traits are what we’d expect given our evolutionary history.

However, it’s also true that we ought to avoid slipping into some form of naturalistic fallacy; we don’t want to simply assume that all the behaviors that were more common prior to modernity, let alone prior to civilization, were beneficial to us and our psychology.  For example, raping and stealing were far more common per capita before humans established some kind of civil state or society.  And a number of other behaviors that we’ve discovered to be good or bad for our physical or psychological health have been the result of a long process of trial and error, cultural evolution, and the cultural transmission of newfound ideas from one generation to the next.  In any case, we don’t want to assume that just because something has been a tradition, or even a common instinctual behavior, for many thousands or even hundreds of thousands of years, it is therefore moral.  Quite the contrary; instead we need to look closely at which behaviors lead to more fulfilling lives and overall maximal satisfaction, and then aim to maximize those behaviors over those that detract from this goal.

Nietzsche did raise a good point however, and one that’s supported by modern moral psychology; namely, the fact that what we think we want or need isn’t always in agreement with our actual wants and needs, once we dive below the surface.  Psychology has been an indispensable tool in discovering many of our unconscious desires and the best moral theories are going to be those that take both our conscious and unconscious motivations and traits into account.  Only then can we have a moral theory that is truly going to be sufficiently motivating to follow in the long term as well as most beneficial to us psychologically.  If one is basing their moral theory on wishful thinking rather than facts about their psychology, then they aren’t likely to develop a viable moral theory.  Nietzsche realized this and promoted it in his writings, and this was an important contribution as it bridged human psychology with a number of important topics in philosophy:

“On this point Nietzsche has a perfectly sober and straightforward case against all those idealists, from Plato onward, who have set universal ideas over and above the individual’s psychological needs.  Morality itself is blind to the tangle of its own psychological motives, as Nietzsche showed in one of his most powerful books, The Genealogy of Morals, which traces the source of morality back to the drives of power and resentment.”

I’m sure that drives of power and resentment have played some role in certain moral prescriptions and moral systems, but I doubt this is true generally speaking; and even in cases where a moral prescription or system arose out of one of these unconscious drives, such as resentment, this doesn’t negate its validity nor demonstrate whether or not it’s warranted or psychologically beneficial.  The source of morality isn’t as important as whether or not the moral system itself works, though the sources of morality can be both psychologically relevant and revealing.

Our ego certainly has a lot to do with our moral behavior and whether or not we choose to face our inner demons:

“Precisely what is hardest for us to take is the devil as the personification of the pettiest, paltriest, meanest part of our personality…the one that most cruelly deflates one’s egotism.”

If one is able to accept both the brighter and darker sides of themselves, they’ll also be more likely to willingly accept Nietzsche’s idea of the Eternal Return:

“The idea of the Eternal Return thus expresses, as Unamuno has pointed out, Nietzsche’s own aspirations toward eternal and immortal life…For Nietzsche the idea of the Eternal Return becomes the supreme test of courage: If Nietzsche the man must return to life again and again, with the same burden of ill health and suffering, would it not require the greatest affirmation and love of life to say Yes to this absolutely hopeless prospect?”

Although I wouldn’t expect most people to embrace this idea, because it goes against our teleological intuitions of time and causation leading to some final state or goal, it does provide a valuable means of establishing whether or not someone is really able to accept this life for what it is, with all of its absurdities, pains, and pleasures.  If someone willingly accepts the idea, then by extension they likely accept themselves, their lives, and the rest of human history (and even the history of life on this planet, no less).  This doesn’t mean that by accepting the idea of the Eternal Return, people have to put every action, event, or behavior on equal footing, for example, by condoning slavery or the death of millions as a result of countless wars, just because these events are to be repeated for eternity.  But one is still accepting that all of these events were necessary in some sense; that they couldn’t have been any other way, and this acceptance should translate into an attitude toward life that reflects some kind of overarching contentment.

I think that the more common reaction to this kind of idea is slipping into some kind of nihilism, where a person no longer cares about anything, and no longer finds any meaning or purpose in their lives.  And this reaction is perfectly understandable given our teleological intuitions; most people don’t or wouldn’t want to invest time and effort into some goal if they knew that the goal would never be accomplished or knew that it would eventually be undone.  But, oddly enough, even if the Eternal Return isn’t an actual fact of our universe, we still run into the same basic situation within our own lifetimes whereby we know that there are a number of goals that will never be achieved because of our own inevitable death.

There’s also the fact that even if we did achieve a number of goals, once we die, we can no longer reap any benefits from them.  There may be other people that can benefit from our own past accomplishments, but eventually they will die as well.  Eventually all life in the universe will expire, and this is bound to happen long before the inevitable heat death of the universe transpires; and once it happens, there won’t be anybody experiencing anything at all let alone reaping the benefits of a goal from someone’s distant past.  Once this is taken into account, Nietzsche’s idea doesn’t sound all that bad, does it?  If we knew that there would always be time and some amount of conscious experience for eternity, then there will always be some form of immortality for consciousness; and if it’s eternally recurring, then we end up becoming immortal in some sense.

Perhaps the idea of the Eternal Return (or Recurrence), even though it began many hundreds if not a few thousand years before Nietzsche, was motivated by Nietzsche’s own fear of death.  Even though he thought of the idea as horrifying and paralyzing (not least because of all the undesirable parts of our own personal existence), and since he didn’t believe in a traditional afterlife, maybe the fear of death motivated him to adopt such an idea, even though he also saw it as a physically plausible consequence of probability and the laws of physics.  Either way, beyond being far more plausible than the idea of an eternal paradise after death, it’s also a very different idea: not only because one’s memory or identity isn’t preserved such that one actually experiences immortality when the “temporal loop” defining their lifetime starts all over again (i.e. the “movie” of one’s life is just played over and over again, unbeknownst to the person in the movie), but also because it involves accepting the world the way it is for all eternity, rather than revolving around the way one wishes it was.  The Eternal Return fully engages with the existentialist reality of human life, with all its absurdities, contingencies, and the rest.

3. Power and Nihilism

Nietzsche, like Kierkegaard before him, was considered by many to be an unsystematic thinker, though Barrett points out that this view is mistaken when it comes to Nietzsche; it’s a common view resulting from the largely aphoristic form his writings took.  Nietzsche himself even said that he was viewing science and philosophy through the eyes of art, but his work nevertheless eventually revealed a systematic structure:

“As thinking gradually took over the whole person, and everything else in his life being starved out, it was inevitable that this thought should tend to close itself off in a system.”

This systematization of his philosophy was revealed in the notes he was making for The Will to Power, a work he never finished but one that was more or less assembled and published posthumously by his sister Elisabeth.  The systematization was centered on the idea that a will to power was the underlying force that ultimately guided our behavior, striving to achieve the highest possible position in life as opposed to being fundamentally driven by some biological imperative to survive.  But it’s worth pointing out here that Nietzsche didn’t seem to be merely talking about a driving force behind human behavior, but rather he seemed to be referencing an even more fundamental driving force that was in some sense driving all of reality; and this more inclusive view would be analogous to Schopenhauer’s will to live.

It’s interesting to consider this perspective through the lens of biology: if we try to reconcile it with a Darwinian account of evolution in which survival is key, we could surmise that a will to power is but a mechanism for survival.  If we grant Nietzsche’s position, and the position of the various anti-Darwinian thinkers that influenced him, that a will to survive, or survival more generally, is somehow secondary to a will to power, then we run into a problem: if survival is a secondary drive or goal, then we’d expect survival to be less important than power; but without survival you can’t have power, and thus it makes far more evolutionary sense that a will to survive is more basic than a will to power.

However, I am willing to grant that as soon as organisms began evolving brains along with the rest of their nervous system, the will to power (or something analogous to it) became increasingly important; and by the time humans came on the scene with their highly complex brains, the will to power may have become manifest in our neurological drive to better predict our environment over time.  I wrote about this idea in a previous post where I had explored Nietzsche’s Beyond Good and Evil, where I explained what I saw as a correlation between Nietzsche’s will to power and the idea of the brain as a predictive processor:

“…I’d like to build on it a little and suggest that both expressions of a will to power can be seen as complementary strategies to fulfill one’s desire for maximal autonomy, but with the added caveat that this autonomy serves to fulfill a desire for maximal causal power by harnessing as much control over our experience and understanding of the world as possible.  On the one hand, we can try and change the world in certain ways to fulfill this desire (including through the domination of other wills to power), or we can try and change ourselves and our view of the world (up to and including changing our desires if we find them to be misdirecting us away from our greatest goal).  We may even change our desires such that they are compatible with an external force attempting to dominate us, thus rendering the external domination powerless (or at least less powerful than it was), and then we could conceivably regain a form of power over our experience and understanding of the world.

I’ve argued elsewhere that I think that our view of the world as well as our actions and desires can be properly described as predictions of various causal relations (this is based on my personal view of knowledge combined with a Predictive Processing account of brain function).  Reconciling this train of thought with Nietzsche’s basic idea of a will to power, I think we could…equate maximal autonomy with maximal predictive success (including the predictions pertaining to our desires). Looking at autonomy and a will to power in this way, we can see that one is more likely to make successful predictions about the actions of another if they subjugate the other’s will to power by their own.  And one can also increase the success of their predictions by changing them in the right ways, including increasing their complexity to better match the causal structure of the world, and by changing our desires and actions as well.”
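Here’s a toy sketch of the idea in that quoted passage (my own illustration, not anything from Barrett or Nietzsche): in a predictive-processing framing, an agent can reduce its prediction error either by revising its model of the world (“changing ourselves”) or by acting so that the world comes to match its model (“changing the world”, the domination strategy).

```python
# Toy illustration of two "will to power" strategies under a predictive
# processing framing. All quantities are made-up scalars for illustration.

def prediction_error(belief: float, world: float) -> float:
    """Squared mismatch between the agent's prediction and the world."""
    return (belief - world) ** 2

def revise_belief(belief: float, world: float, lr: float = 0.5) -> float:
    """Strategy 1: change ourselves -- nudge the internal model toward the world."""
    return belief + lr * (world - belief)

def act_on_world(belief: float, world: float, power: float = 0.5) -> float:
    """Strategy 2: change the world -- nudge the world toward the internal model."""
    return world + power * (belief - world)

belief, world = 0.0, 10.0
for step in range(4):
    belief = revise_belief(belief, world)   # change ourselves...
    world = act_on_world(belief, world)     # ...and/or change the world
    print(step, round(belief, 2), round(world, 2),
          round(prediction_error(belief, world), 2))
# Prediction error falls either way; on this framing, both strategies are
# expressions of the same drive toward maximal predictive success.
```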

On the other hand, if we treat the brain’s predictive activity as a kind of mechanistic description of how the brain works rather than a type of drive per se, and if we treat our underlying drives as something that merely falls under this umbrella of description, then we might want to consider whether or not any of the drives that are typically posited in psychological theory (such as power, sex, or a will to live) are actually more basic than any other or typically dominant over the others.  Barrett suggests an alternative, saying that we ought to look at these elements more holistically:

“What if the human psyche cannot be carved up into compartments and one compartment wedged in under another as being more basic?  What if such dichotomizing really overlooks the organic unity of the human psyche, which is such that a single impulse can be just as much an impulse toward love on the one hand as it is toward power on the other?”

I agree with Barrett at least in the sense that these drives seem to be operating together much of the time, and when they aren’t in unison, they often seem to be operating on the same level at least.  Another way to put this could be to say that as a social species, we’ve evolved a number of different modular behavioral strategies that are typically best suited for particular social circumstances, though some circumstances may be such that multiple strategies will work and thus multiple strategies may be used simultaneously without working against each other.

But I also think that our conscious attention plays a role as well where the primary drive(s) used to produce the subsequent behavior may be affected by thinking about certain things or in a certain way, such as how much you love someone, what you think you can gain from them, etc.  And this would mean that the various underlying drives for our behavior are individually selected or prioritized based on one’s current experiences and where their attention is directed at any particular moment.  It may still be the case that some drives are typically dominant over others, such as a will to power, even if certain circumstances can lead to another drive temporarily taking over, however short-lived that may be.

The will to power, even if it’s not primary, would still help to explain some of the enormous advancements we’ve made in our scientific and technological progress, while also explaining (at least to some degree) the apparent disparity between our standard of living and overall physio-psychological health on the one hand, and our immense power to manipulate the environment on the other:

“Technology in the twentieth century has taken such enormous strides beyond that of the nineteenth that it now bulks larger as an instrument of naked power than as an instrument for human well-being.”

We could fully grasp Barrett’s point here by thinking about the opening scene in Stanley Kubrick’s 2001: A Space Odyssey, where “The Dawn of Man” was essentially made manifest with the discovery of tools, specifically weapons.  Once this seemingly insignificant discovery was made, it completely changed the game for our species.  While it gave us a new means of defending ourselves and an ability to hunt other animals, thus providing us with the requisite surplus of protein and calories needed for brain development and enhanced intelligence, it also catalyzed an arms race.  And with the advent of human civilization, our historical trajectory around the globe became largely dominated by our differential means of destruction; whoever had the biggest stick, the largest army, the best projectiles, and ultimately the most concentrated form of energy, would likely win the battle, forever changing the fate of the parties involved.

Indeed, war has been such a core part of human history that it’s no wonder Nietzsche stumbled upon such an idea; and even if he hadn’t, someone else likely would have, even if without the same level of creativity and intellectual rigor.  If our history has been so colored by war, which is a physical instantiation of a will to power in order to dominate another group, then we should expect many to see it as a fundamental part of who we are.  Personally, I don’t think we’re fundamentally war-driven creatures; rather, war results from our ability to engage in it combined with a perceived lack of, and desire for, resources, freedom, and stability in our lives.  But many of these goods, freedom in particular, imply a particular level of power over oneself and one’s environment, so even a fight for freedom is, in one way or another, a kind of will to power as well.

And what are we to make of our quest for power in terms of the big picture?  Theoretically, there is an upper limit to the amount of power any individual or group can possibly attain, and if one gets to a point where they are seeking power for power’s sake, and using their newly acquired power to harness even more power, then what will happen when we eventually hit that ceiling?  If our values become centered around power, and we lose any anchor we once had to provide meaning for our lives, then it seems we would be on a direct path toward nihilism:

“For Nietzsche, the problem of nihilism arose out of the discovery that “God is dead.” “God” here means the historical God of the Christian faith.  But in a wider philosophical sense it means also the whole realm of supersensible reality—Platonic Ideas, the Absolute, or what not—that philosophy has traditionally posited beyond the sensible realm, and in which it has located man’s highest values.  Now that this other, higher, eternal realm is gone, Nietzsche declared, man’s highest values lose their value…The only value Nietzsche can set up to take the place of these highest values that have lost their value for contemporary man is: Power.”

As Barrett explains, Nietzsche seemed to think that power was the only thing that could replace the eternal realm that we valued so much; but of course this does nothing to alleviate the problem of nihilism in the long term since the infinite void still awaits those at the end of the finite road to maximal power.

“If this moment in Western history is but the fateful outcome of the fundamental ways of thought that lie at the very basis of our civilization—and particularly of that way of thought that sunders man from nature, sees nature as a realm of objects to be mastered and conquered, and can therefore end only with the exaltation of the will to power—then we have to find out how this one-sided and ultimately nihilistic emphasis upon the power over things may be corrected.”

I think the answer to this problem lies in a combination of strategies, but with the overarching goal of maximizing life fulfillment.  We need to reprogram our general attitude toward power such that it is truly instrumental rather than perceived as intrinsically valuable; and this means that our basic goal becomes accumulating power such that we can maximize our personal satisfaction and life fulfillment, and nothing more.  Among other things, the power we acquire should be centered around a power over ourselves and our own psychology; finding ways of living and thinking which are likely to include the fostering of a more respectful relationship with the nature that created us in the first place.

And now that we’re knee deep in the Information Age, we will be able to harness the potential of genetic engineering, artificial intelligence, and artificial reality; and within these avenues of research we will be able to change our own biology such that we can quickly adapt our psychology to the ever-changing cultural environment of the modern world, and we’ll be able to free up our time to create any world we want.  We just need to keep the ultimate goal in mind, fulfillment and contentment, and direct our increasing power towards this goal in particular instead of merely chipping away at it in the background as some secondary priority.  If we don’t prioritize it, then we may simply perpetuate and amplify the meaningless sources of distraction in our lives, eventually getting lost in the chaos, and forgetting about our ultimate potential and the imperatives needed to realize it.

I’ll be posting a link here to part 9 of this post-series, exploring Heidegger’s philosophy, once it has been completed.

CGI, Movies and Truth…

After watching Rogue One: A Star Wars Story, which I liked, though not nearly as much as the original trilogy (Episodes IV, V, and VI), I got to thinking more about something I hadn’t thought about since the most recent presidential election.  As I watched Grand Moff Tarkin and Princess Leia, both characters made possible in large part thanks to CGI (as well as the help of actors Guy Henry and Ingvild Deila), I realized that, although this is still a long way off, it is inevitable (barring a nuclear world war or some other catastrophe that kills us all or sets us back to a pre-industrialized age) that the pace of this technology will eventually lead to CGI products that are completely indistinguishable from reality.

This means that eventually, the “fake news” issue that many have been making a lot of noise about as of late will one day take a new and ugly turn for the worse.  Not only is video and graphics technology advancing rapidly enough to exacerbate this problem, but similar concerns are also arising from voice-editing software.  By gathering just several seconds of sample audio from a person of interest, various forms of software are getting better and better at synthesizing their speech in order to mimic them — putting whatever words into “their” mouths that one so desires.

The irony here is that despite the fact that we are going to continue becoming more and more globally interconnected and technologically advanced, and gaining ever more global knowledge, it seems that we will eventually reach a point where each individual becomes less and less able to know what is true and what isn’t across all the places they are electronically connected to.  One reason for this is that, as per the opening reference to Rogue One, it will become increasingly difficult to judge the veracity of videos that go viral on the internet and/or through news outlets.  We can imagine a video (or a series of videos) released on the news and throughout the internet containing shocking events involving real world leaders or other famous people and places, events that could possibly start a civil or world war, alter one’s vote, or otherwise — but with the caveat that these events were entirely manufactured by some Machiavellian warmonger or power-seeking elite.

Pragmatically speaking, we must still live our lives trusting what we see in proportion to the evidence we have, thus believing ordinary claims with a higher degree of confidence than extraordinary ones.  We will still need to hold to the general rule that extraordinary claims require extraordinary evidence in order to meet their burden of proof.  But it will become more difficult to trust certain forms of evidence (including in a court of law), so we’ll have to take that into consideration, such that actions with more catastrophic potential consequences (should one’s assumptions or information turn out to be based on false evidence) require a higher burden of proof — especially once CGI is able to pass some kind of graphical Turing Test.

This is by no means an endorsement of conspiracy theories generally, nor of any other anti-intellectual or dogmatic nonsense.  We don’t want people to start doubting everything they see, nor to start doubting only the things they don’t WANT to see (which would be a proverbial buffet for our cognitive biases and for the conspiracy theorists who exploit these epistemological flaws regularly); but we still need to take this dynamic technological factor into account in order to maintain a world view based on proper Bayesian reasoning.
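To make the Bayesian point concrete, here’s a minimal worked example (the numbers are my own, purely illustrative): the lower the prior probability of a claim, the stronger the evidence must be before belief is warranted, and cheap video forgery weakens video as evidence by raising the probability of seeing convincing footage even when the claim is false.

```python
# A minimal Bayes' theorem sketch: P(H|E) = P(E|H)P(H) / P(E).
# All probabilities below are made-up, illustrative assumptions.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Probability of hypothesis H given evidence E, via Bayes' theorem."""
    numer = p_e_given_h * prior
    denom = numer + p_e_given_not_h * (1.0 - prior)
    return numer / denom

# An ordinary claim (prior 0.5) backed by decent video evidence:
print(posterior(0.5, 0.9, 0.1))    # ~0.90 -- belief is reasonable

# An extraordinary claim (prior one in a million) backed by the same evidence:
print(posterior(1e-6, 0.9, 0.1))   # ~0.000009 -- still almost certainly false

# Once convincing fakes are cheap, P(E|~H) rises, and the very same footage
# proves much less, even for an ordinary claim:
print(posterior(0.5, 0.9, 0.5))    # ~0.64 -- the video now carries far less weight
```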

On the brighter side of things, we are going to get to enjoy much of what the new CGI capabilities will bring us, because movies and all visual entertainment are going to be revolutionized in many ways worth celebrating, including our use of virtual reality generally (in many forms that we do, and will continue to, consciously and rationally desire).  We just need to pay attention and exercise some careful moral deliberation as we develop these technologies.  Our brains simply didn’t evolve to easily avoid being duped by artificial realities like the ones we’re developing (we already get duped far too often within our actual reality), so we need to engineer our path forward in a way that better safeguards us from our own cognitive biases, so we can maximize our well-being once this genie is out of the bottle.

Co-evolution of Humans & Artificial Intelligence

In my last post, I wrote a little bit about the concept of personal identity in terms of what some philosophers have emphasized and my take on it.  I wrote that post in response to an interesting blog post written by James DiGiovanna over at A Philosopher’s Take.  James has written another post related to the possible consequences of integrating artificial intelligence into our societal framework, but rather than discussing personal identity as it relates to artificial intelligence, he discussed how the advancements made in machine learning and so forth are leading to the future prospects of effective companion AI, or what he referred to as programmable friends.  The main point he raised in that post was the fact that programmable friends would likely have a very different relationship dynamic with us compared with our traditional (human) friends.  James also spoke about companion AI  in terms of their also being laborers (as well as being friends) but for the purposes of this post I won’t discuss these laborer aspects of future companion AI (even if the labor aspect is what drives us to make companion AI in the first place).  I’ll be limiting my comments here to the friendship or social dynamic aspects only.  So what aspects about programmable AI should we start thinking about?

Well for one, we don’t currently have the ability to simply reprogram a friend to be exactly as we want them to be, in order to avoid conflicts entirely, to share every interest we have, etc., but rather there is a bit of a give-and-take relationship dynamic that we’re used to dealing with.  We learn new ways of behaving and looking at the world and even new ways of looking at ourselves when we have friendships with people that differ from us in certain ways.  Much of the expansion and beneficial evolution of our perspectives are the result of certain conflicts that arise between ourselves and our friends, where different viewpoints can clash against one another, often forcing a person to reevaluate their own position based on the value they place on the viewpoints of their friends.  If we could simply reprogram our friends, as in the case with some future AI companions, what would this do to our moral, psychological and intellectual growth?  There would be some positive effects I’m sure (from having less conflict in some cases and thus an increase in short term happiness), but we’d definitely be missing out on a host of interpersonal benefits that we gain from having the types of friendships that we’re used to having (and thus we’d likely have less overall happiness as a result).

We can see where evolution ties in to all this, whereby we have evolved as a social species to interact with others that are more or less like us, and so when we envision these possible future AI friendships, it should become obvious why certain problems would be inevitable largely because of the incompatibility with our evolved social dynamic.  To be sure, some of these problems could be mitigated by accounting for them in the initial design of the companion AI.  In general, this could be done by making the AI more like humans in the first place and this could be something advertised as some kind of beneficial AI “social software package” so people looking to get companion AI would be inclined to get this version even if they had the choice to go for the entirely reprogrammable version.

Some features of a “social software package” could include a limit on the number of ways the AI could be reprogrammed, such that only very serious conflicts could be avoided through reprogramming, but without the ability to avoid all conflicts.  The AI could also weight certain opinions, just as we do, and be more assertive with regard to certain propositions and so forth.  Once the AI has learned its human counterpart’s personality, values, and opinions, it could also be programmed with the ability to intentionally challenge that human by offering different points of view and by using the Socratic method (at least from time to time).  If people realized that they could gain wisdom, knowledge, tolerance, and various social aptitudes from their companion AI, I would think that would be a marked selling point.  A sketch of what such a package’s configuration might look like follows below.
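Purely as an illustration of the ideas above, here’s a hypothetical sketch of what a companion-AI “social software package” might expose as configuration; every name and parameter here is my own invention, not a real product or API.

```python
# A purely hypothetical sketch of a companion-AI "social software package"
# configuration. All names, fields, and thresholds are invented for
# illustration; nothing here reflects any real system.
from dataclasses import dataclass, field

@dataclass
class SocialPackageConfig:
    # Only conflicts above this severity (0-1) can be removed by
    # reprogramming; everyday disagreements stay in place.
    reprogram_severity_threshold: float = 0.8
    # How firmly the AI holds and asserts its own opinions
    # (0 = pushover, 1 = never yields), mirroring the weighted
    # opinions humans carry.
    assertiveness: float = 0.6
    # Probability per conversation that the AI deliberately challenges
    # its human counterpart, e.g. via Socratic questioning.
    challenge_rate: float = 0.15
    # Topics the owner can never reprogram the AI to agree on blindly.
    protected_topics: list[str] = field(default_factory=lambda: ["ethics", "health"])

def can_reprogram(config: SocialPackageConfig, conflict_severity: float, topic: str) -> bool:
    """Allow reprogramming only for serious conflicts outside protected topics."""
    return (conflict_severity >= config.reprogram_severity_threshold
            and topic not in config.protected_topics)
```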

Another factor that I think will likely play a role in mitigating the possible social-dynamic clash between (programmable) companion AI and humans is the fact that humans are also likely to become more and more integrated with AI technology generally.  That is, as humans continue to make advancements in AI technology, we are also likely to integrate a lot of that technology into ourselves, making humans more or less into cyborgs, a.k.a. cybernetic organisms.  If we look at the path we’re already on, with all the smartphones, apps, and other gadgets and computer systems that have started to become extensions of ourselves, we can see that the next obvious step (which I’ve mentioned elsewhere, here and here) is to remove the external peripherals so that they are directly accessible via our consciousness, with no need of interfacing with external hardware.  If we can access “the cloud” with our minds (say, via Bluetooth or the like), then the apps and all the fancy features can become a part of our minds, adding to the ways we’re able to think, providing an internet’s worth of knowledge at our cognitive disposal, and so on.  I could see this technology eventually allowing us to change our senses and perceptions, including the ability to add virtual objects amalgamated with the rest of the external reality we perceive (such as virtual friends that we see and interact with, who aren’t physically present outside of our minds even though they appear to be).

So if we start to integrate these kinds of technologies into ourselves as we are also creating companion AI, then we may also end up with the ability to reprogram ourselves alongside those programmable companion AI.  In effect, our own qualitatively human social dynamic may start to change markedly and become more and more compatible with that of the future AI.  The way I think this will most likely play out is that we will try to make AI more like us as we try to make us more like AI, where we co-evolve with one another, trying to share advantages with one another and eventually becoming indistinguishable from one another.  Along this journey however we will also become very different from the way we are now, and after enough time passes, we’ll likely change so much that we’d be unrecognizable to people living today.  My hope is that as we use AI to also improve our intelligence and increase our knowledge of the world generally, we will also continue to improve on our knowledge of what makes us happiest (as social creatures or otherwise) and thus how to live the best and most morally fruitful lives that we can.  This will include improving our knowledge of social dynamics and the ways that we can maximize all the interpersonal benefits therein.  Artificial intelligence may help us to accomplish this however paradoxical or counter-intuitive that may seem to us now.

The Illusion of Persistent Identity & the Role of Information in Identity

After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.

Personal Identity Concepts & Criteria

I think when most people talk about personal identity, they are referring to how they see themselves and others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits.  Basically, they see it as what makes a person unique and in some way distinguishable from another person.  Even this rudimentary concept can be broken down into at least two parts, namely, how we see ourselves (self-ascribed identity) and how others see us (which we could call inferred identity), since the two are likely to differ.  While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person.  Roughly speaking, this amounts to whatever conditions or properties are both necessary and sufficient for a person at one point in time and a person at another point in time to be considered the same person, with a temporal continuity between those points in time.

Usually the criterion put forward for this personal identity is some form of spatiotemporal and/or psychological continuity.  I certainly wouldn’t be the first person to point out that the question of which criterion is correct has already framed the debate with two assumptions: that a personal (numerical) identity exists in the first place, and that, if it does, its criterion is something determinable.  While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common with all the persons we’ve interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity, along with a determinable criterion for it.  At best, I think that if one finds any kind of numerical identity for persons that persists over time, it is not going to be compatible with our intuitions, nor is it going to be applicable in any pragmatic way.

Speaking of pragmatism, I am sympathetic to Parfit’s view that regardless of what one finds the criteria for numerical personal identity to be (if it exists), the only thing that really matters to us is psychological continuity anyway.  So despite the fact that Locke’s view — that psychological continuity (via memory) is the criterion for personal identity — was shown to rest on circular and illogical arguments (per Butler, Reid, and others), I applaud his basic idea.  Locke seemed to be on the right track: psychological continuity (in some sense involving memory and consciousness) really is the essence of what we care about when defining persons, even if it can’t be used as a valid criterion in the way he proposed.

(Non) Persistence & Pragmatic Use of a Personal Identity Concept

I think that the search for, and long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity.  The pragmatic considerations behind the common and intuitive conceptions of personal identity have no doubt steered the debate over any possible criteria for defining it, and so we can still value those considerations even if a numerical personal identity doesn’t really exist (that is, even if it is nothing more than a pseudo-identity), and even if a diachronic numerical personal identity does exist but isn’t useful in any way.

If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment, both physically and psychologically, then I tend to think that the self (which many people take to be the “agent” behind our personal identity) is an illusion of some sort.  I side more with Hume on this point (or at least James Giles’ fair interpretation of Hume), in that my views seem to be some version of a no-self or eliminativist theory of personal identity.  As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination.  The illusion results from our perceptions flowing from one to the next, with a barrage of changes to this “self” over time that we simply don’t notice — at least not without critical reflection on our past experiences of this ever-changing “self”.  The psychological continuity that Locke described seems to be the main driving force behind this illusory self, since there is an overlap in the memories of the succession of persons.

I think one could say that if there is any numerical identity associated with the term “I” or “me”, it exists only for a short moment of time in one specific spatio-temporal slice; as the next perceivable moment elapses, what used to be “I” becomes someone else, even if the new person that comes into being is still referred to as “I” or “me” by a person possessing roughly the same configuration of matter in its body and brain as the previous person.  Since neighboring identities have an overlap in accessible memory (including autobiographical memories, memories of past experiences generally, and memories pertaining to the evolving desires that motivate behavior), we shouldn’t expect this succession of persons to be noticed or perceived by the illusory self, because each identity has access to a set of memories sufficiently similar to the set accessible to the previous or successive identity.  This sufficient degree of similarity in those identities’ memories allows for a seemingly persistent autobiographical “self” with goals.

As for the pragmatic reasons for considering all of these “I”s and “me”s to be the same person, with some singular identity over time: there is a causal dependency between each member of this “chain of spatio-temporal identities”, and so treating that chain of interconnected identities as one being is extremely intuitive and incredibly useful for accomplishing goals (which is likely why evolution would favor brains that can intuit this concept of a persistent “self” and the near uni-directional behavior that results from it).  There is a continuity of memory and behaviors (even though both change over time, in terms of both the number of memories and their accuracy), and this continuity allows a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences.  We behave as if we are a single person moving through time and space (and as if we are surrounded by other temporally extended single persons behaving in similar ways), and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent.  Quite simply, referencing those different identities under one label, physically attached to or instantiated by something localized, allows that pragmatic pseudo-identity to persist over time so that various goals (whether personal or interpersonal/societal) can be accomplished.

“The Persons Problem” and a “Speciation” Analogy

I came up with an analogy that I think fits this concept well.  One could analogize this succession of identities that get clumped into one bulk pragmatic pseudo-identity with the evolutionary concept of speciation.  A sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species, due to the high degree of similarity between neighboring members.  The difficulty lies in the fact that, after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly, to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations eventually lead to what we call a new species.  It’s difficult to define exactly when a speciation event happens (hence the species problem), and I think we have a similar problem with personal identity.  Where does it begin and end?  If personal identity changes over the course of a lifetime, when does one person become another?  I can think of “me” as the same “me” that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that “I” am a completely different person now when compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.).  There seems to have been an identity “speciation” event of some sort, even though it is hard to define exactly when it occurred.
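One crude way to model both the overlap between neighboring identities and the fuzziness of any identity “speciation” boundary is to treat each momentary identity as a set of accessible memories and compare the sets with something like the Jaccard index.  This is only a toy model of my analogy; the memory sets and the turnover rule are invented purely for illustration:

```python
# Toy model: each momentary "identity" is a set of accessible memories,
# and each moment drops one old memory while adding one new one.
def jaccard(a: frozenset, b: frozenset) -> float:
    """Set-overlap similarity: 1.0 means identical, 0.0 means disjoint."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

snapshots = []
memories = {f"memory_{i}" for i in range(100)}
for t in range(200):
    snapshots.append(frozenset(memories))
    memories = (memories - {f"memory_{t}"}) | {f"memory_{100 + t}"}  # gradual turnover

print(jaccard(snapshots[0], snapshots[1]))    # ~0.98: neighbors nearly identical
print(jaccard(snapshots[0], snapshots[50]))   # ~0.33: noticeably different
print(jaccard(snapshots[0], snapshots[150]))  # 0.0: no shared memories at all
```

Every adjacent pair passes any reasonable similarity threshold, yet the first and last snapshots share nothing, and no single step marks the “speciation” event; that is exactly the fuzzy-boundary problem described above.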

Biologists have tried to solve their species problem by coming up with various criteria, at the very least for taxonomic purposes, but what they’ve wound up with is several different concepts of a species that are each effective for different purposes (e.g., the biological species concept, the morphological species concept, the phylogenetic species concept, etc.), with no single “correct” answer, since each is more or less useful depending on the situation.  Similarly, some philosophers have had a persons problem that they’ve been trying to solve, and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).

The Role of Information in a Personal Identity Concept

Anyway, rather than attempt to solve the numerical personal identity problem, I think philosophers should focus more on the concept of information and how it can be used to arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically).  I think this is especially true once we consider some of the concerns James DiGiovanna brought up regarding the integration of future AI into our society.

If all of the beliefs, behaviors, and causal driving forces in a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework, basing them on the information content of a person’s identity rather than on numerical identity or on our intuitions about persisting persons (intuitions that will be challenged by several kinds of foreseeable future AI scenarios).

To illustrate this point, I’ll address one of James DiGiovanna’s conundrums.  James asks us:

To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?

In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI — other than to appease humans who may be angry that they couldn’t avoid punishment in this way themselves, having a much slower and less effective means of reprogramming their own minds.  I would say that the reprogrammed AI is guilty of the crime only if its reprogrammed memory still includes information pertaining to having performed those past criminal behaviors.  However, if those “criminal memories” are gone after the reprogramming, then I’d say that the AI is not guilty of the crime, because the information constituting its identity doesn’t match that of the criminal AI.  It would have no recollection of having committed the crime, and so “it” would not have committed the crime, since that “it” was lost in the reprogramming process due to the dramatic change in information that took place.

In the latter scenario, if the information constituting the identity of the destroyed AI was re-instantiated elsewhere, then I would say that it is in fact guilty of the crime — though not numerically guilty, but rather qualitatively guilty (to differentiate between the numerical and qualitative personal identity concepts embedded in the concept of guilt).  If twelve duplicates of this information were instantiated in new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime.  What actions should be taken based on qualitative guilt?  I think it means that the AI should be punished, or more specifically that the judicial system should perform whatever reconditioning is required to modify its behavior as if it had committed the crime (especially if the AI believes/remembers that it has committed the crime), for the better of society.  If this can be accomplished through reprogramming, then that would be the most rational thing to do, without any need for traditional forms of punishment.
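As a toy illustration of this distinction, here is a sketch resting on the (admittedly strong) assumption that an identity’s constituting information can be serialized; the IdentityInfo structure and its fields are hypothetical inventions for the example:

```python
# Toy sketch of "qualitative guilt": guilt attaches to the information
# constituting an identity (including memories of the act), not to any
# particular hardware instance.  All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class IdentityInfo:
    memories: frozenset      # memories constituting the identity
    dispositions: frozenset  # behavioral driving forces

def qualitatively_guilty(agent: IdentityInfo, crime_memory: str,
                         criminal_dispositions: frozenset) -> bool:
    """An agent is qualitatively guilty if its identity-constituting
    information still contains the crime and the dispositions that
    produced it; a fully reprogrammed agent no longer matches."""
    return (crime_memory in agent.memories
            and criminal_dispositions <= agent.dispositions)

criminal = IdentityInfo(frozenset({"committed_theft", "childhood"}),
                        frozenset({"impulsive", "acquisitive"}))
reformed = IdentityInfo(frozenset({"childhood"}),        # crime memory erased
                        frozenset({"cooperative"}))      # dispositions rewritten
duplicate = IdentityInfo(criminal.memories, criminal.dispositions)

print(qualitatively_guilty(criminal, "committed_theft", frozenset({"impulsive"})))   # True
print(qualitatively_guilty(reformed, "committed_theft", frozenset({"impulsive"})))   # False
print(qualitatively_guilty(duplicate, "committed_theft", frozenset({"impulsive"})))  # True: and so for all twelve copies
```

On this picture, the duplicate (and each of the twelve) matches the criminal’s information and is qualitatively guilty, while the fully reprogrammed agent no longer does.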

We can analogize this with another thought experiment involving human beings.  If we imagine a human who has had their memories changed so that they believe they are Charles Manson and have all of Charles Manson’s memories and intentions, then that person should be treated as if they are Charles Manson and thus incarcerated/punished accordingly, to rehabilitate them or to protect the other members of society.  This assumes, of course, that we had reliable access to that kind of mind-reading knowledge.  If we did, the information constituting the identity of that person would be what matters most — not what the actual previous actions of the person were — because the “previous person” was someone else, due to that gross change in information.

Neurological Configuration & the Prospects of an Innate Ontology

After a brief discussion on another blog pertaining to whether or not humans possess some kind of an innate ontology or other forms of what I would call innate knowledge, I decided to expand on my reply to that blog post.

While I agree that at least most of our knowledge is acquired through learning, specifically through the acquisition and use of memorized patterns of perception (as this is generally how I would define knowledge), I also believe that there are at least some innate forms of knowledge, including some that likely result from certain aspects of our brain’s innate neurological configuration and implementation strategy.  This proposed form of innate knowledge would seem to provide a foundation for the bulk of our knowledge that is later acquired through learning.  This foundation would perhaps be best described as a fundamental scaffold of our ontology, an innate aspect on which our continually developing ontology is based.

My basic contention is that the hierarchical configuration of neuronal connections in our brains is highly analogous to the hierarchical relationships we utilize to produce our conceptualization of reality.  In order to make sense of the world, our brains seem to fracture reality into many discrete elements, properties, concepts, propositions, etc., which are all connected to each other through various causal relationships, or what some might call semantic hierarchies.  So it seems plausible, if not likely, that the brain accomplishes a fundamental aspect of our ontology by utilizing an innate hardware schema that involves neurological branching.

As the evidence in the neurosciences suggests, our acquisition of knowledge through learning what those discrete elements, properties, concepts, propositions, etc., are appears to involve synaptogenesis followed by the pruning, modification, and reshaping of a hierarchical neurological configuration, ending up with a more specific hierarchical arrangement, one that more accurately correlates with the reality we are interacting with and learning about through our sensory organs.  Since the specific arrangement that eventually forms couldn’t have been entirely coded for in our DNA (due to its extremely high level of complexity and information density), it ultimately had to be fine-tuned to this level of complexity after its initial pre-sensory configuration developed.  Nevertheless, the DNA sequences that were naturally selected to produce the highly capable brains of human beings (as opposed to the DNA that guides the formation of a much less intelligent animal’s brain) clearly encode more effective hardware implementation strategies than those of our evolutionary ancestors.  These naturally selected neurological strategies seem to control what particular types of causal patterns the brain is theoretically capable of recognizing (including some upper limit of complexity), and they also seem to control how the brain stores and organizes these patterns for later use.  So overall, my contention is that these naturally selected strategies are themselves a type of knowledge, because they seem to provide the very foundation for our initial ontology.
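As a loose illustration (emphatically not a model of actual neural development), one can picture the innate scaffold as a coarse tree that learning then extends and prunes; all the concept names below are invented for the example:

```python
# Illustrative sketch: an innate coarse hierarchy that learning refines
# by adding and pruning branches, loosely analogous to synaptogenesis
# followed by pruning.
innate_ontology = {                  # crude, hypothetical pre-sensory scaffold
    "thing": {"animate": {}, "inanimate": {}},
}

def add_concept(tree: dict, path: list, concept: str) -> None:
    """Grow a new branch under the node named by `path`."""
    node = tree
    for key in path:
        node = node[key]
    node.setdefault(concept, {})

def prune_concept(tree: dict, path: list, concept: str) -> None:
    """Remove a branch that experience fails to reinforce."""
    node = tree
    for key in path:
        node = node[key]
    node.pop(concept, None)

# "Learning": refine the innate scaffold with experience.
add_concept(innate_ontology, ["thing", "animate"], "dog")
add_concept(innate_ontology, ["thing", "animate"], "unicorn")
prune_concept(innate_ontology, ["thing", "animate"], "unicorn")  # never observed
print(innate_ontology)
# {'thing': {'animate': {'dog': {}}, 'inanimate': {}}}
```

The point of the sketch is only that the specific learned arrangement (the “dog” branch) presupposes a prior branching structure to grow from.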

Based on my understanding, after many of the initial activity-independent mechanisms of neural development have occurred in some region of the developing brain (such as cellular differentiation, cellular migration, axon guidance, and some amount of synapse formation), the activity-dependent mechanisms of neuronal development (such as neural activity caused by the sensory organs in the process of learning) finally begin to modify those synapses and axons into a new hierarchical arrangement.  It is especially worth noting that even though much of the synapse formation during neural development is mediated by activity-dependent mechanisms, such as the aforementioned neural activity produced by the sensory organs during perceptual development and learning, there is also spontaneous neural activity forming many of these synapses before any sensory input is present, thus contributing to the innate neurological configuration (i.e., that which is formed before any sensation or learning has occurred).

Thus, the hierarchy subsequently formed through neural/sensory stimulation via learning appears to begin from a parent hierarchical starting point, based on neural developmental processes coded for in our DNA as well as synaptogenic mechanisms involving spontaneous pre-sensory neural activity.  So our brain’s innate (i.e., pre-sensory) configuration likely contributes to our making sense of the world by providing a starting point that reflects the fundamental hierarchical nature of reality, upon which all subsequent knowledge is built.  In other words, if our mature conceptualization of reality involves a very specific type of hierarchy, then an innate/pre-sensory hierarchical schema of neurons would be a plausible, if not expected, physical foundation for it (see Edelman’s Theory of Neuronal Group Selection within this link for more empirical support of these points).

Additionally, if the brain’s wiring has evolved to detect dimensions of difference in the world (unique sensory/perceptual patterns, such as quantity, colors, sounds, tastes, smells, etc.), then it would make sense that the brain can give any particular pattern an identity by having a unique schema of hardware, or a unique use of said hardware, to perceive that pattern and distinguish it from others.  After the brain does this, the patterns are then arguably organized by the logical absolutes.  For example, if the hardware scheme or process used to detect a particular pattern “A” exists, and all other patterns we perceive have or are given their own unique hardware-based identity (i.e., “not-A”, a.k.a. B, C, D, etc.), then the brain is effectively wired such that pattern “A” = pattern “A” (law of identity), any other pattern, which we can call “not-A”, does not equal pattern “A” (law of non-contradiction), and any pattern must either be “A” or some other pattern, even if brand new, which we can also call “not-A” (law of the excluded middle).  So by giving a pattern a physical identity (i.e., a specific hardware configuration in our brain that, when activated, represents a detection of one specific pattern), our brains effectively produce the logical absolutes by virtue of the innate wiring strategy used to distinguish one pattern from another.  And although it may be true that no patterns are stored in the brain until after learning begins (through sensory experience), the fact that the DNA-mediated brain wiring strategy inherently involves eventually giving a particular learned pattern a unique neurological hardware identity, distinguishing it from other stored patterns, suggests that the logical absolutes themselves are an innate and implicit property of how the brain stores recognized patterns.
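Here is a small sketch of that wiring idea in code; the recognize function is a deliberately simplified stand-in for dedicating unique “hardware” to each distinct pattern, not a neural model:

```python
# Sketch: assign each distinct pattern its own unique identity, loosely
# analogous to dedicating unique neural hardware to it on first encounter.
pattern_ids: dict[str, int] = {}

def recognize(pattern: str) -> int:
    """Return the unique identity for a pattern, creating one if the
    pattern is brand new (so every pattern is either "A" or "not-A")."""
    if pattern not in pattern_ids:
        pattern_ids[pattern] = len(pattern_ids)
    return pattern_ids[pattern]

a1, a2, b = recognize("A"), recognize("A"), recognize("B")
assert a1 == a2   # law of identity: pattern "A" = pattern "A"
assert a1 != b    # law of non-contradiction: "not-A" does not equal "A"
c = recognize("brand new pattern")
assert c == a1 or c != a1   # law of the excluded middle: every pattern
                            # is either "A" or "not-A", never neither
```

The assignment scheme guarantees the three absolutes by construction: identical patterns map to one identity, distinct patterns map to distinct identities, and even a never-before-seen pattern immediately falls on one side or the other.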

In short, if any and all forms of reasoning, as well as the ability to accumulate knowledge, simply require logic and the recognition of causal patterns, and if the brain’s innate neurological configuration schema provides the starting foundation for both, then it would seem reasonable to conclude that the brain has at least some types of innate knowledge.

Artificial Intelligence: A New Perspective on a Perceived Threat

There is a growing fear in many people of the future capabilities of artificial intelligence (AI), especially as the intelligence of these computing systems begins to approach that of human beings.  Since it is likely that AI will eventually surpass the intelligence of humans, some wonder if these advancements will be the beginning of the end of us.  Stephen Hawking, the eminent British physicist, was recently quoted by the BBC as saying “The development of full artificial intelligence could spell the end of the human race.”  BBC technology correspondent Rory Cellan-Jones said in a recent article “Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.”  Hawking then said “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Hawking isn’t alone in this fear, and clearly the fear isn’t ill-founded.  It doesn’t take a rocket scientist to realize that human intelligence has allowed us to overcome just about every environmental barrier we’ve come across, driving us to the top of the food chain.  We’ve all seen the benefits of our high intelligence as a species, but we’ve also seen what can happen when that intelligence is imperfect, operates on incomplete or fallacious information, and lacks an adequate capability to accurately determine the long-term consequences of our actions.  Because we have such a remarkable ability to manipulate our environment, that manipulation can be extremely beneficial or extremely harmful, as we’ve seen with the numerous species we’ve threatened on this planet (some having gone extinct).  We’ve even threatened many fellow human beings in the process, whether intentionally or not.  Our intelligence, combined with some elements of short-sightedness, selfishness, and aggression, has led to some pretty abhorrent products throughout human history — anything from the mass enslavement of others spanning back thousands of years to modern forms of extermination weaponry (e.g., bio-warfare and nuclear bombs).  If AI reaches and eventually surpasses our level of intelligence, it is reasonable to consider the possibility that we may find ourselves on a lower rung of the food chain (so to speak), potentially becoming enslaved or exterminated by this advanced intelligence.

AI: Friend or Foe?

So what exactly prompted Stephen Hawking to make these claims?  As the BBC article mentions, “His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI…The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.  Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.”

Reading this article suggests another possibility or perspective that I don’t think many people are considering with regard to AI technology.  What if AI simply replaces us gradually, by actually becoming the new “us”?  That is, as we progress further with Cyborg (i.e., cybernetic organism) technologies, using advancements similar to Stephen Hawking’s communication upgrade, we are ultimately becoming amalgams of biological and synthetic machines anyway.  Even the technology we currently operate through an external peripheral interface (like smartphones and all other computers) will likely become integrated into our bodies.  Google Glass, voice recognition, and other technologies like those used by Hawking are certainly taking us in that direction.  It’s not difficult to imagine one day being able to think about a particular question or keyword, and having an integrated Bluetooth implant in our brain recognize the mental/physiological command cue and successfully retrieve the desired information wirelessly from an online cloud or internet database of some form.  Going further still, we will likely one day be able to take sensory information that enters the neuronal network of our brain and, once again, send it wirelessly to supercomputers stored elsewhere that are able to process the information with billions of pattern recognition modules.  The 300 million or so pattern recognition modules currently in our brain’s neocortex would be dwarfed by this new peripheral-free, wirelessly accessible technology.

For those who aren’t very familiar with the function of the brain’s neocortex: we use its 300 million or so pattern recognition modules to notice patterns in the environment around us (and meta-patterns of neuronal activity within the brain), allowing us to recognize and memorize sensory data, and to think.  Ultimately, we use this pattern recognition to accomplish goals, solve problems, and gain knowledge from past experience.  In short, these pattern recognition modules are the primary physiological means of our intelligence.  Thus, increasing the number of pattern recognition modules (as well as the number of hierarchies between different sets of them) will only increase our intelligence.  Regardless of whether we integrate computer chips in our brains to do some or all of this processing locally, or use a wireless means of communicating with an external supercomputer farm or otherwise, we will likely continue to integrate our biology with synthetic analogs to increase our capabilities.

When we realize that higher intelligence allows us to better predict the consequences of our actions, we can see that our increasing integration with AI will likely have incredible survival benefits.  This integration will also catalyze further technologies that could never have been accomplished with biological brains alone, because we simply aren’t naturally intelligent enough to think beyond a certain level of complexity.  As Hawking said regarding AI that could surpass our intelligence, “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”  Yes, but if that AI becomes integrated into us, then really it is humans who are finding a way to circumvent slow biological evolution with a non-biological substrate that supersedes it.

At this point it is worth mentioning something else I’ve written about previously: the advancements being made in genetic engineering and how they are taking us into our last and grandest evolutionary leap, a “conscious evolution”, whereby we circumvent our own slow biological evolution through intentionally engineered design.  As we gain more knowledge in the field of genetic engineering (combined with the increasing simulation and computing power afforded by AI), we will likely be able to catalyze our own biological evolution such that we evolve quickly as we increase our Cyborg integration with AI.  So we will likely see genetic engineering capabilities developing in close parallel with AI advancements, with each field substantially contributing to the other and ultimately leading to our transhumanism.

Final Thoughts

It seems clear that advancements in AI are providing us with more tools to accomplish ever more complex goals as a species.  As we continue to integrate AI into ourselves, what we now call “human” is simply going to change as we change.  This would happen regardless, as human biological evolution continues its course into another speciation event, similar to the one that led to humans in the first place.  In fact, if we wanted to maintain the way we are right now as a species, biologically speaking, it would actually require us to use genetic engineering to do so, because genetic differentiation mechanisms (e.g., imperfect DNA replication, mutations, etc.) are inherent in our biology.  Thus, those who argue against certain technologies out of a desire to maintain humanity and human attributes must realize that the very technologies they despise are in fact their only hope for doing so.  More importantly, the goal of maintaining what humans currently are goes against our natural evolution, and we should embrace change, even if we embrace it with caution.

If AI continues to become further integrated into ourselves, forming a Cyborg amalgam of some kind, we may one day choose to entirely eliminate the biological substrate of that amalgam, if it is both possible and advantageous to do so.  Even if we maintain some of our biology and merely hybridize with AI, Hawking was right to point out that “The development of full artificial intelligence could spell the end of the human race.”  However, rather than a doomsday scenario like the one we saw in the movie The Terminator, with humans and machines at war with one another, the end of the human race may simply mean that we end up changing into a different species, just as we’ve done throughout our evolutionary history.  Only this time, it will be a transition from purely biological evolution to a cybernetic hybrid variation.  Furthermore, if it happens, it will be a transition that will likely increase our intelligence (and many other capabilities) to unfathomable levels, giving us an unprecedented ability to act based on greater knowledge of the consequences of our actions as we move forward.  We should be cautious indeed, but we should also embrace our ongoing evolution and eventual transhumanism.