“Black Mirror” Reflections: Playtest (S3, E2)


Black Mirror, the British science-fiction anthology series created by Charlie Brooker, does a pretty good job experimenting with a number of illuminating concepts that highlight how modern culture is becoming increasingly shaped by (and vulnerable to) various technological advances and changes.  I’ve been interested in a number of these concepts for many years now, not least because of the many important philosophical implications (including a number of moral issues) that they point to.  I’ve decided to start a blog post series that will explore the contents of these episodes.  My intention with this blog post series, which I’m calling “Black Mirror” Reflections, will be to highlight some of my own takeaways from some of my favorite episodes.

I’d like to begin with season 3, episode 2, “Playtest”.  I’m just going to give a brief summary here.  In this episode, Cooper (Wyatt Russell) decides to travel around the world, presumably as a means of dealing with the recent death of his father, who died from early-onset Alzheimer’s.  After finding his way to London, his last destination before planning to return home to America, he runs into a problem with his credit card and bank account where he can’t access the money he needs to buy his plane ticket.

While he waits for his bank to fix the problem with his account, Cooper decides to earn some cash using an “Oddjobs” app, which provides him with a number of short-term job listings in the area, eventually leading him to “SaitoGemu,” a video game company looking for game testers to try a new kind of personalized horror game involving a (seemingly) minimally invasive brain implant procedure.  He briefly hesitates but, desperate for money and reasonably confident in its safety, he eventually consents to the procedure whereby the implant is intended to wire itself into the gamer’s brain, resulting in a form of perceptual augmentation and a semi-illusory reality.


The implant is designed to (among other things) scan your memories and learn what your worst fears are, in order to integrate these into the augmented perceptions, producing a truly individualized, and maximally frightening horror game experience.  Needless to say, at some point Cooper begins to lose track of what’s real and what’s illusory, and due to a malfunction, he’s unable to exit the game and he ends up going mad and eventually dying as a result of the implant unpredictably overtaking (and effectively frying) his brain.


There are a lot of interesting conceptual threads in this story, and the idea of perceptual augmentation is a particularly interesting theme that finds its way into a number of other Black Mirror episodes.  While holographic and VR-headset gaming technologies can produce their own form of augmented reality, perceptual augmentation carried out on a neurological level isn’t even in the same ballpark, having qualitative features that are far more advanced and which are more akin to those found in the Wachowskis’ The Matrix trilogy or Paul Verhoeven’s Total Recall.  Once the user is unable to distinguish between the virtual world and the external reality, with the simulator having effectively passed a kind of graphical version of the Turing Test, then one’s previous notion of reality is effectively shattered.  To me, this technological idea is one of the most awe-inspiring (yet sobering) ideas within the Black Mirror series.

The inability to discriminate between the two worlds means that both worlds are, for all practical purposes, equally “real” to the person experiencing them.  And the fact that one can’t tell the difference between such worlds ought to give a person pause to re-evaluate what it even means for something to be real.  If you doubt this, then just imagine if you were to find out one day that your entire life has really been the result of a computer simulation, created and fabricated by some superior intelligence living in a layer of reality above your own (we might even think of this being as a “god”).  Would this realization suddenly make your remembered experiences imaginary and meaningless?  Or would your experiences remain just as “real” as they’ve always been, even if they now have to be reinterpreted within a context that grounds them in another layer of reality?

To answer this question honestly, we ought to first realize that we’re likely fully confident that what we’re experiencing right now is reality, is real, is authentic, and is meaningful (just as Cooper was at some point in his gaming “adventure”).  And this seems to be at least a partial basis for how we define what is real, and how we differentiate the most vivid and qualitatively rich experiences from those we might call imaginary, illusory, or superficial.  If what we call reality is really just a set of perceptions, encompassing every conscious experience from the merely quotidian to those we deem to be extraordinary, would we really be justified in dismissing all of these experiences and their value to us if we were to discover that there’s a higher layer of reality, residing above the only reality we’ve ever known?

For millennia, humans have pondered over whether or not the world is “really” the way we see it, with perhaps the most rigorous examination of this metaphysical question undertaken by the German philosopher Immanuel Kant, with his dichotomy of the phenomenon and the noumenon (i.e. the way we see the world or something in the world versus the way the world or thing “really is” in itself, independent of our perception).  Even if we assume the noumenon exists, we can never know anything about the thing in itself, by our being fundamentally limited by our own perceptual categories and the way we subjectively interpret the world.  Similarly, we can never know for certain whether or not we’re in a simulation.

Looking at the situation through this lens, we can then liken the question of how to (re)define reality within the context of a simulation with the question of how to (re)define reality within the context of a world as it really is in itself, independent of our perception.  Both the possibility of our being in a simulation and the possibility of our perceptions stemming from an unknowable noumenal world could be true (and would likely be unfalsifiable), and yet we still manage to use and maintain a relatively robust conception and understanding of reality.  This leads me to conclude that reality is ultimately defined by pragmatic considerations (mostly those pertaining to our ability to make successful predictions and achieve our goals), and thus the possibility of our one day learning about a new, higher level of reality should merely add to our existing conception of reality, rather than completely negating it, even if it turns out to be incomplete.

Another interesting concept in this episode involves the basic design of the individualized horror game itself, where a computer can read your thoughts and memories, and then surmise what your worst fears are.  This is a technological feat that is possible in principle, and one with far-reaching implications that concern our privacy, safety, and autonomy.  Just imagine if such a power were unleashed by corporations, or the governments owned by those corporations, to acquire whatever information they wanted from your mind, to find out how to most easily manipulate you in terms of what you buy, who you vote for, what you generally care about, or what you do any and every day of your life.  The Orwellian possibilities are endless.

Marketing firms (both corporatocratic and political) have already been making use of discoveries in psychology and neuroscience, finding new ways to more thoroughly exploit our cognitive biases to get us to believe and desire whatever will benefit them most.  Adding to this the future ability to read our thoughts and manipulate our perceptions (even if this is first implemented as a seemingly innocuous video game), this will establish a new means of mass surveillance, where we can each become a potential “camera” watching one another (a theme also highlighted in BM, S4E3: Crocodile), while simultaneously exposing our most private of thoughts, and transmitting them to some centralized database.  Once we reach these technological heights (it’s not a matter of if but when), depending on how it’s employed, we may find ourselves no longer having the freedom to lie or to keep a secret, nor the freedom of having any mental privacy whatsoever.

To be fair, we should realize that there are likely to be undeniable benefits in our acquiring these capacities (perceptual augmentation and mind-reading), such as creating virtual paradises with minimal resources; finding new ways of treating phobias, PTSD, and other pathologies; giving us the power of telepathy and superhuman intelligence by connecting our brains to the cloud; and giving us the ability to design error-proof lie detectors and other vast enhancements for maximizing personal security and reducing crime.  But there are also likely to be enormous losses in personal autonomy, as our available “choices” are increasingly produced and constrained by automated algorithms; there are likely to be losses in privacy, and increasing difficulties in ascertaining what is true and what isn’t, since our minds will be vulnerable to artificially generated perceptions created by entities and institutions that want to deceive us.

Although we’ll constantly need to be on the lookout for these kinds of potential dangers as they arise, in the end, we may find ourselves inadvertently allowing these technologies to creep into our lives, one consumer product at a time.


Irrational Man: An Analysis (Part 4, Chapter 11: The Place of the Furies)

In my last post in this series on William Barrett’s Irrational Man, I examined some of the work of existential philosopher Jean-Paul Sartre, which concluded part 3 of Barrett’s book.  The final chapter, Ch. 11: The Place of the Furies, will be briefly examined here, and this will conclude my eleven-part post series.  I’ve enjoyed this very much, but it’s time to move on to new areas of interest, so let’s begin.

1. The Crystal Palace Unmanned

“The fact is that a good dose of intellectualism-genuine intellectualism-would be a very helpful thing in American life.  But the essence of the existential protest is that rationalism can pervade a whole civilization, to the point where the individuals in that civilization do less and less thinking, and perhaps wind up doing none at all.  It can bring this about by dictating the fundamental ways and routines by which life itself moves.  Technology is one material incarnation of rationalism, since it derives from science; bureaucracy is another, since it aims at the rational control and ordering of social life; and the two-technology and bureaucracy-have come more and more to rule our lives.”

Regarding the importance and need for more intellectualism in our society, I think this can be better described as the need for more critical thinking skills and the need for people to be able to discern fact from fiction, to recognize their own cognitive biases, and to respect the adage that extraordinary claims require extraordinary evidence.  At the same time, in order to appreciate the existentialist’s concerns, we ought to recognize that there are aspects of human psychology including certain psychological needs that are inherently irrational, including with respect to how we structure our lives, how we express our emotions and creativity, how we maintain a balance in our psyche, etc.  But, since technology is not only a material incarnation of rationalism, but also an outlet for our creativity, there has to be a compromise here where we shouldn’t want to abandon technology, but simply to keep it in check such that it doesn’t impede our psychological health and our ability to live a fulfilling life.

“But it is not so much rationalism as abstractness that is the existentialists’ target; and the abstractness of life in this technological and bureaucratic age is now indeed something to reckon with.  The last gigantic step forward in the spread of technologism has been the development of mass art and mass media of communication: the machine no longer fabricates only material products; it also makes minds. (stereotypes, etc.).”

Sure enough, we’re living in a world where many of our occupations are but one of many layers of abstraction constituting our modern “machine of civilization”.  And the military-industrial complex that has taken over the modern world has certainly gone beyond the mass production of physical stuff to be consumed by the populace, and now includes the mass production and viral dissemination of memes as well.  Ideas can spread like viruses, and in our current globally interconnected world (which Barrett hadn’t yet seen to the same degree when writing this book), the spread of these ideas is much faster and more influential on culture than ever before.  The degree of indoctrination, and the perpetuated cycles of co-dependence between citizens and the corporatocratic, sociopolitical forces ruling our lives from above, have resulted in making our way of life and our thinking much more collective and less personal than at any other time in human history.

“Kierkegaard condemned the abstractness of his time, calling it an Age of Reflection, but what he seems chiefly to have had in mind was the abstractness of the professorial intellectual, seeing not real life but the reflection of it in his own mind.”

Aside from the increasingly abstract nature of modern living then, there’s also the abstractness that pervades our thinking about life, which detracts from our ability to actually experience life.  Kierkegaard had a legitimate point here, by pointing out the fact that theory cannot be a complete substitute for practice; that thought cannot be a complete substitute for action.  We certainly don’t want the reverse either, since action without sufficient forethought leads to foolishness and bad consequences.

I think the main point here is that we don’t want to miss out on living life by thinking about it too much.  While living a philosophically examined life is beneficial, it remains an open question exactly what balance is best for any particular individual to live the most fulfilling life.  In the meantime, we ought to simply recognize that there is the potential for an imbalance, and to try our best to avoid it.

“To be rational is not the same as to be reasonable.  In my time I have heard the most hair-raising and crazy things from very rational men, advanced in a perfectly rational way; no insight or feelings had been used to check the reasoning at any point.”

If you ignore our biologically-grounded psychological traits, or ignore the fact that there’s a finite range of sociological conditions for achieving human psychological health and well-being, then you can’t develop any theory that’s supposed to apply to humans and expect it to be reasonable or tenable.  I would argue that ignoring this subjective part of ourselves when making theories that are supposed to guide our behavior in any way is irrational, at least within the greater context of aiming to have not only logically valid arguments but logically sound arguments as well.  But, if we’re going to exclude the necessity for logical soundness in our conception of rationality, then the point is well taken.  Rationality is a key asset in responsible decision making but it should be used in a way that relies on or seeks to rely on true premises, taking our psychology and sociology into account.

“The incident (making hydrogen bombs without questioning why) makes us suspect that, despite the increase in the rational ordering of life in modern times, men have not become the least bit more reasonable in the human sense of the word.  A perfect rationality might not even be incompatible with psychosis; it might, in fact, even lead to the latter.”

Again, I disagree with Barrett’s use or conception of rationality here, but semantics aside, his main point still stands.  I think that we’ve been primarily using our intelligence as a species to continue to amplify our own power and maximize our ability to manipulate the environment in any way we see fit.  But as our technological capacity advances, our cultural evolution is getting increasingly out of sync with our biological evolution, and we haven’t been anywhere close to sufficient in taking our psychological needs and limitations into account as we continue to engineer the world of tomorrow.

What we need is rationality that relies on true or at least probable premises, as this combination should actually lead a person to what is reasonable or likely to be reasonable.  I have no doubt that without making use of a virtue such as reasonableness, rationality can become dangerous and destructive, but this is the case with every tool we use whether it’s rationality or other capacities both mental and material; tools can always be harmful when misused.

“If, as the Existentialists hold, an authentic life is not handed to us on a platter but involves our own act of self-determination (self-finitization) within our time and place, then we have got to know and face up to that time, both in its (unique) threats and its promises.”

And our use of rationality on an individual level should be used to help reach this goal of self-finitization, so that we can balance the benefits of the collective with the freedom of each individual that makes up that collective.

“I for one am personally convinced that man will not take his next great step forward until he has drained to the lees the bitter cup of his own powerlessness.”

And when a certain kind of change is perceived as something to be avoided, the only way to go through with it is by perceiving the status quo as something that needs to be avoided even more so, so that a foray out of our comfort zone is perceived as an actual improvement to our way of life.  But as long as we see ourselves as already all powerful and masters over our domain, we won’t take any major leap in a new direction of self and collective improvement.  We need to come to terms with our current position in the modern world so that we can truly see what needs repair.  Whether or not we succumb to one or both of the two major existential threats facing our species, climate change and nuclear war, is contingent on whether or not we set our eyes on a new prize.

“Sartre recounts a conversation he had with an American while visiting in this country.  The American insisted that all international problems could be solved if men would just get together and be rational; Sartre disagreed and after a while discussion between them became impossible.  “I believe in the existence of evil,” says Sartre, “and he does not.” What the American has not yet become aware of is the shadow that surrounds all human Enlightenment.”

Once again, if rationality is accompanied with true premises that take our psychology into account, then international problems could be solved (or many of them at least), but Sartre is also right insofar as there are bad ideas that exist, and people that have cultivated their lives around them.  It’s not enough to have people thinking logically, nor is some kind of rational idealism up to the task of dealing with human emotion, cognitive biases, psychopathy, and other complications in human behavior that exist.

The crux of the matter is that some ideas hurt us and other ideas help us, with our perception of these effects being the main arbiter driving our conception of what is good and evil; but there’s also a disparity between what people think is good or bad for them and what is actually good or bad for them.  I think that one could very plausibly argue that if people really knew what was good or bad for them, then applying rationality (with true or plausible premises) would likely work to solve a number of issues plaguing the world at large.

“…echoing the Enlightenment’s optimistic assumption that, since man is a rational animal, the only obstacles to his fulfillment must be objective and social ones.”

And here’s an assumption that’s certainly difficult to ground since it’s based on false premises, namely that humans are inherently rational.  We are unique in the animal kingdom in the sense that we are the only animal (or one of only a few animals) that have the capacity for rational thought, foresight, and the complex level of organization made possible from its use.  I also think that the obstacles to fulfillment are objective since they can be described as facts pertaining to our psychology, sociology, and biology, even if our psychology (for example) is instantiated in a subjective way.  In other words, our subjectivity and conscious experiences are grounded on or describable in objective terms relating to how our particular brains function, how humans as a social species interact with one another, etc.  But, fulfillment can never be reached let alone maximized without taking our psychological traits and idiosyncrasies into account, for these are the ultimate constraints on what can make us happy, satisfied, fulfilled, and so on.

“Behind the problem of politics, in the present age, lies the problem of man, and this is what makes all thinking about contemporary problems so thorny and difficult…anyone who wishes to meddle in politics today had better come to some prior conclusions as to what man is and what, in the end, human life is all about…The speeches of our politicians show no recognition of this; and yet in the hands of these men, on both sides of the Atlantic, lies the catastrophic power of atomic energy.”

And as of 2018, we’ve seen the Doomsday Clock reach two minutes to midnight, having inched one minute closer to our own destruction since 2017.  The dominance hierarchy being led and reinforced by the corporatocratic plutocracy is locked into a narrow form of tunnel vision, hell-bent on maintaining if not exacerbating the wealth and power disparities that plague our country and the world as a whole, despite the fact that this is not sustainable in the long run, nor best for the fulfillment of those promoting it.

We the people do share a common goal of trying to live a good, fulfilling life; to have our basic needs met, and to have no fewer rights than anybody else in society.  You’d hardly know that this common ground exists between us when looking at the state of our political sphere, which is likely as polarized now in the U.S. (if not more so) as it was even during the Civil War.  Clearly, we have a lot of work to do to reevaluate what our goals ought to be, what our priorities ought to be, and we need a realistic and informed view of what it means to be human before any of these goals can be realized.

“Existentialism is the counter-Enlightenment come at last to philosophic expression; and it demonstrates beyond anything else that the ideology of the Enlightenment is thin, abstract, and therefore dangerous.”

Yes, but the Enlightenment has also been one of the main driving forces leading us out of theocracy, out of scientific illiteracy, and towards an appreciation of reason and evidence (something the U.S. at least, is in short supply of these days), and thus it has been crucial in giving us the means for increasing our standard of living, and solving many of our problems.  While the technological advancements derived from the Enlightenment have also been a large contributor to many of our problems, the current existential threats we face including climate change and nuclear war are more likely to be solved by new technologies, not an abolition of technology nor an abolition of the Enlightenment-brand of thinking that led to technological progress.  We simply need to better inform our technological goals of the actual needs and constraints of human beings, our psychology, and so on.

“The finitude of man, as established by Heidegger, is perhaps the death blow to the ideology of the Enlightenment, for to recognize this finitude is to acknowledge that man will always exist in untruth as well as truth.  Utopians who still look forward to a future when all shadows will be dispersed and mankind will dwell in a resplendent Crystal Palace will find this recognition disheartening.  But on second thought, it may not be such a bad thing to free ourselves once and for all from the worship of the idol of progress; for utopianism-whether the brand of Marx or Nietzsche-by locating the meaning of man in the future leaves human beings here and now, as well as all mankind up to this point, without their own meaning.  If man is to be given meaning, the Existentialists have shown us, it must be here and now; and to think this insight through is to recast the whole tradition of Western thought.”

And we ought to take our cue from Heidegger, at the very least, to admit that we are finite, our knowledge is limited, and it always will be.  We will not be able to solve every problem, and we would do ourselves and the world a lot better if we admitted our own limitations.  But to avoid being overly cynical and simply damning progress altogether, we need to look for new ways of solving our existential problems.  Part of the solution that I see for humanity moving forward is going to be a combination of advancements in a few different fields.

By making better use of genetic engineering, we’ll one day have the ability to change ourselves in remarkable ways in order to become better adapted to our current world.  We will be able to re-sync our biological evolution with our cultural evolution so we no longer feel uprooted, like a fish out of water.  Continuing research in neuroscience will allow us to learn more about how our brains function and how to optimize that functioning.  Finally, the strides we make in computing and artificial intelligence should allow us to vastly improve our simulation power and arm us with greater intelligence for solving all the problems that we face.

Overall, I don’t see progress as the enemy, but rather that we have an alignment problem between our current measures of progress, and what will actually lead to maximally fulfilling lives.

“The realization that all human truth must not only shine against an enveloping darkness, but that such truth is even shot through with its own darkness may be depressing, and not only to utopians…But it has the virtue of restoring to man his sense of the primal mystery surrounding all things, a sense of mystery from which the glittering world of his technology estranges him, but without which he is not truly human.”

And if we actually use technology to change who we are as human beings, by altering the course of natural selection and our ongoing evolution (which is bound to happen with or without our influence, for better or worse), then it’s difficult to say what the future really holds for us.  There are cultural and technological forces that are leading to transhumanism, and this may mean that one day “human beings” (or whatever name is chosen for the new species that replaces us) will be inherently rational, or whatever we’d like our future species to be.  We’ve stumbled upon the power to change our very nature, and so it’s far too naive, simplistic, unimaginative, and short-sighted to say that humans will “always” or “never” be one way or another.  Even if this were true, it wouldn’t negate the fact that one day modern humans will be replaced by a superseding species which has different qualities than what we have now.

2. The Furies

“…Existentialism, as we have seen, seeks to bring the whole man-the concrete individual in the whole context of his everyday life, and in his total mystery and questionableness-into philosophy.”

This is certainly an admirable goal to combat simplistic abstractions of what it means to be human.  We shouldn’t expect to be able to abstract a certain capacity of human beings (such as rationality), consider it in isolation (no consideration of context), formulate a theory around that capacity, and then expect to get a result that is applicable to human beings as they actually exist in the world.  Furthermore, all of the uncertainties and complexities in our human experiences, no matter how difficult they may be to define or describe, should be given their due consideration in any comprehensive philosophical system.

“In modern philosophy particularly (philosophy since Descartes), man has figured almost exclusively as an epistemological subject-as an intellect that registers sense-data, makes propositions, reasons, and seeks the certainty of intellectual knowledge, but not as the man underneath all this, who is born, suffers, and dies…But the whole man is not whole without such unpleasant things as death, anxiety, guilt, fear and trembling, and despair, even though journalists and the populace have shown what they think of these things by labeling any philosophy that looks at such aspects of human life as “gloomy” or “merely a mood of despair.”  We are still so rooted in the Enlightenment-or uprooted in it-that these unpleasant aspects of life are like Furies for us: hostile forces from which we would escape (the easiest way is to deny that the Furies exist).”

My take on all of this is simply that multiple descriptions of human existence are needed to account for all of our experiences, thoughts, values, and behavior.  And it is what we value given our particular subjectivity that needs to be primary in these descriptions, and primary with respect to how we choose to engineer the world we live in.  Who we are as a species is a complex mixture of good and bad, lightness and darkness, and stability and chaos; and we shouldn’t deny any of these attributes nor repress them simply because they make us uncomfortable.  Instead, we would do much better to face who we are head on, and then try to make our lives better while taking our limitations into account.

“We are the children of an enlightenment, one which we would like to preserve; but we can do so only by making a pact with the old goddesses.  The centuries-long evolution of human reason is one of man’s greatest triumphs, but it is still in process, still incomplete, still to be.  Contrary to the rationalist tradition, we now know that it is not his reason that makes man man, but rather that reason is a consequence of that which really makes him man.  For it is man’s existence as a self-transcending self that has forged and formed reason as one of its projects.”

This is well said, although I’d prefer to couch this in different terms: it is ultimately our capacity to imagine that makes us human and able to transcend our present selves by pointing toward a future self and a future world.  We do this in part by updating our models of the world, simulating new worlds, and trying to better understand the world we live in by engaging with it, re-shaping it, and finding ways of better predicting its causal structure.  Reason and rationality have been integral in updating our models of the world, but they’ve also been hijacked to some degree by a kind of super-normal stimuli reinforced by technology and by our living in a world that is entirely alien to our evolutionary niche, and these forces have largely dominated our lives in a way that overlooks our individualism, emotions, and feelings.

Abandoning reason and rationality is certainly not the answer to this existential problem; rather, we just need to ensure that they’re being applied in a way that aligns with our moral imperatives and that they’re ultimately grounded on our subjective human psychology.

Co-evolution of Humans & Artificial Intelligence

In my last post, I wrote a little bit about the concept of personal identity in terms of what some philosophers have emphasized and my take on it.  I wrote that post in response to an interesting blog post written by James DiGiovanna over at A Philosopher’s Take.  James has written another post related to the possible consequences of integrating artificial intelligence into our societal framework, but rather than discussing personal identity as it relates to artificial intelligence, he discussed how the advancements made in machine learning and so forth are leading to the future prospects of effective companion AI, or what he referred to as programmable friends.  The main point he raised in that post was the fact that programmable friends would likely have a very different relationship dynamic with us compared with our traditional (human) friends.  James also spoke about companion AI in terms of their also being laborers (as well as friends), but for the purposes of this post I won’t discuss those laborer aspects of future companion AI (even if the labor aspect is what drives us to make companion AI in the first place).  I’ll be limiting my comments here to the friendship, or social dynamic, aspects only.  So what aspects of programmable AI should we start thinking about?

Well, for one, we don’t currently have the ability to simply reprogram a friend to be exactly as we want them to be, avoiding conflicts entirely, sharing every interest we have, and so on; instead, there is a give-and-take relationship dynamic that we’re used to dealing with.  We learn new ways of behaving, of looking at the world, and even of looking at ourselves when we have friendships with people who differ from us in certain ways.  Much of the expansion and beneficial evolution of our perspectives is the result of certain conflicts that arise between ourselves and our friends, where different viewpoints can clash, often forcing a person to reevaluate their own position based on the value they place on the viewpoints of their friends.  If we could simply reprogram our friends, as in the case of some future AI companions, what would this do to our moral, psychological, and intellectual growth?  There would surely be some positive effects (less conflict in some cases, and thus more short-term happiness), but we’d be missing out on a host of interpersonal benefits that we gain from the types of friendships we’re used to having (and thus we’d likely have less overall happiness as a result).

We can see where evolution ties into all of this: we have evolved as a social species to interact with others who are more or less like us, so when we envision these possible future AI friendships, it should become obvious why certain problems would be inevitable, largely because of their incompatibility with our evolved social dynamic.  To be sure, some of these problems could be mitigated by accounting for them in the initial design of the companion AI.  In general, this could be done by making the AI more human-like in the first place, and this could be advertised as a beneficial AI “social software package” so that people looking to get a companion AI would be inclined to choose this version even if they could opt for an entirely reprogrammable one.

Some features of a “social software package” could include a limit on the number of ways the AI could be reprogrammed, such that only very serious conflicts could be avoided through reprogramming, but not all conflicts.  The AI could also weight certain opinions, just as we do, and be more assertive with regard to certain propositions.  Once the AI has learned its human counterpart’s personality, values, opinions, etc., it could be programmed to intentionally challenge that human by offering different points of view and by using the Socratic method (at least from time to time).  If people realized that they could gain wisdom, knowledge, tolerance, and various social aptitudes from their companion AI, I would think that would be a marked selling point.
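To make the idea above more concrete, here is a minimal sketch, in Python, of what such a “social software package” might look like as a configuration: a hard cap on owner-initiated reprogramming plus per-topic opinion weights.  All names here (SocialSoftwarePackage, max_reprogram_ops, etc.) are my own illustrative inventions, not any real product or API.

```python
from dataclasses import dataclass, field

@dataclass
class SocialSoftwarePackage:
    """Hypothetical config limiting how far a companion AI can be reprogrammed."""
    max_reprogram_ops: int = 3           # only a few very serious conflicts can be erased
    reprogram_ops_used: int = 0
    opinion_weights: dict = field(default_factory=dict)  # topic -> assertiveness (0..1)
    socratic_challenge_rate: float = 0.2  # fraction of exchanges where the AI pushes back

    def request_reprogram(self, topic: str) -> bool:
        """Allow reprogramming only up to the cap; remaining conflicts stay social."""
        if self.reprogram_ops_used >= self.max_reprogram_ops:
            return False
        self.reprogram_ops_used += 1
        self.opinion_weights[topic] = 0.0  # the AI now defers entirely on this topic
        return True

demo = SocialSoftwarePackage(opinion_weights={"politics": 0.9, "music": 0.4})
print(demo.request_reprogram("politics"))  # True: one of the limited overrides
```

The design choice the sketch highlights is that the limit lives in the package itself, so an owner cannot simply reprogram their way out of every disagreement.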

Another factor that I think will likely play a role in mitigating the possible social-dynamic clash between (programmable) companion AI and humans is that humans are also likely to become more and more integrated with AI technology generally.  That is, as we continue to make advancements in AI technology, we are also likely to integrate much of that technology into ourselves, making humans more or less into cyborgs, i.e., cybernetic organisms.  Given the path we’re already on, with all the smartphones, apps, and other gadgets and computer systems that have become extensions of ourselves, the next obvious step (which I’ve mentioned elsewhere, here and here) is to remove the external peripherals so that they are directly accessible via our consciousness, with no need to interface with external hardware.  If we can access “the cloud” with our minds (say, via Bluetooth or the like), then the apps and all the fancy features can become a part of our minds, adding to the ways we are able to think, providing an internet’s worth of knowledge at our cognitive disposal, and so on.  I could see this technology eventually allowing us to change our senses and perceptions, including the ability to add virtual objects amalgamated with the rest of the external reality we perceive (such as virtual friends that we see and interact with who aren’t physically present outside of our minds, even though they appear to be).

So if we start to integrate these kinds of technologies into ourselves as we are also creating companion AI, then we may end up with the ability to reprogram ourselves alongside those programmable companions.  In effect, our own qualitatively human social dynamic may start to change markedly and become more and more compatible with that of future AI.  The way I think this will most likely play out is that we will try to make AI more like us as we try to make ourselves more like AI, co-evolving with one another, sharing advantages with one another, and eventually becoming indistinguishable from one another.  Along this journey, however, we will also become very different from the way we are now, and after enough time passes, we’ll likely change so much that we’d be unrecognizable to people living today.  My hope is that as we use AI to improve our intelligence and increase our knowledge of the world generally, we will also continue to improve our knowledge of what makes us happiest (as social creatures or otherwise) and thus of how to live the best and most morally fruitful lives that we can.  This will include improving our knowledge of social dynamics and of the ways we can maximize all the interpersonal benefits therein.  Artificial intelligence may help us to accomplish this, however paradoxical or counterintuitive that may seem to us now.

Artificial Intelligence: A New Perspective on a Perceived Threat

There is a growing fear in many people of the future capabilities of artificial intelligence (AI), especially as the intelligence of these computing systems begins to approach that of human beings.  Since it is likely that AI will eventually surpass the intelligence of humans, some wonder if these advancements will be the beginning of the end of us.  Stephen Hawking, the eminent British physicist, was recently quoted by the BBC as saying “The development of full artificial intelligence could spell the end of the human race.”  BBC technology correspondent Rory Cellan-Jones said in a recent article “Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.”  Hawking then said “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Hawking isn’t alone in this fear, and clearly the fear isn’t ill-founded.  It doesn’t take a rocket scientist to realize that human intelligence has allowed us to overcome just about every environmental barrier we’ve come across, driving us to the top of the food chain.  We’ve all seen the benefits of our high intelligence as a species, but we’ve also seen what can happen when that intelligence is imperfect, operates on incomplete or fallacious information, or lacks an adequate capability for accurately determining the long-term consequences of our actions.  Because we have such a remarkable ability to manipulate our environment, that manipulation can be extremely beneficial or extremely harmful, as we’ve seen with the numerous species we’ve threatened on this planet (some of which have gone extinct).  We’ve even threatened many fellow human beings in the process, whether intentionally or not.  Our intelligence, combined with elements of short-sightedness, selfishness, and aggression, has led to some abhorrent products throughout human history, from the mass enslavement of others spanning back thousands of years to modern forms of extermination weaponry (e.g., bio-warfare and nuclear bombs).  If AI reaches and eventually surpasses our level of intelligence, it is reasonable to consider the possibility that we may find ourselves on a lower rung of the food chain (so to speak), potentially becoming enslaved or exterminated by this advanced intelligence.

AI: Friend or Foe?

So what exactly prompted Stephen Hawking to make these claims?  As the BBC article mentions, “His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI…The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.  Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.”

Reading this article suggests another possibility or perspective that I don’t think many people are considering with regard to AI technology.  What if AI simply replaces us gradually, by actually becoming the new “us”?  That is, as we progress further in cyborg (i.e., cybernetic organism) technologies, using advancements similar to Stephen Hawking’s communication upgrade, we are ultimately becoming amalgams of biological and synthetic machines anyway.  Even the technology that we currently operate through an external peripheral interface (like smartphones and other computers) will likely become integrated into our bodies internally.  Google Glass, voice recognition, and other technologies like those used by Hawking are certainly taking us in that direction.  It’s not difficult to imagine one day being able to think about a particular question or keyword, having an integrated Bluetooth implant in our brain recognize the mental/physiological command cue, and successfully retrieving the desired information wirelessly from an online cloud or internet database of some form.  Going further still, we will likely one day be able to take sensory information that enters the neuronal network of our brain and, once again, send it wirelessly to supercomputers stored elsewhere that can process the information with billions of pattern recognition modules.  The 300 million or so pattern recognition modules currently in our brain’s neocortex would be dwarfed by this new peripheral-free, wirelessly accessible technology.

For those who aren’t very familiar with the function or purpose of the brain’s neocortex, we use its 300 million or so pattern recognition modules to notice patterns in the environment around us (and meta-patterns of neuronal activity within the brain), allowing us to recognize and memorize sensory data, and to think.  Ultimately, we use this pattern recognition to accomplish goals, solve problems, and gain knowledge from past experience.  In short, these pattern recognition modules are the primary physiological means of our intelligence.  Thus, increasing the number of pattern recognition modules (as well as the number of hierarchies between different sets of them) should only increase our intelligence.  Regardless of whether we integrate computer chips in the brain to do some or all of this processing locally, or use wireless communication to an external supercomputer farm, we will likely continue to integrate our biology with synthetic analogs to increase our capabilities.
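The hierarchical arrangement described above can be illustrated with a deliberately tiny toy sketch in Python (this is not a neuroscience model, just the organizing idea): low-level “modules” each fire on a simple feature, and a higher-level module fires only when all of its child modules fire.

```python
def make_module(pattern):
    """Return a low-level recognizer that fires (True) when its pattern appears."""
    return lambda data: pattern in data

# Level 1: modules tuned to simple features (here, individual letters).
letter_modules = {ch: make_module(ch) for ch in "cat"}

# Level 2: a module that fires only when all of its child modules fire,
# recognizing a more abstract pattern built from the lower-level ones.
def word_module(data):
    return all(m(data) for m in letter_modules.values())

print(word_module("the cat sat"))  # fires: all letter features are present
print(word_module("the dog sat"))  # does not fire: the 'c' feature is missing
```

Adding more modules and more hierarchy levels lets such a system recognize richer patterns, which is the sense in which more modules and hierarchies could mean more recognition capacity.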

When we realize that higher intelligence allows us to better predict the consequences of our actions, we can see that our increasing integration with AI will likely have incredible survival benefits.  This integration will also catalyze further technologies that could never have been accomplished with biological brains alone, because we simply aren’t naturally intelligent enough to think beyond a certain level of complexity.  As Hawking said regarding AI that could surpass our intelligence, “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”  Yes, but if that AI becomes integrated into us, then really it is humans who are finding a way to circumvent slow biological evolution with a non-biological substrate that supersedes it.

At this point I think it is relevant to mention something else I’ve written about previously: the advancements being made in genetic engineering and how they are taking us into our last and grandest evolutionary leap, a “conscious evolution” that allows us to circumvent our own slow biological evolution through intentionally engineered design.  As we gain more knowledge in the field of genetic engineering (combined with the increasing simulation and computing power afforded by AI), we will likely be able to catalyze our own biological evolution such that we can evolve quickly as we increase our cyborg integrations with AI.  So we will likely see genetic engineering capabilities developing in close parallel with AI advancements, with each field substantially contributing to the other and ultimately leading to our transhumanism.

Final Thoughts

It seems clear that advancements in AI are providing us with more tools to accomplish ever more complex goals as a species.  As we continue to integrate AI into ourselves, what we now call “human” is simply going to change as we change.  This would happen regardless, as human biological evolution continues its course into another speciation event, similar to the one that led to humans in the first place.  In fact, if we wanted to maintain the way we are right now as a species, biologically speaking, it would actually require us to use genetic engineering to do so, because genetic differentiation mechanisms (e.g., imperfect DNA replication, mutations, etc.) are inherent in our biology.  Thus, those who argue against certain technologies out of a desire to maintain humanity and human attributes must also realize that the very technologies they despise are in fact their only hope for doing so.  More importantly, the goal of keeping humans as they currently are goes against our natural evolution, and we should embrace change, even if we embrace it with caution.

If AI continues to become further integrated into ourselves, forming a cyborg amalgam of some kind, then as it advances we may one day choose to eliminate the biological substrate of that amalgam entirely, if it is both possible and advantageous to do so.  Even if we maintain some of our biology and merely hybridize with AI, Hawking was right to point out that “The development of full artificial intelligence could spell the end of the human race.”  Rather than a doomsday scenario like the one in the movie The Terminator, however, with humans and machines at war with one another, the end of the human race may simply mean that we end up changing into a different species, just as we’ve done throughout our evolutionary history.  Only this time, it will be a transition from purely biological evolution to a cybernetic hybrid variation.  Furthermore, if it happens, it will be a transition that will likely increase our intelligence (and many other capabilities) to unfathomable levels, giving us an unprecedented ability to act on more knowledge of the consequences of our actions as we move forward.  We should be cautious indeed, but we should also embrace our ongoing evolution and eventual transhumanism.

Mind, Body, and the Soul: The Quest for an Immaterial Identity

There’s little if any doubt that the brain (the human brain in particular) is the most complex entity or system we’ve ever encountered in the known universe, so it is not surprising that it has allowed humans to reach the top of the food chain and to manipulate our environment more than any other creature on Earth.  Not only has it provided humans with the necessary means for surviving countless environmental pressures, effectively serving as an anchor and catalyst for our continued natural selection over time (through learning, language, adaptive technology, etc.), but it has also allowed humans to become aware of themselves, of their own consciousness, and of their own brains in numerous other ways.  The brain appears to be the first evolved feature of an organism capable of mapping the entire organism (including its interaction with the external environment), and it may even be the case that consciousness later evolved as a result of the brain making maps of itself.  Beyond these capabilities, the human brain has also been able to map itself in terms of perceptually acquired patterns related to its own activity (i.e., when we study and learn about how our brains work).

It isn’t at all surprising when people marvel over the complexity, beauty, and even seemingly surreal qualities of the brain as it produces the qualia of our subjective experience, including all of our sensations, emotions, and the feelings that ensue.  Some of our human attributes are so seemingly remarkable that many people have gone so far as to say that at least some of them are supernatural, supernaturally endowed, and/or forever exclusive to humans.  For example, some religious people claim that humans alone have some kind of immaterial soul that exists outside of our experiential reality.  Some also believe that humans alone possess free will, or are conscious in some way forever exclusive to humans (some have even argued that consciousness in general is an exclusively human trait), along with a host of other (perhaps anthropocentric) “human only” attributes.  In the interest of philosophical exploration, I’d like to consider and evaluate some of these claims about “exclusively human” attributes.  In particular, I’d like to focus on the non-falsifiable claim of having a soul, with the aid of reason and a couple of thought experiments, although these thought experiments may also shed some light on other purported “exclusively human” attributes (e.g., free will, consciousness, etc.).  For simplicity in these thought experiments, I may periodically refer to many or all purported “humanly exclusive” attributes simply as “H”.  Let’s begin by briefly examining some common conceptions of a soul and how it is purported to relate to the physical world.

What is a Soul?

It seems that most people would define a soul to be some incorporeal entity or essence that serves as an immortal aspect or representation of an otherwise mortal/living being.  Furthermore, many people think that souls are something possessed by human beings alone.  There are also people who ascribe souls to non-living entities (such as bodies of water, celestial bodies, wind, etc.), but regardless of these distinctions, for those that believe in souls, there seems to be something in common: souls appear to be non-physical entities correlated, linked, or somehow attached to a particular physical body or system, and are usually believed to give rise to consciousness, a “life force”, animism, or some power of agency.  Additionally, they are often believed to transcend material existence through their involvement in some form of an afterlife.  While it is true that souls and any claims about souls are unfalsifiable and thus are excluded from any kind of empirical investigation, let’s examine some commonly held assumptions and claims about souls and see how they hold up to a more critical examination.

Creation or Correlation of Souls

Many religious people now claim that a person’s life begins at conception (a specific stage of reproduction identified by science), and thus it would be reasonable to assume that if they have a soul, that soul is effectively created at conception.  However, some also believe that all souls have co-existed for the same amount of time (perhaps since the dawn of our universe), and that souls are in some sense waiting to be linked to a physical person once that person is conceived or comes into existence.  Another way of expressing this latter idea is the belief that all souls have existed since some time long ago, but only after the reproductive conception of a person does that soul acquire a physical correlate or incarnation.  In any case, the presumed soul is believed to be correlated with a particular physical body (generally presumed to be a “living” body, if not a human body), and this living body has been defined by many to begin its life either at conception (i.e., fertilization), shortly thereafter as an embryo (i.e., once the fertilized egg/cell undergoes division at least once), or once it is considered a fetus (depending on the context for such a definition).  The easiest definition to use for the purposes of this discussion is that life begins at conception (i.e., fertilization).

For one, regardless of the definition chosen, it seems difficult to define exactly when the particular developmental stage in question is reached.  Conception could be defined to take place once the spermatozoon’s DNA enters the oocyte, or perhaps not until some threshold has been reached in a particular step of the process afterward (e.g., some time after the two parental sets of DNA have combined to form the zygote’s genome).  Either way, most proponents of the idea of a human soul seem to assume that a soul is created, or at least correlated (if created some time earlier), at the moment of fertilization or not long after.  At this point, the soul is believed to be correlated with or representative of the now “living” being (which is of course composed of physical materials).

At a most basic level, one could argue that if we knew exactly when a soul was created or correlated with a particular physical body (e.g., a fertilized egg), then by reversing the last step in the process that instigated the creation/correlation of the soul, we should be able to destroy or decorrelate that soul.  Also, if a soul were in fact correlated with an entire fertilized egg, then if we removed even one atom or molecule, would that correlation change?  If not, then it would appear that the soul is not actually correlated with the entire fertilized egg, but rather with some higher-level aspect or property of it (whatever that may be).

Conservation & Identity of Souls

Assuming a soul is in fact created or correlated with a fertilized egg, what would happen in the case of chimerism, where two fertilized eggs fuse together in the early stages of embryonic development?  Would this developing individual have two souls?  By the definition or assumptions given earlier, if a soul is correlated with a fertilized egg in some way, and two fertilized eggs (each with their own soul) merge, then this would indicate one of a few possibilities.  Either two souls merged into one (or one was destroyed), which would demonstrate that the number of souls is not conserved (indicating that not all souls are eternal/immortal); or the two souls would co-exist in that one individual, implying that not all individuals have the same number of souls (some have one, some may have more) and thus that souls don’t each have a unique identity tied to a particular person; or, after the merging of fertilized eggs took place, one of the two souls would detach from or become decorrelated with its physical counterpart, and the remaining soul would keep the spoils of both fertilized eggs, so to speak.

In the case of identical twins, triplets, etc., a fertilized egg eventually splits, and we are left with the opposite conundrum.  It would seem that we start with one soul that eventually splits into two or more, and thus there would be another violation of the conservation of the number of souls.  Alternatively, if the number of souls is indeed conserved, an additional previously existing soul could become correlated with the second fertilized egg produced.  Yet another possibility would be to say that the “twins to be” (i.e., the fertilized egg prior to splitting) have two souls to start with, and when the egg splits, the souls are segregated and each pre-destined twin is given their own.

The only way to avoid these implications would be to modify the assumption given earlier regarding when a soul is created or correlated.  It would have to be defined such that a soul is created or correlated with a physical body some time after an egg is fertilized, once it is no longer possible for it to fuse with another fertilized egg and once it can no longer split into fertilized multiples (i.e., twins, triplets, etc.).  If this were true, then one could no longer say that a fertilized egg necessarily has a soul, for that wouldn’t technically be the case until some time afterward, when chimerism or monozygotic multiples were no longer possible.

If people believe in non-physical entities that can’t be seen or in any way extrospectively verified, it’s not much of a stretch to say that they can come up with a way to address these questions or reconcile these issues with yet more unfalsifiable claims.  Some of these might not even be issues for various believers, but I mention these potential issues only to point out the apparent arbitrariness or poorly defined aspects of many claims and assumptions regarding souls.  Now let’s look at a few thought experiments to further analyze the concept of a soul and purported “exclusively human” attributes (i.e., “H”) as mentioned in the introduction of this post.

Conservation and Identity of “H”

Thought Experiment # 1: Replace a Neuron With a Non-Biological Analog

What if one neuron in a person’s brain were replaced with a non-biological/artificial version?  That is, what if some kind of silicon-based (or other non-carbon-based) analog to a neuron were used to replace it?  We are assuming that this replacement will accomplish the same vital function, that is, produce the same subjective experience and behavior.  This non-biological neuronal analog might be powered by ATP (adenosine triphosphate) and respond to neurotransmitters with electro-chemical sensors, although it wouldn’t necessarily have to be constrained by the same power or signal transmission media (or mechanisms) as long as it produced the same end result (i.e., the same subjective experience and behavior).  As long as the synthetic neuronal replacement accomplished the same ends, the attributes of the person (their identity, their beliefs, their actions, etc.) should be unaffected despite these changes to their hardware.

Regarding the soul: if souls do in fact exist and are not physically connected to the body (even though people claim that souls are somehow associated with a particular physical body), then it seems reasonable to assume that changing a part of the physical body should have no effect on an individual’s possession of that soul (or any “H” for that matter), especially if the important attributes of the individual, i.e., their beliefs, thoughts, memories, and subsequent actions, were for all practical purposes (if not completely) the same as before.  Even if there were some changes in the important aspects of the individual, say, a slight personality change after some level of brain surgery, could anyone reasonably argue that their presumed soul (or their “H”) was lost as a result?  If physical modifications of the body led to the loss of a soul (or of any elements of “H”), then there would be quite a large number of people (and an increasing number at that) who no longer have souls (or “H”), since many people have had various prosthetic modifications used in or on their bodies (including brain and neural prosthetics) as well as other intervening mediation of body/brain processes (e.g., through medication, transplants, various levels of critical life support, etc.).

For those who think that changing the body’s hardware would somehow disconnect the presumed soul from that person’s body (or eliminate other elements of their “H”), this assumption is strongly challenged by the fact that many of the atoms in the human body are replaced (some of them several times over) throughout one’s lifetime anyway.  Despite this drastic biological “hardware” change, in which our material selves are constantly being replaced with new atoms from the food we eat and the air we breathe (among other sources), we still manage to maintain our memories and our identity, simply because the functional arrangements of the brain cells (i.e., neurons and glial cells) composed of those atoms are roughly preserved over time, and thus the information contained in those arrangements and their resulting processes is preserved over time.  We can analogize this important point by thinking about a computer that has had its hardware replaced in a way that maintains its original physical state: as a result of this configuration preservation, it should also maintain its original memory, programs, and normal functional operation.  One could certainly argue that the computer in question is technically no longer the “same” computer because it no longer has any of the original hardware.  However, the information regarding the computer’s physical state, that is, the specific configuration and states of parts that allow it to function exactly as it did before the hardware replacement, is preserved.  Thus, for all practical purposes in terms of its identity, the computer remains the same regardless of the complete hardware change.
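The computer analogy above can be made concrete with a minimal sketch in Python: capture a machine’s full state, restore it onto brand-new “hardware” (a new object), and observe that its behavior is preserved even though the two are physically distinct.  The class and method names are invented for illustration only.

```python
class Computer:
    """A toy machine whose identity, for practical purposes, is its state."""
    def __init__(self, memory=None):
        self.memory = dict(memory or {})  # the configuration/state

    def snapshot(self):
        """Capture the complete state (the 'configuration preservation')."""
        return dict(self.memory)

    def recall(self, key):
        return self.memory.get(key)

old = Computer({"greeting": "hello", "count": 42})
new = Computer(old.snapshot())  # entirely new "hardware", same configuration

print(new.recall("greeting"))  # "hello": behavior is preserved across the swap
print(old is new)              # False: the two objects are physically distinct
```

The point of the sketch is exactly the point in the paragraph: `old` and `new` share no “hardware” at all, yet because the configuration was preserved, the new machine functions identically to the original.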

This is an important point to consider for those who think that replacing the hardware of the brain (even if limited to a biologically sustained replacement) is either theoretically impossible, or that it would destroy one’s ability to be conscious, to maintain their identity, to maintain their presumed soul, or any presumed element of “H”.  The body naturally performs these hardware changes (through metabolism, respiration, excretion, etc.) all the time and thus the concept of changing hardware while maintaining the critical aspects of an individual is thoroughly demonstrated throughout one’s lifetime.  On top of this, the physical outer boundary that defines our bodies is also arbitrary in the sense that we exchange atoms between our outer surface and the environment around us (e.g. by shedding skin cells, or through friction, molecular desorption/adsorption/absorption, etc.).  The key idea to keep in mind is that these natural hardware changes imply that “we” are not defined specifically by our hardware or some physical boundary with a set number of atoms, but rather “we” are based on how our hardware is arranged/configured (allowing for some variation of configuration states within some finite acceptable range), and the subsequent processes and functions that result from such an arrangement as mediated by the laws of physics.

Is the type of hardware important?  It may be that changing a human’s hardware to a non-biological version could never accomplish exactly the same subjective experience and behavior that was possible with the biological hardware, but we simply don’t know that this is the case.  It may be that both the type of hardware and the configuration are necessary for a body and brain to produce the same subjective experience and behavior.  However, the old adage “there’s more than one way to skin a cat” has applied to many types of technologies and to the means used to accomplish any number of goals.  There are often several hardware types and configurations that can accomplish a particular task, even if, after changing the hardware, the configuration must also be changed to accomplish a comparable result.  The question becomes: which parts or aspects of the neural processes in the brain produce subjective experience and behavior?  If this becomes known, we should be able to learn how biologically-based hardware and its configuration work together to accomplish subjective experience and behavior, and then learn whether non-biologically-based hardware (perhaps with its own particular configuration) can accomplish the same task.  For the purposes of this thought experiment, let’s assume that we can swap out the hardware with a different type, even if, in order to preserve the same subjective experience and behavior, the configuration must be significantly different than it was with the original biologically-based hardware.

So, if we assume that we can replace a neuron with an efficacious artificial version and still maintain our identity, our consciousness, any soul that might be present, or any element of “H” for that matter, then even if we replace two neurons with artificial versions, we should still have the same individual.  In fact, even if we replace every neuron, perhaps one neuron at a time, we would eventually be replacing the entire brain with an artificial version, and yet still have the same individual.  This person would now have a completely non-biologically-based “brain”.  In theory, their identity would be the same, and they would subjectively experience reality and themselves as usual.  Having gone this far, let’s assume that we replace the rest of the body with an artificial version as well.  Replacing the rest of the body, one part at a time, should be a far less significant change than replacing the brain, for the rest of the body is far less complex.

It may be true that the body serves as an integral homeostatic frame of reference necessary for establishing some kind of self-object basis of consciousness (e.g. Damasio’s theory of consciousness), but as long as our synthetic brain is sending and receiving the appropriate equivalent of sensory/motor information (i.e. through an interoceptive feedback loop, among other requirements) from the new artificial body, the model or map of the artificial body’s internal state provided by the synthetic brain should be equivalent.  It should also be noted that the range of conditions necessary for homeostasis varies far less, and is far less individualized, from one human body to another than the differences found between the brains of two different people.  This supports the idea that the brain is in fact the most important aspect of our individuality, and thus replacing the rest of the body should be significantly easier to accomplish and a far less critical change.  After replacing the rest of the body, we would have a completely artificial, non-biological substrate for our modified “human being”, or what many people would call a “robot”, or a system of “artificial intelligence” with motor capabilities.  This thought experiment seems to suggest at least one of the following implications:

  • Some types of robots can possess “H” (e.g. a soul, consciousness, free will, etc.), and thus the elements of “H” are not uniquely human, nor are they forever exclusive to humans.
  • Humans lose some or all of their “H” after some threshold of modification has taken place (most likely a modification of the brain).
  • “H”, at least as it is commonly defined, does not exist.

The first implication listed above would likely be roundly rejected by most people who believe in the existence of “H”, for several reasons: most people see robots as fundamentally different from living systems, they see “H” as applicable only to human beings, and they see a clear distinction between robots and human beings (although the claim that these distinctions exist has been theoretically challenged by this thought experiment).  The second implication sounds quite implausible (even if we assume that “H” exists), as it seems impossible to define when exactly any elements of “H” would be lost upon exceeding some seemingly arbitrary threshold of modification.  For example, would the loss of some element of “H” occur only after the last neuron was replaced with an artificial version?  If the loss of “H” did occur after some specific number of neurons were removed (or after the number of remaining neurons fell below some critical minimum), then what if the last neuron removed (the one that caused this critical threshold to be met) was biologically preserved and later re-installed, effectively reversing the last neuronal replacement procedure?  Would the previously lost “H” then return?

Thought Experiment #2: Replace a Non-Biological Neuronal Analog With a Real Neuron

We could look at this thought experiment (in terms of the second implication) in yet another way, by simply reversing its order.  For example, imagine that we built a robot from scratch that was identical to the robot obtained at the end of the first thought experiment, and then began to replace its non-biologically-based neuronal equivalents with actual biologically-based neurons, perhaps neurons that had each been taken from a human brain (say, from one or several cadavers) and preserved for such a task.  Then consider that we proceed to replace the rest of the robot’s “body”, again piecewise, with biologically-based parts until it completely matched the human being we began with in the first thought experiment.  Would or could this robot acquire “H” at some point, or be considered human?  There seems to be no biological reason to claim otherwise.

Does “H” exist?  If So, What is “H”?

I’m well aware of how silly some of these hypothetical questions and considerations sound; however, I find it valuable to follow the reasoning all the way through, in order to illustrate how plausible each of these implications is, and how plausible or valid “H” itself is.  In the case of the second implication (that humans lose some or all of “H” after some threshold of modification), if there’s no way to define or know when “H” is lost (or gained), then nobody can ever claim with certainty that an individual has lost their “H”, and so one would have to assume that no element of “H” has ever been lost (if one wants to err on the side of what some may call ethical or moral caution).  By that rationale, one is forced to accept the first implication (some types of robots can possess “H”, and thus “H” isn’t unique to humans).  Anyone who denies the first two implications is left only with the third option.

The third implication seems the most likely (that “H” as previously defined does not exist), though even this implication may be circumvented by noticing an implicit loophole: some or all elements of “H” may not be exactly what people assume them to be, and therefore “H” may exist in some other sense.  For example, what if we considered particular patterns themselves (the brain/neuronal configurations, patterns of brain waves, neuronal firing patterns, patterns of electro-chemical signals emanating throughout the body, etc.) to be the “immaterial soul” of each individual?  We could regard these patterns as immaterial if the physical substrate that implements them is irrelevant, or simply by noting that patterns of physical material states are not themselves physical materials.

This is analogous to the way the information contained in a book can be represented on paper, electronically, in multiple languages, and so on, without relying on any specific physical medium.  On this view, one could accept the first implication that robots or “mechanized humans” possess “H”, although it would also necessarily imply that the elements of “H” aren’t actually unique or exclusive to humans as they were initially assumed to be.  One could accept this first implication by noting that the patterns of information (or patterns of something, if we don’t want to call it information per se) that comprise the individual were conserved throughout the neuronal (or bodily) replacement in these thought experiments, and thus the essence or identity of the individual (whether “human” or “robot”) was preserved as well.
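The substrate-independence of information in the book analogy can be made concrete with a toy sketch (my own illustrative example, not part of the original argument): the same message is stored in three different “substrates” (a text string, raw bytes, and a base64-encoded transfer format), yet the recovered content, and therefore its cryptographic fingerprint, is identical in every case.

```python
import base64
import hashlib

# One piece of "information" held in three different physical/encoded
# "substrates": a Python string, raw UTF-8 bytes, and a base64 string.
message = "The essence is the pattern, not the substance."

as_bytes = message.encode("utf-8")       # binary representation
as_base64 = base64.b64encode(as_bytes)   # re-encoded transfer representation

# Recover the identical content from each substrate and fingerprint it.
digests = {
    hashlib.sha256(message.encode("utf-8")).hexdigest(),
    hashlib.sha256(as_bytes).hexdigest(),
    hashlib.sha256(base64.b64decode(as_base64)).hexdigest(),
}

# All three substrates carry the same pattern, so the set collapses to one.
print(len(digests))  # 1
```

The point of the sketch is simply that identity here tracks the pattern (the hash of the recovered content), not the particular medium in which the pattern happens to be stored.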

Pragmatic Considerations & Final Thoughts

I completely acknowledge that for this hypothetical neuronal replacement to truly reproduce normal neuronal function (even for just one neuron), the non-biologically-based version would, above and beyond the potential necessity of both a specific hardware type and configuration (as mentioned earlier), presumably also have to replicate the neuronal plasticity that the brain normally possesses.  In terms of brain plasticity, there are basically four known factors involved in neuronal change, sometimes referred to as the four R’s: regeneration, reconnection, re-weighting, and rewiring.  Clearly, any synthetic neuronal version would likely require some kind of malleable processing to accomplish at least some of these tasks (if not all of them to some degree), and possibly some form of nano-scale self-assembly if actual physical rewiring were needed.  The details of how this would be accomplished will become clearer over time as we learn more about the neuronal dynamic mechanisms involved (e.g. neural Darwinism or other means of neuronal differential reproduction, connectionism, Hebbian learning, DNA instruction, etc.).

I think the most important thing to gain from these thought experiments is a realization of how difficult it is to take the idea of souls or “H” seriously, given the incompatibility between the traditional conception of a concrete soul (or other element of “H”) and the well-established fluid, continuous nature of the material substrates with which they are purportedly correlated.  That is, all the “things” in this world, including any forms of life (human or not), are constantly undergoing physical transformation and change, and they possess seemingly arbitrary boundaries that are ultimately defined by our own categorical intuitions and subjective perception of reality.  If what one is really looking for in “H” is some form of constancy, essence, or identity in any of the things around us (let alone in human beings), it seems that it is the patterns of information (or, perhaps more accurately, the patterns of energy), along with their level of complexity or type, that ultimately constitute that essence and identity.  And if it is reasonable to conclude that the patterns of information or energy comprising a physical system aren’t equivalent to the constituent physical materials themselves, one could perhaps say that these patterns are a sort of “immaterial” attribute of a set of physical materials.  This seems to be as close to the concept of an immaterial “soul” as a physicalist or materialist could concede exists, since, at the very least, it involves a property of continuity and identity that somewhat transcends the physical materials themselves.