The Open Mind

Cogito Ergo Sum

CGI, Movies and Truth…

Watching Rogue One: A Star Wars Story, which I liked, though not nearly as much as the original trilogy (Episodes IV, V, and VI), got me thinking again about something I hadn’t considered since the most recent presidential election. As I watched Grand Moff Tarkin and Princess Leia, both characters made possible in large part by CGI (along with the help of actors Ingvild Deila and Guy Henry), I realized that, although it is still a long way off, it is inevitable (barring a nuclear world war or some other catastrophe that kills us all or sets us back to a pre-industrial age) that the pace of this technology will eventually lead to CGI that is completely indistinguishable from reality.

This means that the “fake news” issue that so many have been making noise about lately will one day take a new and ugly turn for the worse. Not only is video and graphics technology advancing rapidly enough to exacerbate this problem, but similar concerns are arising from voice-editing software. By gathering just a few seconds of sample audio from a person of interest, various programs are getting better and better at synthesizing that person’s speech in order to mimic them, putting whatever words into “their” mouth that one desires.

The irony is that even as we become more and more globally interconnected, technologically advanced, and knowledgeable as a species, it seems that each individual will become less and less able to know what is true and what isn’t across all the places they are electronically connected to. One reason for this, as the opening reference to Rogue One suggests, is that it will become increasingly difficult to judge the veracity of videos that go viral on the internet or through news outlets. We can imagine a video (or a series of videos) released on the news and across the internet depicting shocking events involving real world leaders or other famous people and places, events that could start a civil or world war, alter an election, or worse, with the caveat that those events were entirely manufactured by some Machiavellian warmonger or power-seeking elite.

Pragmatically speaking, we must still live our lives trusting what we see in proportion to the evidence we have, believing ordinary claims with a higher degree of confidence than extraordinary ones. We will still need to hold to the general rule that extraordinary claims require extraordinary evidence in order to meet their burden of proof. But once CGI can pass some kind of graphical Turing Test, it will become more difficult to trust certain forms of evidence (including in a court of law), so we’ll have to take that into consideration: actions whose consequences would be catastrophic if our assumptions turned out to rest on false evidence should require a correspondingly higher burden of proof.

This is by no means an endorsement of conspiracy theories generally, nor of any other anti-intellectual or dogmatic nonsense. We don’t want people to start doubting everything they see, nor to start doubting everything they don’t WANT to see (which would be a proverbial buffet for our cognitive biases and for the conspiracy theorists who exploit these epistemological flaws regularly), but we do need to take this changing technological factor into account in order to maintain a worldview based on proper Bayesian reasoning.
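To make the point about Bayesian reasoning and extraordinary claims a bit more concrete, here is a minimal sketch of Bayesian updating; the prior and likelihood numbers are purely illustrative assumptions of mine, chosen only to show how near-perfect CGI would weaken video as evidence.

```python
def posterior(prior, p_evidence_given_true, p_evidence_given_false):
    """Bayes' theorem: P(claim | evidence)."""
    numerator = p_evidence_given_true * prior
    denominator = numerator + p_evidence_given_false * (1 - prior)
    return numerator / denominator

# Ordinary claim: a modest prior, and faking the supporting video is hard,
# so genuine-looking footage raises our confidence substantially.
print(posterior(prior=0.5, p_evidence_given_true=0.9, p_evidence_given_false=0.1))    # ~0.90

# Extraordinary claim in a world of near-perfect CGI: a tiny prior, and fabricated
# footage is almost as easy to produce as genuine footage, so the same kind of
# video barely moves the posterior at all.
print(posterior(prior=0.001, p_evidence_given_true=0.9, p_evidence_given_false=0.5))  # ~0.002
```

The toy numbers illustrate the worry: once fabricated footage becomes nearly as likely as genuine footage, even dramatic video should barely move a rational observer’s credence in an extraordinary claim.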

On the brighter side, we are going to enjoy much of what these new CGI capabilities will bring us, because movies and visual entertainment generally are going to be revolutionized in many ways worth celebrating, including our use of virtual reality in its many forms (forms that we do, and will continue to, consciously and rationally desire). We just need to pay attention and exercise careful moral deliberation as we develop these technologies. Our brains simply didn’t evolve to easily avoid being duped by artificial realities like the ones we’re developing (we already get duped far too often within our actual reality), so we need to engineer our path forward in a way that better safeguards us from our own cognitive biases, so that we can maximize our well-being once this genie is out of the bottle.

Virtual Reality & Its Moral Implications

There’s a lot to be said about virtual reality (VR) in terms of our current technological capabilities, our likely prospects for future advancements, and the vast amount of utility that we gain from it.  But, as it is with all other innovations, with great power comes great responsibility.

While there are several types of VR interfaces on the market, used for video gaming or various forms of life simulation, they do have at least one commonality, namely the explicit goal of attempting to convince the user that what they are experiencing is in fact real in at least some sense.  This raises a number of ethical concerns.  While we can’t deny the fact that even reading books and watching movies influences our behavior to some degree, VR is bound to influence our behavior much more readily because of the sheer richness of the qualia and the brain’s inability to distinguish significant differences between a virtual reality and our natural one.  Since the behavioral conditioning schema that our brain employs has evolved to be well adapted to our natural reality, any virtual variety that increasingly approximates it is bound to increasingly affect our behavior.  So we need to be concerned with VR in terms of how it can affect our beliefs, our biases, and our moral inclinations and other behaviors.

One concern with VR is the desensitization to, or normalization of, violence and other undesirable or immoral behaviors. Many video games have been criticized over the years for this very reason, with the claim that they promote similar behaviors in the users of those games (especially younger users with more impressionable minds). These claims have been significantly bolstered by the American Psychological Association and the American Academy of Pediatrics, both of which have taken firm stances against children and teens playing violent video games, as a result of accumulated research and meta-studies showing a strong link between violent video gaming and increased aggression, anti-social behavior, and sharp decreases in moral engagement and empathy.

Thus, the increasingly realistic nature of VR, and the ever-growing range of capacities one has at one’s disposal within such a virtual space, are bound to exacerbate these types of problems. If people are able to simulate rape or pedophilia, among other morally reprehensible actions and social taboos, will they become more susceptible to actually engaging in these behaviors once they leave the virtual space and re-enter the real world? Even if they don’t become more likely to perform those behaviors, what does such a virtual escapade do to that person’s moral character? Are they more likely to condone those behaviors (even if they don’t participate in them directly), or to condone other behaviors that have some moral relevance or cognitive overlap with them?

On the flip side, what if it were possible to use VR as a therapeutic tool to help treat pedophilia or other behavioral problems? What if someone were able to simulate rape, pedophilia, or otherwise in order to reduce their chances of performing those acts in the real world? Hardly anyone would argue that a virtual rape or molestation is anywhere near as abhorrent or consequential as a real instance of such a crime, a horrific act committed against a real human being. While this may only apply to a small number of people, it is at least plausible that such a therapeutic utility would make the world a better place if it prevented an actual rape or other crime from taking place. If certain people have hard-wired impulses that would normally ruin their lives or the lives of others if left unchecked, then it would be prudent, if not morally obligatory, to do what we can to prevent such harms from taking place. So even though this technology could lead the otherwise healthy user to begin engaging in bad behaviors, it could also be used as an outlet of expression for those already afflicted with similar impulses. Just as VR has been used to help treat anxiety disorders, phobias, PTSD, and other pathologies, by exposing people to various stimuli that help them overcome their ills, so too may VR provide a treatment for other types of mental illness and aggressive predispositions, such as those related to murder, sexual assault, and so on.

Whether VR is used as an outlet for certain behaviors, preventing them from actually happening in the real world, or as a means of curing a person of those immoral inclinations (where the long-term goal is eventually to no longer need any VR treatment at all), there are a few paths here that could plausibly reduce crime and other harms. But beyond therapeutic uses, we need to be careful about how these technologies are used generally and how that usage will increasingly affect our moral inclinations.

If society chose to implement some kind of prohibition limiting the types of things people could do in these virtual spaces, that might be of some use, but beyond the fact that such a prohibition would likely be difficult to enforce, it would also be a form of selective prohibition that may not be justifiable. If one chose to prohibit simulated rape and pedophilia (for example) but not simulated murder or other forms of assault and violence, what would justify that selectivity? We can’t simply rely on an intuition that the former simulated behaviors are somehow more repugnant than the latter (and besides, many would say that murder is just as bad, if not worse). It seems instead that we need to better assess the consequences of each type of simulated behavior on our behavioral conditioning, to see whether at least some simulated activities should be prohibited while others are allowed to persist unregulated. On the other hand, if prohibiting this kind of activity is impractical, or can only be implemented by infringing on liberties we have good reasons to protect, then we need to think about counter-strategies, either to better inform people about these kinds of dangers and/or to make other VR products that help to encourage the right kinds of behaviors.

I can’t think of a more valuable use for VR than helping us cultivate moral virtues and other behaviors that are conducive to our well-being and to living a more fulfilled life: anything from reducing our prejudices and biases through exposure to various simulated “out-groups” (for example), to modifying our moral character in more profound ways through artificial realities that encourage the user to help others in need and to develop habits and inclinations that are morally praiseworthy. We can even use this technology (and already have, to some degree) to work through various moral dilemmas and our psychological responses to them without anybody actually dying or getting physically hurt. Overall, VR certainly holds a lot of promise, but it also poses a lot of psychological danger, making it incumbent upon us to talk more about these technologies as they continue to develop.

Co-evolution of Humans & Artificial Intelligence

In my last post, I wrote a little bit about the concept of personal identity in terms of what some philosophers have emphasized and gave my take on it. I wrote that post in response to an interesting blog post written by James DiGiovanna over at A Philosopher’s Take. James has written another post related to the possible consequences of integrating artificial intelligence into our societal framework, but rather than discussing personal identity as it relates to artificial intelligence, he discussed how advancements in machine learning and related fields are leading to the prospect of effective companion AI, or what he referred to as programmable friends. The main point he raised in that post was that programmable friends would likely have a very different relationship dynamic with us compared with our traditional (human) friends. James also spoke about companion AI in terms of their being laborers as well as friends, but for the purposes of this post I won’t discuss the laborer aspects of future companion AI (even if the labor aspect is what drives us to make companion AI in the first place). I’ll be limiting my comments here to the friendship or social dynamic aspects only. So what aspects of programmable friends should we start thinking about?

Well, for one, we don’t currently have the ability to simply reprogram a friend to be exactly as we want them to be, in order to avoid conflicts entirely, to share every interest we have, and so on; instead there is a give-and-take relationship dynamic that we’re used to dealing with. We learn new ways of behaving and of looking at the world, and even new ways of looking at ourselves, when we have friendships with people who differ from us in certain ways. Much of the expansion and beneficial evolution of our perspectives is the result of conflicts that arise between ourselves and our friends, where different viewpoints clash, often forcing a person to reevaluate their own position based on the value they place on the viewpoints of their friends. If we could simply reprogram our friends, as in the case of some future AI companions, what would this do to our moral, psychological, and intellectual growth? There would be some positive effects, I’m sure (from having less conflict in some cases and thus an increase in short-term happiness), but we’d be missing out on a host of interpersonal benefits that we gain from the types of friendships we’re used to having (and thus we’d likely have less overall happiness as a result).

We can see where evolution ties into all of this: we have evolved as a social species to interact with others who are more or less like us, so when we envision these possible future AI friendships, it should become obvious why certain problems would be inevitable, largely because of the incompatibility with our evolved social dynamic. To be sure, some of these problems could be mitigated by accounting for them in the initial design of the companion AI. In general, this could be done by making the AI more like humans in the first place, and it could be advertised as some kind of beneficial AI “social software package,” so that people looking to get companion AI would be inclined to choose this version even if they had the option of an entirely reprogrammable one.

Some features of such a “social software package” could include a limit on the ways the AI could be reprogrammed, such that only very serious conflicts could be avoided through reprogramming, but not all conflicts. The AI could also be designed to give weight to certain opinions, just as we do, and to be more assertive with regard to certain propositions. Once the AI has learned its human counterpart’s personality, values, opinions, and so on, it could even be programmed to intentionally challenge that human by offering different points of view and by using the Socratic method (at least from time to time). If people realized that they could gain wisdom, knowledge, tolerance, and various social aptitudes from their companion AI, I would think that would be a marked selling point.
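As a rough illustration of the idea (not a description of any real product), here is a minimal sketch of what such a “social software package” might encode; the class name, the reprogramming limit, the assertiveness weight, and the Socratic-challenge rate are all hypothetical parameters of my own.

```python
from dataclasses import dataclass

@dataclass
class SocialSoftwarePackage:
    """Hypothetical limits on how far a companion AI can be reprogrammed."""
    max_reprogram_requests: int = 3        # only a few personality changes allowed, ever
    assertiveness: float = 0.6             # 0 = always defers to the user, 1 = never yields
    socratic_challenge_rate: float = 0.2   # fraction of exchanges that push back with questions
    requests_used: int = 0

    def request_reprogram(self, conflict_severity: float) -> bool:
        """Permit a personality change only for very serious conflicts, and only a few times."""
        if conflict_severity < 0.8 or self.requests_used >= self.max_reprogram_requests:
            return False
        self.requests_used += 1
        return True

companion = SocialSoftwarePackage()
print(companion.request_reprogram(conflict_severity=0.3))  # False: minor disagreement, keep the friction
print(companion.request_reprogram(conflict_severity=0.9))  # True: a serious conflict, change permitted
```

The design choice being illustrated is simply that some interpersonal friction is preserved by default, rather than every conflict being reprogrammable away.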

Another factor that I think will likely play a role in mitigating the possible social-dynamic clash between (programmable) companion AI and humans is the fact that humans are also likely to become more and more integrated with AI technology generally. That is, as we continue to make advancements in AI technology, we are also likely to integrate much of that technology into ourselves, making humans more or less into cyborgs, i.e., cybernetic organisms. Given the path we’re already on, with all the smartphones, apps, and other gadgets and computer systems that have started to become extensions of ourselves, we can see that the next obvious step (which I’ve mentioned elsewhere, here and here) is to remove the external peripherals so that they are directly accessible via our consciousness, with no need to interface with external hardware. If we can access “the cloud” with our minds (say, via Bluetooth or the like), then the apps and all the fancy features can become a part of our minds, adding to the ways we are able to think, providing an internet’s worth of knowledge at our cognitive disposal, and so on. I can see this technology eventually allowing us to change our senses and perceptions, including an ability to add virtual objects that are amalgamated with the rest of the external reality we perceive (such as virtual friends that we see and interact with who aren’t physically present outside of our minds even though they appear to be).

So if we start to integrate these kinds of technologies into ourselves as we are also creating companion AI, then we may end up with the ability to reprogram ourselves alongside those programmable companions. In effect, our own qualitatively human social dynamic may start to change markedly, becoming more and more compatible with that of future AI. The way I think this will most likely play out is that we will try to make AI more like us as we try to make ourselves more like AI, co-evolving with one another, sharing advantages with one another, and eventually becoming indistinguishable from one another. Along this journey, however, we will also become very different from the way we are now, and after enough time passes we’ll likely change so much that we’d be unrecognizable to people living today. My hope is that as we use AI to improve our intelligence and increase our knowledge of the world generally, we will also continue to improve our knowledge of what makes us happiest (as social creatures or otherwise) and thus of how to live the best and most morally fruitful lives that we can. This will include improving our knowledge of social dynamics and of the ways we can maximize all the interpersonal benefits therein. Artificial intelligence may help us accomplish this, however paradoxical or counter-intuitive that may seem to us now.

The Illusion of Persistent Identity & the Role of Information in Identity

After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.

Personal Identity Concepts & Criteria

I think when most people talk about personal identity, they are referring to how they see themselves and how they see others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits. Basically, they see it as what makes a person unique and in some way distinguishable from another person. Even this rudimentary concept can be broken down into at least two parts, namely how we see ourselves (self-ascribed identity) and how others see us (which we could call the inferred identity of someone else), since the two are likely to differ. While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person. Roughly speaking, this amounts to whatever conditions or properties are both necessary and sufficient for a person at one point in time and a person at another point in time to be considered the same person, with a temporal continuity between those points in time.

Usually the criterion put forward for this personal identity is some form of spatiotemporal and/or psychological continuity. I certainly wouldn’t be the first person to point out that the question of which criterion is correct has already framed the debate with the assumption that a personal (numerical) identity exists in the first place, and that, even if it does exist, its criterion is something that could be determined in some way. While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common with all the persons we’ve interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity, along with a determinable criterion for it. At best, I think if one finds any kind of numerical identity for persons that persists over time, it is not going to be compatible with our intuitions, nor is it going to be applicable in any pragmatic way.

Since I mention pragmatism, I am sympathetic to Parfit’s views in the sense that, regardless of what one finds the criterion for numerical personal identity to be (if it exists), the only thing that really matters to us is psychological continuity anyway. So despite the fact that Locke’s view, that psychological continuity (via memory) was the criterion for personal identity, was shown to be based on circular and illogical arguments (per Butler, Reid, and others), I nevertheless applaud his basic idea. Locke seemed to be on the right track, in that psychological continuity (in some sense involving memory and consciousness) is really the essence of what we care about when defining persons, even if it can’t be used as a valid criterion in the way he proposed.

(Non) Persistence & Pragmatic Use of a Personal Identity Concept

I think that the search for, and the long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity. The pragmatic considerations behind the common and intuitive conceptions of personal identity have no doubt steered the debate over any possible criteria for helping to define it, and so we can still value those considerations even if a numerical personal identity doesn’t really exist (that is, even if it is nothing more than a pseudo-identity), and even if a diachronic numerical personal identity does exist but isn’t useful in any way.

If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment of time both physically and psychologically, then I tend to think that the self (that many people ascribe as the “agent” of our personal identity) is an illusion of some sort.  I tend to side more with Hume on this point (or at least James Giles’ fair interpretation of Hume) in that my views seem to be some version of a no-self or eliminativist theory of personal identity.  As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination.  This illusion results from our perceptions flowing from one to the next, with a barrage of changes taking place with this “self” over time that we simply don’t notice taking place — at least not without critical reflection on our past experiences of this ever-changing “self”.  The psychological continuity that Locke described seems to be the main driving force behind this illusory self since there is an overlap in the memories of the succession of persons.

I think one could say that if there is any numerical identity associated with the term “I” or “me”, it only exists for a short moment of time in one specific spatio-temporal slice; then, as the next perceivable moment elapses, what used to be “I” becomes someone else, even if the new person that comes into being is still referred to as “I” or “me” by a person possessing roughly the same configuration of matter in its body and brain as the previous person. Since neighboring identities have an overlap in accessible memory, including autobiographical memories, memories of past experiences generally, and memories pertaining to the evolving desires that motivate behavior, we shouldn’t expect this succession of persons to be noticed or perceived by the illusory self, because each identity has access to a set of memories that is sufficiently similar to the set of memories accessible to the previous or successive identity. And this sufficient degree of similarity in those identities’ memories allows for a seemingly persistent autobiographical “self” with goals.
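One toy way to picture this chain of overlapping memory sets, under my own simplifying assumption that memories can be treated as discrete items, is to measure the overlap between neighboring momentary “selves”:

```python
def overlap(a, b):
    """Jaccard similarity between two memory sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b)

# A chain of momentary "selves", each sharing most of its memories with its neighbor.
selves = [
    {"m1", "m2", "m3", "m4"},
    {"m2", "m3", "m4", "m5"},
    {"m3", "m4", "m5", "m6"},
    {"m9", "m10", "m11", "m12"},   # much later: almost nothing in common with the start
]

for earlier, later in zip(selves, selves[1:]):
    print(round(overlap(earlier, later), 2))     # 0.6, 0.6, 0.0

# Neighboring slices overlap heavily, so no succession is noticed from the inside,
# while slices far apart in the chain barely overlap at all.
print(round(overlap(selves[0], selves[-1]), 2))  # 0.0
```

The arbitrary numbers are only meant to picture the claim above: high neighbor-to-neighbor similarity sustains the illusion of one continuous self even when the ends of the chain share almost nothing.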

As for the pragmatic reasons for considering all of these “I”s and “me”s to be the same person and some singular identity over time, we can see that there is a causal dependency between each member of this “chain of spatio-temporal identities” that I think exists, and so treating that chain of interconnected identities as one being is extremely intuitive and also incredibly useful for accomplishing goals (which is likely the reason why evolution would favor brains that can intuit this concept of a persistent “self” and the near uni-directional behavior that results from it). There is a continuity of memory and behaviors (even though both change over time, in terms of both the number of memories and their accuracy), and this continuity allows for a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences. We behave as if we are a single person moving through time and space (and as if we are surrounded by other temporally extended single persons behaving in similar ways), and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent. Quite simply, referring to those different identities under one label, physically attached to or instantiated by something localized, allows that pragmatic pseudo-identity to persist over time so that various goals (whether personal or interpersonal/societal) can be accomplished.

“The Persons Problem” and a “Speciation” Analogy

I came up with an analogy that I think fits this concept well. One could analogize this succession of identities that get clumped into one bulk pragmatic pseudo-identity with the evolutionary concept of speciation. For example, a sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species, due to the high degree of similarity between neighboring members. The apparent difficulty lies in the fact that, at some point after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly, to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations transpiring eventually leads to what we call a new species. It’s difficult to define exactly when that speciation event happens (hence the species problem), and I think we have a similar problem with personal identity. Where does it begin and end? If personal identity changes over the course of a lifetime, when does one person become another? I can think of “me” as the same “me” that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that “I” am a completely different person now when compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.). There seems to have been an identity “speciation” event of some sort, even though it is hard to define exactly when it was.

Biologists have tried to solve their species problem by coming up with various criteria, at the very least for taxonomic purposes, but what they’ve wound up with at this point is several different species concepts, each effective for different purposes (e.g. the biological species concept, the morphological species concept, the phylogenetic species concept, etc.), with no single “correct” answer, since each is more or less useful depending on the situation. Similarly, some philosophers have had a persons problem that they’ve been trying to solve, and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).

The Role of Information in a Personal Identity Concept

Anyway, rather than attempt to solve the numerical personal identity problem, I think philosophers need to focus more on the importance of the concept of information and how it can be used to arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically). I think this is especially true once we take into account some of the concerns that James DiGiovanna brought up concerning the integration of future AI into our society.

If all of the beliefs, behaviors, and causal driving forces of a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework, since they will be based on the information content of the person’s identity rather than placing as much importance on numerical identity or on our intuitions about persisting people (intuitions that will be challenged by several kinds of foreseeable future AI scenarios).

To illustrate this point, I’ll address one of James DiGiovanna’s conundrums.  James asks us:

To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?

In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI — other than to appease humans that may be angry that they couldn’t themselves avoid punishment in this way, due to having a much slower and less effective means of reprogramming themselves.  I would say that the reprogrammed AI is guilty of the crime, but only if its reprogrammed memory still included information pertaining to having performed those past criminal behaviors.  However, if those “criminal memories” are now gone via the reprogramming then I’d say that the AI is not guilty of the crime because the information constituting its identity doesn’t match that of the criminal AI.  It would have no recollection of having committed the crime and so “it” would not have committed the crime since that “it” was lost in the reprogramming process due to the dramatic change in information that took place.

In the latter scenario, if the information constituting the identity of the destroyed AI was re-instantiated elsewhere, then I would say that it is in fact guilty of the crime, though not numerically guilty but rather qualitatively guilty (to differentiate between the numerical and qualitative personal identity concepts embedded in the concept of guilt). If twelve duplicates of this information were instantiated in new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime. What actions should be taken based on qualitative guilt? I think it means that the AI should be punished, or more specifically that the judicial system should perform the reconditioning required to modify its behavior as if it had committed the crime (especially if the AI believes/remembers that it has committed the crime), for the better of society. If this can be accomplished through reprogramming, then that would be the most rational thing to do, without any need for traditional forms of punishment.
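Here is a minimal sketch of how that distinction might be expressed, under my own simplifying assumption that an agent’s identity can be reduced to a comparable informational state (here, just a set of memories); it is only meant to illustrate the reasoning above, not to model any real system.

```python
def qualitatively_guilty(agent_memories: set, crime_record: str) -> bool:
    """Guilt tracks the information constituting the agent, not the hardware it runs on."""
    return crime_record in agent_memories

original     = {"memory:committed_crime_X", "memory:everything_else"}
reprogrammed = {"memory:everything_else"}            # criminal memories erased by the rewrite
duplicates   = [set(original) for _ in range(12)]    # twelve re-instantiations elsewhere

# The fully reprogrammed AI no longer carries the crime in its constituting information.
print(qualitatively_guilty(reprogrammed, "memory:committed_crime_X"))   # False

# Every duplicate carries the same information, so each is qualitatively guilty.
print(all(qualitatively_guilty(d, "memory:committed_crime_X") for d in duplicates))  # True
```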

We can analogize this with another thought experiment involving human beings. If we imagine a human who has had their memories changed so that they believe they are Charles Manson and have all of Charles Manson’s memories and intentions, then that person should be treated as if they were Charles Manson, and thus incarcerated/punished accordingly, in order to rehabilitate them or to protect the other members of society. This is assuming, of course, that we had reliable access to that kind of mind-reading knowledge. If we did, the information constituting the identity of that person would be what matters most, not what the actual previous actions of the person were, because the “previous person” was someone else, due to that gross change in information.

Artificial Intelligence: A New Perspective on a Perceived Threat

There is a growing fear in many people of the future capabilities of artificial intelligence (AI), especially as the intelligence of these computing systems begins to approach that of human beings.  Since it is likely that AI will eventually surpass the intelligence of humans, some wonder if these advancements will be the beginning of the end of us.  Stephen Hawking, the eminent British physicist, was recently quoted by the BBC as saying “The development of full artificial intelligence could spell the end of the human race.”  BBC technology correspondent Rory Cellan-Jones said in a recent article “Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.”  Hawking then said “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Hawking isn’t alone in this fear, and clearly the fear isn’t ill-founded. It doesn’t take a rocket scientist to realize that human intelligence has allowed us to overcome just about any environmental barrier we’ve come across, driving us to the top of the food chain. We’ve all seen the benefits of our high intelligence as a species, but we’ve also seen what can happen when that intelligence is imperfect, operates on incomplete or fallacious information, or lacks an adequate capability of accurately determining the long-term consequences of our actions. Because we have such a remarkable ability to manipulate our environment, that manipulation can be extremely beneficial or extremely harmful, as we’ve seen with the numerous species we’ve threatened on this planet (some having gone extinct). We’ve even threatened many fellow human beings in the process, whether intentionally or not. Our intelligence, combined with some elements of short-sightedness, selfishness, and aggression, has led to some pretty abhorrent products throughout human history, anything from the mass enslavement of others spanning back thousands of years to modern forms of extermination weaponry (e.g. bio-warfare and nuclear bombs). If AI reaches and eventually surpasses our level of intelligence, it is reasonable to consider the possibility that we may find ourselves on a lower rung of the food chain (so to speak), potentially becoming enslaved or exterminated by this advanced intelligence.

AI: Friend or Foe?

So what exactly prompted Stephen Hawking to make these claims?  As the BBC article mentions, “His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI…The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.  Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.”

Reading this article suggested to me another possibility or perspective that I don’t think a lot of people are considering with regard to AI technology. What if AI simply replaces us gradually, by actually becoming the new “us”? That is, as we progress further in Cyborg (i.e. cybernetic organism) technologies, using advancements similar to Stephen Hawking’s communication upgrade, we are ultimately becoming amalgams of biological and synthetic machines anyway. Even the technology that we currently operate through an external peripheral interface (like smartphones and all other computers) will likely become integrated into our bodies internally. Google Glass, voice recognition, and other technologies like those used by Hawking are certainly taking us in that direction. It’s not difficult to imagine one day being able to think about a particular question or keyword, having an integrated Bluetooth implant in our brain recognize the mental/physiological command cue, and successfully retrieving the desired information wirelessly from an online cloud or internet database of some form. Going further still, we will likely one day be able to take sensory information that enters the neuronal network of our brain and, once again, send it wirelessly to supercomputers stored elsewhere that are able to process the information with billions of pattern recognition modules. The 300 million or so pattern recognition modules currently in our brain’s neocortex would be dwarfed by this new peripheral-free, wirelessly accessible technology.

For those who aren’t very familiar with the function or purpose of the brain’s neocortex: we use its 300 million or so pattern recognition modules to notice patterns in the environment around us (and meta-patterns of neuronal activity within the brain), which allows us to recognize and memorize sensory data, and to think. Ultimately, we use this pattern recognition to accomplish goals, solve problems, and gain knowledge from past experience. In short, these pattern recognition modules are the primary physiological means of our intelligence. Thus, increasing the number of pattern recognition modules (as well as the number of hierarchies between different sets of them) will only increase our intelligence. Regardless of whether we integrate computer chips in our brains to do some or all of this processing locally, or use a wireless means of communication to an external supercomputer farm or otherwise, we will likely continue to integrate our biology with synthetic analogs in order to increase our capabilities.
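To illustrate the hierarchical idea in the crudest possible way, here is a toy sketch (my own construction, not a model of the neocortex or of any particular AI system) in which low-level recognizers detect simple patterns and a higher-level recognizer fires on a combination of their outputs:

```python
def letter_recognizers(signal: str) -> set:
    """Lowest level: each module fires when its letter appears in the raw signal."""
    return {ch for ch in "abcdefghijklmnopqrstuvwxyz" if ch in signal}

def word_recognizer(active_letters: set, word: str) -> bool:
    """Higher level: a module fires when all of its word's letters were recognized below."""
    return set(word) <= active_letters

active = letter_recognizers("...a??p p l e...")
print(active)                            # {'a', 'p', 'l', 'e'} (order may vary)
print(word_recognizer(active, "apple"))  # True: the higher-level pattern is recognized
print(word_recognizer(active, "pear"))   # False: 'r' was never detected at the lower level
```

Adding more modules and more layers of this kind, loosely speaking, is what the paragraph above means by increasing both the number of recognizers and the hierarchies between them.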

When we realize that a higher intelligence allows us to better predict the consequences of our actions, we can see that our increasing integration with AI will likely have incredible survival benefits. This integration will also catalyze further technologies that could never have been accomplished with biological brains alone, because we simply aren’t naturally intelligent enough to think beyond a certain level of complexity. As Hawking said regarding AI that could surpass our intelligence, “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”  Yes, but if that AI becomes integrated into us, then really it is humans who are finding a way to circumvent slow biological evolution with a non-biological substrate that supersedes it.

At this point I think it is relevant to mention something else I’ve written about previously: the advancements being made in genetic engineering and how they are taking us into our last and grandest evolutionary leap, a “conscious evolution,” in which we circumvent our own slow biological evolution through intentionally engineered design. As we gain more knowledge in the field of genetic engineering (combined with the increasing simulation and computing power afforded by AI), we will likely be able to catalyze our own biological evolution such that we can evolve quickly as we increase our Cyborg integrations with AI. So we will likely see genetic engineering capabilities developing in close parallel with AI advancements, with each field substantially contributing to the other and ultimately leading to our transhumanism.

Final Thoughts

It seems clear that advancements in AI are providing us with more tools to accomplish ever more complex goals as a species. As we continue to integrate AI into ourselves, what we now call “human” is simply going to change as we change. This would happen regardless, as human biological evolution continues its course into another speciation event, similar to the one that led to humans in the first place. In fact, if we wanted to maintain the way we are right now as a species, biologically speaking, it would actually require us to use genetic engineering to do so, because genetic differentiation mechanisms (e.g. imperfect DNA replication, mutations, etc.) are inherent in our biology. Thus, those who argue against certain technologies out of a desire to maintain humanity and human attributes must also realize that the very technologies they despise are in fact their only hope for doing so. More importantly, the goal of maintaining what humans currently are goes against our natural evolution, and we should embrace change, even if we embrace it with caution.

If AI continues to become further integrated into ourselves, forming a Cyborg amalgam of some kind, we may choose one day, as it advances to a certain point, to entirely eliminate the biological substrate of that amalgam, if it is both possible and advantageous to do so. Even if we maintain some of our biology and merely hybridize with AI, then Hawking was right to point out that “The development of full artificial intelligence could spell the end of the human race.”  But rather than a doomsday scenario like the one we saw in the movie The Terminator, with humans and machines at war with one another, the end of the human race may simply mean that we will end up changing into a different species, just as we’ve done throughout our evolutionary history. Only this time, it will be a transition from a purely biological evolution to a cybernetic hybrid variation. Furthermore, if it happens, it will be a transition that will likely increase our intelligence (and many other capabilities) to unfathomable levels, giving us an unprecedented ability to act with more knowledge of the consequences of our actions as we move forward. We should be cautious indeed, but we should also embrace our ongoing evolution and eventual transhumanism.