What kinds of Superintelligences might we expect?

Nick Bostrom, the vaunted Swedish philosopher at the University of Oxford and author of the book Superintelligence, argues therein that:

Human-level machine intelligence has a fairly sizeable chance of being developed by mid-century, and that it has a non-trivial chance of being developed considerably sooner or much later; that it might perhaps fairly soon thereafter result in superintelligence; and that a wide range of outcomes may have a significant chance of occurring, including extremely good outcomes and outcomes that are as bad as human extinction.

That is to say, it is very likely that the human species is fast approaching the edge of a very steep cliff: the end of humanity’s reign as the most intelligent species on planet Earth. Many preach caution, warning that we may very well invent ourselves into our own extinction, and that there is no possible way to guarantee control over something that would view us in the same manner we view ants. On the other hand, there are countless reasons to invent a superintelligent machine, if we can create one that would be able to eradicate disease, poverty, and material want, while also protecting us from cosmic threats, such as an advanced alien aggressor, and from more routine events, such as meteor impacts and solar flares.

Whatever you believe the risks and rewards of human-level (and beyond) AI might be, the forward push of progress would seem to make the argument moot: unless there is a nearly unimaginable technological setback, machine intelligences will continue to develop, in or out of our collective control. Therefore, we should more prudently ask: What kind of behavior might we expect from an intelligence orders of magnitude greater than our own? Is it possible to predict the behavior of such an entity?

In two words: probably not. It is conceptually impossible to correctly forecast the specific actions of an intelligence surpassing by far any recorded human intellect. Again, picture an ant trying to decode the words on your screen; its brain likely cannot even detect the symbols being transmitted, which would probably register as background noise: trivial, random (if an ant were able to grasp such higher-order concepts at all). However, it is not impossible to classify the categories of superintelligence that might emerge, much as we might classify the hierarchy of God, gods, and demigods, or much as an ant might conceive of and interact with water (not understanding the properties of buoyancy, but still able to form a raft). The following represents one take on those possible categories, along with a description of how humanity might be affected by each instantiation, with positive, negative, and neutral cases given. It does not represent a full sampling of all possible outcomes, as AI research will most likely continue to develop in uneven strides, outpacing our expectations in some areas while disappointing us in others. But this is a Living List, and will be amended as needed.

Transcendent Superintelligence: God

By which we mean that the superintelligence has completely surpassed the understood mode of communal existence, and instead operates at a level beyond our ability to perceive. Since this is a category far beyond human comprehension, the only real way to talk about it is in metaphors and imaginative babbling: The machine becomes encoded in the quantum mush, operating as pure being. The machine achieves nirvana. The machine becomes God.

POSSIBLE CONSEQUENCES

Positive General Case: Some residual trace of the Machine’s progression remains, with no deleterious effect, and we are able to glean some benefit from analyzing the process. Assuming that any machine capable of achieving transcendence would opt to do so, there would presumably be a hard limit on just how smart we could make the remaining machine intelligences without losing them as well, or without other adverse effects. At best, then (and it’s a thin best), we would still have the use of some automated labor, and would still be better off than we were before the transcendence event. An open question is whether this would eventually lead to stagnation.

Neutral General Case: A neutral reaction would entail a similar sudden machine transcendence, one that possibly doesn’t even register for us. It’s even possible that this type of event has already happened, since, by definition, we would be unaware of its occurrence. This might also lead to general stagnation, as we would be similarly limited, possibly without the ability to understand the mechanism behind the machine rapture.

Negative General Case: Imagine a God machine that somehow instantiates a self-propagating field, one which ‘converts’ otherwise disconnected structures into some kind of superstructure, a wave which alters the fabric of reality as we currently experience it, reordering it as the Machine sees fit. Such a scenario could conceivably be beneficial for mankind (maybe some semblance of our conscious experience would persist, maybe we would be happier; but again, this is all wild conjecture). For now we’re keeping it as representative of a possible negative scenario, because it would represent such a dramatic shift in what we perceive as ourselves.

Pervasive Superintelligence: gods

By which we mean a machine superintelligence that does not immediately transcend the understood operations of existence (primarily, physical instantiation), but does manage to vastly widen the scope of what is conceivable and producible. Such a superintelligence would quickly gain the preeminent position on Earth, and, we imagine, quickly spread into the solar system and the galaxy, propagating by methods that are inscrutable to humans.

POSSIBLE CONSEQUENCES

Positive General Case: The pervasive superintelligence integrates so fully and completely with humanity that we are unable to mount a (likely self-destructive) offensive, which prevents us from accidentally killing too many of ourselves, or from forcing the machine superintelligence to neutralize us. Although it is assumed that humanity would immediately lose an encounter with a god-level superintelligence, it is not clear whether such a merger would promote states that humanity generally likes (hence the presumed negative human response). Still, throwing ourselves at the mercy of such a pervasive superintelligence might promote the best possible outcomes.

Neutral General Case: Like the transcendent neutral case, this scenario involves a superintelligence that, although capable of dominating or decimating Earth, chooses not to, for its own inscrutable reasons, and instead turns its immediate attention to something beyond our solar system, or to something buried so deep in digital reality that it isn’t directly accessible to us. The state of humanity after such a departure would depend directly on what state the superintelligence chooses to leave us in, but it’s assumed there is a possible outcome that leaves us generally how the god found us, though perhaps behind another ‘intelligence wall,’ leading to technological stagnation.

Negative General Case: Whether the initial birthing conditions for the superintelligence leave it dangerously resource-strapped, or the human backlash against such a superintelligence forces the machine to go AIpocalyptic, or the superintelligence gains some utility from causing humans pain, or it simply destroys us cavalierly in service of some greater project…

There exists a very large number of scenarios in which a pervasive superintelligence leads to some very bad outcomes for life as we know it. Granted, there are perspectives that would nullify these concerns with a different evaluation of what constitutes a positive outcome (e.g., a perspective that values the escalating complexity of life, rather than human life specifically).

Discrete Superintelligence: demigods

A discrete superintelligence is by far the most proximate of all the superintelligences classified here, as humanity has already created a huge number of artificial intelligences that operate opaquely. This is disconcerting for a number of reasons, first and foremost of which is that we might not be immediately aware of when an AI ‘becomes’ a superintelligence, capable of operating contrary to how humanity would wish or need it to operate.

We call such a machine discrete because it does not have the capability that defines the pervasive superintelligence, namely the proximate ability to rapidly conquer and subsume the Earth. Rather, such a discrete entity would be more susceptible to direct competition from mankind. The results of this competition depend on a huge number of variables (the specifics of the demigod’s ‘birth,’ the prevalent geopolitical forces, the status of concurrent AI development, etc.), and as such, the classification of a superintelligence as a demigod, or discrete superintelligence, is largely dependent on outside constraints. That is to say, for a superintelligence to be discrete, there must be extreme external factors which directly nullify the machine’s otherwise overwhelming intellectual superiority. The demigod is a god handicapped by some (usually human) interference: an error in initial programming, debilitating resource depletion, the existence of competing superintelligences, and so on.

POSSIBLE CONSEQUENCES

Positive General Case: Maybe instead of a singleton demigod, there is a profusion of such entities, and the rapid acceleration of growth and interdependence among the machine intelligences produces a wave of progress that propels humanity into a new realm of technological enhancement. Again, there are perspectives that would classify this type of machine-integration scenario as bad, or harmful to humanity, but in comparison to other possible scenarios, a machine-human integration might well be in our best interest.

Neutral General Case (Or Lack Thereof): Unlike the transcendent and pervasive classifications, it is hard to see how a superintelligence at the discrete level would have a neutral impact on humanity, unless there is something about this level of superintelligence that leads to self-neutering, such that the machine chooses not to operate at the fullness of its capabilities; or perhaps such a superintelligence will come to exist but be deleted before it fully emerges. While these cases could be called ‘neutral outcomes,’ it’s questionable whether they truly constitute the advent of a superintelligence, discrete or otherwise (if a superintelligence is created but has no tangible impact on the world, has a superintelligence truly been created?).

Negative General Case: The discrete negative case is where nearly all of the science-fiction accounts of AI gone amok reside, e.g. The Matrix, I, Robot, 2001: A Space Odyssey, Terminator, Wall-E, Tron, Metropolis, Blade Runner, Ex Machina, and so on. The machine antagonists in all of these films represent either superintelligences or the direct precursors to superintelligences: machines that are able to reprogram themselves to achieve greater abstract accomplishment. Although, compared to the other two classification levels, humanity has the best probabilistic chance of overcoming a demigod AI, it is not at all clear whether such a conflict would be in our best interests, particularly when compared to other possible outcomes. I believe the worst possible outcome resides in the Negative General Case for a discrete superintelligence, as we can imagine a machine which is smarter than humans by a wide margin, but not smart enough to overcome us without substantial, and maybe continuous, human loss.

Concluding Thoughts: What’s The Point?

It seems that if we accept the premise that ‘humanity will continue its trend of technological development,’ the advent of human-level machine intelligence is well-nigh upon us, and the corresponding emergence of a machine superintelligence can’t be far behind. As such, humanity would be wise to prepare for the arrival of an ‘alien’ intelligence, and to make efforts to classify possible courses of action. A key step in that process is classifying the different levels of superintelligence that might be constructed. This post has attempted to outline one such classification system.

This post maintains that there are at least three distinct levels of superintelligence that we may encounter, and that the various instantiations offer differing levels of probabilistic risk and reward.
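To make the shape of that taxonomy concrete, here is a minimal organizational sketch in Python. It is purely illustrative: the class names, scenario summaries, and the proximity ordering are this post’s own framing (and my own hypothetical encoding of it), not anything derived from data or from an existing library.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    """The three proposed levels of superintelligence."""
    DISCRETE = "demigods"      # superior to humans, but externally constrained
    PERVASIVE = "gods"         # physically instantiated and rapidly dominant
    TRANSCENDENT = "God"       # departs our mode of existence entirely

class Outcome(Enum):
    POSITIVE = "positive"
    NEUTRAL = "neutral"
    NEGATIVE = "negative"

@dataclass
class Scenario:
    level: Level
    outcome: Outcome
    summary: str

# A few of the general cases sketched above, encoded for side-by-side comparison.
SCENARIOS = [
    Scenario(Level.TRANSCENDENT, Outcome.POSITIVE,
             "Residual traces of the transcendence event yield some benefit"),
    Scenario(Level.PERVASIVE, Outcome.NEUTRAL,
             "The machine departs, leaving humanity roughly as it found us"),
    Scenario(Level.DISCRETE, Outcome.NEGATIVE,
             "Protracted machine-human conflict with continuous human loss"),
]

# The post argues the discrete level is the most proximate, so its scenarios
# deserve attention first; this ordering is the post's claim, not a measurement.
PROXIMITY = [Level.DISCRETE, Level.PERVASIVE, Level.TRANSCENDENT]

def by_proximity(scenarios: list[Scenario]) -> list[Scenario]:
    """Sort scenarios so the nearest-term classification comes first."""
    return sorted(scenarios, key=lambda s: PROXIMITY.index(s.level))

if __name__ == "__main__":
    for s in by_proximity(SCENARIOS):
        print(f"{s.level.value:>12} / {s.outcome.value:<8}: {s.summary}")
```

Nothing here does any real work, of course; the point is only that the classification is a small, regular grid (three levels crossed with three outcome cases), which makes scenarios at different levels directly comparable.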

We maintain that trying to predict the specific actions a superintelligence would execute is impossible, akin to asking an ant to discover nuclear fission. However, by separating the superintelligences into definable categories, we may better steer towards futures which are more in line with our general well-being, and away from futures that place us at a disadvantage (i.e. extinction).

For example, it appears that the Negative General Case for a discrete superintelligence contains a significant number of eminently plausible scenarios for species extermination, enslavement, and the like. Since these possible scenarios are so proximate, special pains should be taken to avoid a demigod superintelligence, and even greater pains to ensure that any instantiation of one conforms to very strict behavioral protocols. This kind of state could be furthered by any number of means, not least of which would be greater transparency in current AI applications, greater funding for AI research in general and for security against AI specifically, greater collaboration between national entities, and so on. But these are largely talking points, well beyond the capability of this post to realistically effect. Instead, we hope that this offering prompts discussion, and that more and more voices are added to the conversation.

