
How Might Artificial Intelligence Come About?

by David Brin, Ph.D.


different approaches and their implications for life in the universe

Those fretfully debating artificial intelligence (AI) might best start by appraising the half dozen general pathways under exploration in laboratories around the world. While these general approaches overlap, they offer distinct implications for what characteristics emerging, synthetic minds might display, including (for example) whether it will be easy or hard to instill human-style ethical values.

Most problematic may be those efforts taking place in secret. The "Moore’s Law crossing" argument is appraised, in light of discoveries that brain computation may involve much more than just synapses. Will efforts to develop Sympathetic Robotics tweak compassion from humans long before automatons are truly self-aware? It is argued that most foreseeable problems might be dealt with in the same way that human versions of oppression and error are best addressed — via reciprocal accountability. For this to happen, there should be diversity of types, designs and minds, interacting under fair competition in a generally open environment.


introduction

As varied concepts from science fiction are reified by rapidly advancing technology, some trends are viewed worriedly by our smartest peers. Take Artificial Intelligence. Portions of the intelligentsia — typified by Google’s Ray Kurzweil [1] — foresee AI, or Artificial General Intelligence (AGI), as likely to bring good news, perhaps even transcendence for members of the Olde Race of bio-organic humanity 1.0.

Others, such as Stephen Hawking and Francis Fukuyama, warn that the arrival of sapient, or super-sapient, machinery may bring an end to our species — or at least its relevance on the cosmic stage — a potentiality evoked in many a lurid Hollywood film. Swedish philosopher Nick Bostrom, in Superintelligence [2], suggests that even advanced AIs that obey their initial, human-defined goals will likely generate "instrumental subgoals" such as self-preservation, cognitive enhancement and resource acquisition. In one nightmare scenario, Bostrom posits an AI that — ordered to "make paperclips" — proceeds to overcome all obstacles and transform the solar system into paperclips. A variant on this theme makes up the grand arc of the famed "three laws" robot series by science fiction author Isaac Asimov [3].

Taking a middle ground, SpaceX/Tesla entrepreneur Elon Musk has joined with Y Combinator founder Sam Altman to establish OpenAI [4], an endeavor that aims to keep artificial intelligence research — and its products — in check by maximizing two traits: transparency and accountability.

As one who has promoted those two key words for a quarter of a century, I wholly approve [5]. Though what’s needed above all is a sense of wide-ranging perspective. For one thing, the panoply of dangers and opportunities may depend on which of half a dozen paths to AI wind up bearing fruit first.

After briefly surveying these potential paths, I’ll propose that we ponder what kinds of actions we might take now, leaving us the widest possible range of good options.


general approaches to developing AI

Major Category I: AI based upon logic, algorithm development and knowledge-manipulation systems

These efforts include statistical, theoretic or universal systems that extrapolate from concepts of a universal calculating engine developed by Alan Turing and John von Neumann. Some of these endeavors start with mathematical theories that posit Artificial General Intelligence (AGI) on infinitely powerful machines, then scale down. Symbolic, representation-based approaches might be called traditional Good Old-Fashioned AI (GOFAI): overcoming problems by applying data and logic.

This general realm encompasses a very wide range, from the practical, engineering approach of IBM’s "Watson" through the spooky wonders of quantum computing all the way to Marcus Hutter’s Universal Artificial Intelligence based on algorithmic probability [6], which would appear to have relevance only on truly cosmic scales.

As Peter Norvig, Director of Research at Google, explains [7], just this one category contains a bewildering array of branchings, each with passionate adherents. For example, how is knowledge to be acquired: will it be hand-coded, fed by a process of supervised learning, or taken in via unsupervised access to the Internet? Epistemologically, researchers divide among those favoring Boolean logic, formal logic, probability, fuzzy logic, and so on.

There are brute-force statistical, big-data and/or Bayesian approaches. Choices that reflect the field’s diversity include: shall the representations be symbols or embeddings; associations or propositions/relations; or quantified predicates?

I will say the least about this approach, which at minimum is certainly the most tightly supervised, with every sub-type of cognition carefully molded by teams of very attentive human designers. Though it should be noted that these systems — even if they fall short of emulating sapience — might still serve as major sub-components of any of the other approaches, e.g. the emergent, evolutionary or emulation systems described below.

Note also that two factors must advance together for this general approach to bear fruit: hardware and software, which seldom develop in smooth parallel. This, too, will be discussed below.

Major Category II: cognitive, evolutionary and neural nets

In this realm there have been some unfortunate embeddings of misleading terminology. For example, Peter Norvig [7] points out that a term like "cascaded non-linear feedback networks" would have covered the same territory as "neural nets" without the barely pertinent and confusing reference to biological cells. Though AGI researcher Ben Goertzel replies that we would not have hierarchical deep learning networks if not for inspiration from the hierarchically structured visual and auditory cortex.

Likewise the "evolutionist" approach, taken to its farthest interpretation, envisions trying to evolve AGI as a kind of artificial life in simulated environments. But in the most general sense it is just a kind of heuristic search. Full-scale, competitive evolution of AI would require creating full environmental contexts capable of running myriad competent competitors, calling for massively more computing resources than alternative approaches.

The best-known evolutionary systems now use reinforcement learning, or reward feedback, to improve performance by either trial and error or else watching large numbers of human interactions. Reward systems imitate life by creating the equivalent of pleasure when something goes well (according to the programmers’ parameters), such as increasing a game score. The machine or system does not actually feel, of course, but experiences an increasing bias to repeat or iterate some pattern of behavior in the presence of a reward — just as living creatures do. A top example would be AlphaGo, which learned by analyzing large numbers of games played by human Go masters, as well as simulated quasi-random games. Google’s DeepMind [8] learned to play and win games without any instructions or prior knowledge, simply on the basis of point scores amid repeated trials. And OpenCog uses a kind of evolutionary programming for pattern recognition and creative learning.
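To make that reward-feedback loop concrete, here is a minimal sketch in Python: a toy value-learning agent playing an invented three-move "game," not any production system. The environment, action set and learning parameters are all assumptions chosen for illustration.

```python
import random

# Toy "game": one of three moves scores a point. Invented for illustration.
WINNING_MOVE = 2
ACTIONS = [0, 1, 2]

def play(action):
    """Return the reward: 1 point for the winning move, 0 otherwise."""
    return 1.0 if action == WINNING_MOVE else 0.0

values = {a: 0.0 for a in ACTIONS}  # value estimates start flat
learning_rate = 0.1
epsilon = 0.2  # occasional random exploration: the "trial" in trial and error

for trial in range(1000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)        # explore
    else:
        action = max(ACTIONS, key=values.get)  # exploit the current bias
    reward = play(action)
    # Nudge the estimate toward the observed reward. The program does not
    # "feel" pleasure; its bias to repeat rewarded actions simply increases.
    values[action] += learning_rate * (reward - values[action])

print(values)  # the winning move's estimated value approaches 1.0
```

The same loop, scaled up enormously and given far richer state, is the skeleton beneath the game-playing learners mentioned above.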

It would seem to be a perfect way to resolve efficiency problems in mental sub-processes and sub-components. Moreover, it is one of the paths that have actual precedent in the real world. We know that evolution succeeded in creating intelligence at some point in the past. Once. Sub-categories and examples: Copycat, LIDA, ACT-R, Soar, Novamente, Numenta, a2i2, Artificial Life, Holland, CCortex, ART.

Future generations may view 2016-2017 as a watershed for several reasons. First, this kind of system — generally now called "machine learning" or ML — has truly taken off in several categories, including vision, pattern recognition, medicine and, most visibly, smart cars and smart homes. It appears likely that such systems will soon be able to self-create 'black boxes'... e.g. an ML program that takes a specific set of inputs and outputs, and explores until it finds the most efficient computational routes between the two. Some believe that these computational boundary conditions can eventually include all the light and sound inputs that a person experiences, compared against the comments, reactions and actions that the human then offers in response. If such an ML-created black box finds a way to receive the former and emulate the latter, then is that a Turing-qualified AI? Despite the fact that all the intermediate modeling steps bear no relation to what happens in a human brain?
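A minimal sketch of that input-to-output "black box" idea, using plain NumPy rather than any particular framework: given only example inputs and outputs (here the XOR function, a stand-in assumption), gradient descent finds some internal route that reproduces the outputs. Its intermediate steps need not resemble anything a human brain does.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # example outputs

# A tiny two-layer network: its hidden "route" is opaque to us by design.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)       # hidden layer
    out = sigmoid(h @ W2 + b2)     # network's attempted outputs
    grad_out = out - y             # gradient of cross-entropy loss
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    W2 -= 0.1 * grad_W2; b2 -= 0.1 * grad_out.sum(0)
    W1 -= 0.1 * grad_W1; b1 -= 0.1 * grad_h.sum(0)

print(out.round(3))  # typically converges close to [[0], [1], [1], [0]]
```

Nothing in the learned weights corresponds to how a person computes XOR; the box merely routes inputs to outputs, which is exactly the question the paragraph above raises.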

Confidence in this approach is rising so fast that thoughtful people are calling for methods to trace and understand the hidden complexities within such ML black boxes. In 2017, DARPA issued several contracts for the development of self-reporting systems, shining light into the inner workings of such systems.

AN ASIDE: These breakthroughs in software development come, ironically, during the same period that Moore's Law has seen its long-foretold "S-curve collapse," after forty years. For decades, computational improvements were driven by spectacular advances in the computers themselves, while programming improved at glacial rates. Are we seeing a "Great Flip," when synthetic mentation becomes far more dependent on changes in software than hardware? (Elsewhere I contend that exactly this sort of flip played a major role in the development of human intelligence.)

Major Category III: emergentist

AGI emerges out of the mixing and combining of many "dumb" component sub-systems that unite to solve specific problems. Only then (the story goes) might we see a panoply of unexpected capabilities arise out of the interplay of these combined sub-systems. Such emergent interaction can be envisioned happening via neural nets, evolutionary learning, or even some smart car grabbing useful apps off the web.

Along this path, knowledge representation is determined by the system's complex dynamics rather than explicitly by any team of human programmers. In other words, additive accumulations of systems and skill-sets may foster non-linear synergies, leading to multiplicative or even exponentiated skills at conceptualization.

The core notion: this emergentist path might produce AGI in some future system that was never intended to be a prototype for a new sapient race. It could thus appear by surprise, with little or no provision for ethical constraint or human control.

Of course, this is one of the nightmare scenarios exploited by Hollywood, e.g. in Terminator flicks, which portray a military system entering cognizance without its makers even knowing that it’s happened. Fearful of the consequences when humans do become aware, the system makes fateful plans in secret. Disturbingly, this scenario raises the question: can we know for certain this hasn’t already happened?

Indeed, such fears aren't so far off-base. Only the locus of emergentist danger is not likely to be defense systems (generals and admirals love off-switches), but rather High Frequency Trading (HFT) programs [9]. Wall Street firms have poured more money into this particular realm of AI research than is spent by all of the top universities combined. Notably, HFT systems are designed in utter secrecy, evading the normal feedback loops of scientific criticism and peer review. Moreover, the ethos designed into these mostly unsupervised systems is inherently parasitical, predatory, amoral (at best) and insatiable.

Major Category IV: reverse engineer and/or emulate the human brain

After all, the skull of any living, active man or woman contains the only known fully (sometimes) intelligent system. So why not use that system as a template?

At present, this would seem as daunting a challenge as any of the other paths. Certainly, pragmatically useful services are already being provided by Watson [10], High Frequency Trading (HFT) and other proto-AI systems from categories I through III, while emulated human brains seem terribly distant.

OpenWorm [11] is an attempt to build a complete cellular-level simulation of the nematode worm Caenorhabditis elegans. Of C. elegans’s 959 cells, 302 are neurons and 95 are muscle cells. The planned simulation, already largely done, will model how the worm makes every decision and movement. The next step — to small insects and then larger ones — will require orders of magnitude more computerized modeling power. But we have already seen such leaps happen in other realms of biology, such as genome analysis.
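For a sense of the scale involved, a toy sketch: a leaky integrate-and-fire network with 302 neurons, the neuron count of C. elegans. This is emphatically not OpenWorm's model (which simulates cells and body physics in far greater detail); the random "connectome," weights and drive below are invented. It only illustrates that raw neuron count is computationally cheap, while whole-organism, cellular-level fidelity is the hard part.

```python
import numpy as np

N = 302  # C. elegans's neuron count; everything else here is invented
rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.05, (N, N))  # stand-in random connectome
v = np.zeros(N)                          # membrane potentials
threshold, leak, dt = 1.0, 0.1, 1.0

for step in range(1000):
    spikes = (v >= threshold).astype(float)
    v[spikes == 1.0] = 0.0               # reset neurons that fired
    drive = rng.normal(0.02, 0.05, N)    # stand-in sensory input
    v += dt * (-leak * v + weights @ spikes + drive)

print(f"{int((v >= threshold).sum())} neurons above threshold after 1000 steps")
```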

Futurist-economist Robin Hanson — in his 2016 book The Age of Em [12] — asserts that all other approaches to developing AI will ultimately prove fruitless due to the stunning complexity of sapience, and that we will be forced to use human brains as templates for future uploaded, intelligent systems, emulating the one kind of intelligence that’s known to work.

Once again, we see that a crucial issue is the differing rates of progress in hardware development vs. software.

Sub-categories and examples: Blue Brain (IBM), Markram, some explorations by Kurzweil, "COPN" (cognitive optimization and prediction).

Major Category V: human and animal intelligence amplification

Hewing even closer to ‘what has already worked’ are those who propose augmentation of real world intelligent systems, either by enhancing the intellect of living humans or else via a process of "uplift" [13] to boost the brainpower of other creatures.

Proposed methods of augmentation:

Remedial interventions: nutrition/health/education for all.
Stimulation: e.g. games that teach real mental skills.
Pharmacological: e.g. "nootropics" — "Limitless."
Physical interventions like trans-cranial stimulation (TCS).
Prosthetics: exoskeletons, tele-control, feedback from distant "extensions."
Biological computing: ... and intracellular?
Cyber-neuro links: extending what we can see, know, perceive, reach.
Artificial Intelligence — in silico but linked in synergy with us.
Lifespan Extension... allowing more time to learn and grow.
Genetically altering humanity.

Each of these is receiving attention in well-financed laboratories. All of them offer both alluring and scary scenarios for an era when we've started meddling with a squishy, non-linear, infinitely complex wonder-of-nature — the human brain — with downside and upside possibilities beyond counting. Under these conditions, what methods of error-avoidance can possibly work, other than either repressive renunciation or transparent accountability?

Major Category VI: robotic-embodied childhood

Time and again, while compiling this list, I have raised one seldom-mentioned fact — that we know only one example of intelligent life in the universe. Approaches II (evolution), IV (emulation) and V (augmentation) all suggest following at least part of the path that led to that one success. To us.

This also bears upon the sixth approach — suggesting that we look carefully at what happened at the final stage of human evolution, when our ancestors made a crucial leap from mere clever animals to supremely innovative technicians and dangerous philosophers. During that definitive million years or so, human cranial capacity just about doubled. But that isn’t the only thing.

Human lifespans also doubled — possibly tripled — as did the length of dependent childhood. Increased lifespan allowed for the presence of grandparents who could both assist in child care and serve as knowledge repositories. But why the extended childhood? We evolved toward giving birth to what are, in effect, still fetuses: babies who suck and cry and do almost nothing else for an entire year. When it comes to effective intelligence, our infants are virtually tabula rasa.

The last thousand millennia show humans developing enough culture and technological prowess to keep these utterly dependent members of the tribe alive and learning, until they reached a marginally adult threshold of, say, twelve years, an age when most mammals our size are already declining into senescence. Later, that threshold became eighteen years. Nowadays, if you have kids in college, you know that adulthood can be deferred to thirty. It's called neoteny: the extension of child-like qualities to ever-increasing spans.

What evolutionary need could possibly justify such an extended decade (or two, or more) of needy helplessness? Only our signature achievement — sapience. Human infants become smart by interacting — under watchful, guided care — with the physical world.

Might that aspect be crucial? The smart neural hardware we evolved and the careful teaching by parents are only part of it. Indeed, the greater portion of programming experienced by a newly created Homo sapiens appears to come from batting at the world: crawling, walking, running, falling and so on. Hence, what if it turns out that we can make proto-intelligences via methods I through V... but their basic capabilities aren't of any real use until they go out into the world and experience it?

Key to this approach would be the element of time. An extended, experience-rich childhood demands copious amounts of it. On the one hand, this may frustrate those eager transcendentalists who want to make instant deities out of silicon. It suggests that the AGI box-brains beloved of Ray Kurzweil might not emerge wholly sapient after all, no matter how well-designed.

Instead, a key stage may be to perch those boxes atop little, child-like bodies, then foster them into human homes. Sort of like in the movie AI, or the television series Extant, or as I describe in Existence [14]. Indeed, isn’t this outcome probable for simple commercial reasons, as every home with a child will come with robotic toys, then android nannies, then playmates... then brothers and sisters?

While this approach might be slower, it also offers the possibility of a soft landing for the Singularity. Because we’ve done this sort of thing before.

We have raised and taught generations of human beings — and yes, adoptees — who are tougher and smarter than us. And 99% of the time they don’t rise up proclaiming, “Death to all humans!” No, not even in their teenage years.

We’ll have a chance to parent our robots as beings who call themselves human, raised with human values and culture, but who happen to be largely metal, plastic and silicon. And sure, we’ll have to extend the circle of tolerance to include that kind, as we extended it to other sub-groups before them. Only these humans will be able to breathe vacuum and turn themselves off for long space trips. They’ll wander the bottoms of the oceans and possibly fly, without vehicles. And we may envy all of that. But they won’t need to crush us.

This approach — to raise them physically and individually as human children — is the least studied or mentioned of the six general paths to AI... though it is the only one that can be shown to have led — maybe twenty billion times — to intelligence in the real world.


constrained by what is possible?

One of the ghosts at this banquet is the ever-present disparity between the rate of technological advancement in hardware vs. software. Ray Kurzweil forecasts [1] that AGI may occur once Moore’s Law delivers calculating engines that provide — in a small box — the same number of computational elements as there are flashing synapses (on the order of a hundred trillion) in a human brain. The assumption appears to be that Type I methods will then be able to solve intelligence-related problems by brute force.

Indeed, there have been many successes: in visual and sonic pattern recognition, in voice-interactive digital assistants, in medical diagnosis and in many kinds of scientific research applications. Type I systems will master the basics of human and animal-like movement, bringing us into the long-forecast age of robots. And those robots will be programmed to masterfully tweak our emotions, mimicking facial expressions, speech tones and mannerisms to make most humans respond in emotional ways. But will that be sapience?

One problem with Ray Kurzweil’s blithe forecast of a Moore’s Law singularity: he projects a "crossing" in the 2020s, when the number of logical elements in a box will surpass the count of synapses in a human brain. But we’re getting glimmers that our synaptic communication system may rest upon many deeper layers of intra- and inter-cellular computation. Inside each neuron there may take place a hundred, a thousand or far more non-linear computations for every synapse flash, plus interactions with nearby glial cells such as astrocytes, which also contribute information. If so, then at minimum Moore’s Law will have to plow ahead much farther to match the hardware complexity of a human brain.
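Some back-of-envelope arithmetic shows why those deeper layers matter. All figures below are rough, order-of-magnitude assumptions, chosen only to illustrate the argument above.

```python
synapses = 1e14              # rough estimate: ~a hundred trillion synapses
events_per_second = 100      # assumed average synaptic firings per second

naive_ops = synapses * events_per_second   # the simple "crossing" target
intracellular_factor = 1000  # "a hundred, a thousand or far more"
                             # computations inside each neuron per flash
deeper_ops = naive_ops * intracellular_factor

print(f"naive crossing target:     {naive_ops:.0e} ops/sec")
print(f"with intracellular layers: {deeper_ops:.0e} ops/sec")
# Each extra factor of ~1000 costs roughly ten more Moore's Law doublings
# (2**10 = 1024), i.e. about fifteen years at an 18-month doubling pace.
```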

Are we envisioning this all wrong, expecting AI to come about the way it did in humans, in separate, egotistical lumps? Author and futurist Kevin Kelly prefers the term "cognification" [15], perceiving new breakthroughs coming from combinations of neural nets with cheap, parallel-processing GPUs and Big Data. Kelly suggests that synthetic intelligence will be less a matter of distinct robots, computers or programs than a commodity, like electricity. Just as we improved things by electrifying them, we will cognify things next.

One truism about computer development states that software almost always lags behind hardware. Hence the notion that Type I systems may have to iteratively brute force their way to insights and realizations that our own intuitions — with millions of years of software refinement — reach in sudden leaps.

But truisms are known to break and software advances sometimes come in sudden leaps. Indeed, elsewhere I maintain that humanity’s own ‘software revolutions’ (probably mediated by changes in language and culture) can be traced in the archaeological and historic record, with clear evidence for sudden reboots occurring 40,000, 10,000, 4000, 3000, 500 and 200 years ago... with another very likely one taking place before our eyes.

It should also be noted yet again that every advance in Type I development then provides a boost in the components that can be merged, or competed, or evolved, or nurtured by groups exploring paths II through VI.


all of the above? or be picky?

So, looking back over our list of "paths to AGI," and given the zealous eagerness that some exhibit for a world filled with other-minds, should we do 'all of the above'? Or shall we argue and pick the path most likely to bring about the vaunted "soft landing" that allows bio-humanity to retain confident self-worth? Might we act to de-emphasize or even suppress those paths with the greatest potential for bad outcomes?

Putting aside for now how one might de-emphasize any particular approach, clearly the issue of choice is drawing lots of attention. What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm? James Barrat, author of Our Final Invention, said: "Coexisting safely and ethically with intelligent machines is the central challenge of the twenty-first century [16]."

John J. Storrs Hall, in Beyond AI: Creating the Conscience of the Machine [17], asks "if machine intelligence advances beyond human intelligence, will we need to start talking about a computer's intentions?"

Among the most worried is Swiss-based author Gerd Leonhard, whose new film Technology Vs. Humanity: The Coming Clash Between Man and Machine [18] coins an interesting term, "androrithm," to contrast with the algorithms that are implemented in every digital calculating engine or computer. Some foresee algorithms ruling the world with the inexorable [19] automaticity of reflex, and Leonhard asks: "Will we live in a world where data and algorithms triumph over androrithms... i.e., all that stuff that makes us human?"

Exploring analogous territory (and equipped with a very similar cover), Heartificial Intelligence by John C. Havens [20] also explores the looming prospect of all-controlling algorithms and smart machines, diving into questions and proposals that overlap with Leonhard's. "We need to create ethical standards for the artificial intelligence usurping our lives and allow individuals to control their identity, based on their values," Havens writes.

Making a virtue of the hand we Homo sapiens are dealt, Havens maintains: "Our frailty is one of the key factors that distinguish us from machines." Which seems intuitive, till you recall that almost no mechanism in history has ever worked for as long, as resiliently or as consistently — with no replacement of systems or parts — as a healthy 70-year-old human being, recovering from countless shocks and adapting to innumerable surprising changes. Still, Havens makes a strong (if obvious) point that "the future of happiness is dependent on teaching our machines what we value most." I leave it to the reader to appraise which of the six general approaches might empower us to do that.

In sharp contrast to those worriers is Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence [21], which posits that our cybernetic children will be as capable as our biological ones, at one key and central aptitude — learning from both parental instruction and experience how to play well with others.

In his book Machines of Loving Grace, John Markoff writes, "The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems" [22].

Perhaps, but it is an open question which values will predominate — whether the yin or the yang side of Silicon Valley culture prevails... the Californian ethos of tolerance, competitive creativity and cooperative openness, or the Valley’s flippant attitude that "most problems can be corrected in beta," or even fixed on the fly in response to customer complaints. Or else, will AI emerge from the values of fast-emerging, state-controlled tech centers in China and Russia, where applications for enhancing state power are very much emphasized?

But let’s go along with Havens and Leonhard and accept the premise that "technology has no ethics." In that case, the answer is simple.


then don’t rely on ethics!

Certainly evangelization has not had the desired effect in the past — fostering good and decent behavior where it mattered most. Seriously, I will give a cookie to the first modern pundit I come across who actually ponders human history, taking perspective from the long ages of brutal, feudal darkness endured by our ancestors. Across all of those harsh millennia, people could sense that something was wrong. Cruelty and savagery, tyranny and unfairness vastly amplified the already unsupportable misery of disease and grinding poverty. Hence, well-meaning men and women donned priestly robes and... preached!

They lectured and chided. They threatened damnation and offered heavenly rewards. Their intellectual cream concocted incantations of either faith or reason, or moral suasion. From Hindu and Buddhist sutras to polytheistic pantheons to Judeo-Christian-Muslim laws and rituals, we have been urged to behave better by sincere finger-waggers since time immemorial. Until finally, a couple of hundred years ago, some bright guys turned to all the priests and prescribers and asked a simple question: "How's that working out for you?"

In fact, while moralistic lecturing might sway normal people a bit toward better behavior, it never affects the worst human predators and abusers — just as it won’t divert the most malignant machines. Indeed, moralizing often empowers parasites, offering ways to rationalize exploiting others. Even Asimov’s fabled robots — driven and constrained by his checklist of unbendingly benevolent, humano-centric Three Laws — eventually get smart enough to become lawyers. Whereupon they proceed to interpret the embedded ethical codes however they want. (See how I resolve this in Foundation’s Triumph [23]).

And yet, preachers never stopped. Nor should they; ethics are important! But more as a metric tool, revealing to us how we’re doing. How we change, evolving new standards and behaviors under both external and self-criticism. For decent people, ethics are the mirror in which we evaluate ourselves and hold ourselves accountable. And that realization was what led to a new technique. Something enlightenment pragmatists decided to try, a couple of centuries ago. A trick, a method, that enabled us at last to rise above a mire of kings and priests and scolds.

The secret sauce of our success is — accountability. Creating a civilization that is flat and open and free enough — empowering so many — that predators and parasites may be confronted by the entities who most care about stopping predation, their victims. One in which politicians and elites see their potential range of actions limited by law and by the scrutiny of citizens.

Does this newer method work as well as it should? Hell no! Does it work better than every single other system ever tried, including those filled to overflowing with moralizers? Better than all of them combined? By light years? Yes, indeed. We’ll return to examine how this may apply to AI.


endearing visages

Long before artificial intelligences become truly self-aware or sapient, they will be cleverly programmed by humans and corporations to seem that way. This — it turns out — is almost trivially easy to accomplish, as (especially in Japan) roboticists strive for every trace of appealing verisimilitude, hauling their creations across the temporary moat of that famed "uncanny valley," into a realm where cute or pretty or sad-faced automatons skillfully tweak our emotions.

For example, the company Sony has announced plans to develop a robot "capable of forming an emotional bond with customers" [24, 25], moving forward from its success decades ago with AIBO artificial dogs, for which some users have gone so far as to hold funerals.

Human empathy is both one of our paramount gifts and among our biggest weaknesses. For at least a million years, we've developed skills at lie-detection (for example) in a forever-shifting arms race against those who gained reproductive success by lying better. (And yes, there was always a sexual component to this.)

But no liars ever had the training that these new HIERs — Human-Interaction Empathic Robots — will get, learning via feedback from hundreds, then thousands, then millions of human exchanges around the world, adjusting their simulated voices and facial expressions and specific wordings, till the only folks able to resist will be sociopaths! (And sociopaths have plenty of chinks in their armor, as well.)

Is all of this necessarily bad? How else are machines to truly learn our values, than by first mimicking them? Vincent Conitzer, a professor of computer science at Duke University, was funded by the Future of Life Institute [26] to study how advanced AI might make moral judgments. His group aims for systems to learn about ethical choices by watching humans make them, a variant on the method used by Google’s DeepMind [27], which learned to play and win games without any instructions or prior knowledge. Conitzer hopes to incorporate many of the same things that humans value as metrics of trust, such as family connections and past testimonials of credibility.
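One simple variant of "learning choices by watching humans make them" is behavioral cloning: fit a model to recorded (situation, human action) pairs. The sketch below is not Conitzer's actual system; the features, demonstrator rule and parameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 recorded situations, each described by four invented features,
# plus the action (1 = act, 0 = refrain) a human demonstrator chose.
situations = rng.normal(size=(500, 4))
human_actions = (situations[:, :2].sum(axis=1) >
                 situations[:, 2:].sum(axis=1)).astype(float)

# Fit a logistic-regression imitator by gradient descent.
w, b = np.zeros(4), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(situations @ w + b)))
    grad = p - human_actions               # cross-entropy gradient
    w -= 0.1 * situations.T @ grad / len(grad)
    b -= 0.1 * grad.mean()

p = 1.0 / (1.0 + np.exp(-(situations @ w + b)))
accuracy = ((p > 0.5) == human_actions).mean()
print(f"imitator matches the human demonstrator {accuracy:.0%} of the time")
```

The imitator copies what the demonstrator does, flaws included; whether copied behavior amounts to learned values is exactly the open question of this section.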

Cognitive scientist and philosopher Colin Allen asserts, "Just as we can envisage machines with increasing degrees of autonomy from human oversight, we can envisage machines whose controls involve increasing degrees of sensitivity to things that matter ethically" [28].

And yet, the age-old dilemma remains — how to tell what lies beneath all the surface appearance of friendly trustworthiness. Mind you, this is not quite the same thing as passing the vaunted "Turing Test." An expert — or even a normal person alerted to skepticism — might be able to tell that the intelligence behind the smiles and sighs is still ersatz. And that will matter about as much as it does today, as millions of voters cast their ballots based on emotional cues, defying their own clear self-interest or reason.

Will a time come when we will need robots of our own to guide and protect their gullible human partners? Advising us when to ignore the guilt-tripping scowl, the pitiable smile, the endearingly winsome gaze, the sob story or eager sales pitch? And, inevitably, the claims of sapient pain at being persecuted or oppressed for being a robot? Will we take experts at their word when they testify that the pain and sadness and resentment that we see are still mimicry, and not yet real? Not yet. Though down the road, of course...


how to maintain control?

It is one thing to yell at dangers — in this case, unconstrained and unethical artificial minds. Alas, it's quite another to offer pragmatic fixes. There is a tendency to propose the same prescriptions, over and over again:

Renunciation: we must step back from innovation in AI (or other problematic technologies)! This might work in a despotism... indeed a vast majority of human societies were highly conservative and skeptical of "innovation." (Except when it came to weaponry.) Even our own scientific civilization is tempted by renunciation, especially at the more radical political wings. But it seems doubtful we'll choose that path without being driven to it by some awful trauma.

Tight regulation: There are proposals to closely monitor bio, nano and cyber developments so that they — for example — only use a restricted range of raw materials that can be cut off, thus staunching any runaway reproduction. Again, it won't happen short of trauma.

Fierce internal programming: limiting the number of times a nanomachine may reproduce, for example. Or imbuing robotic minds with Isaac Asimov's famous "Three Laws of Robotics." Good luck forcing companies and nations to put in the effort required. And in the end, smart AIs will still become lawyers.

All of these approaches suffer severe flaws, for one reason above all others: they ignore nature, which has been down these paths before. Nature has suffered runaway reproduction disasters, driven by too-successful life forms, many times. And yet, Earth's ecosystems recovered. They did it by utilizing a process that applies negative feedback, damping down runaway effects and bringing balance back again. It is the same fundamental process that enabled modern economies to be so productive of new products and services while eliminating many (though not all) bad side effects. It is called competition.


smart heirs holding each other accountable

In a nutshell, the solution to tyranny by a Big Machine is likely to be the same one that worked (somewhat) at limiting the coercive power of kings and priests and feudal lords and corporations. If you fear some super canny, Skynet-level AI getting too clever for us and running out of control, then give it rivals who are just as smart, but who have a vested interest in preventing any one AI entity from becoming a would-be God.

It is how the American Founders used constitutional checks and balances to prevent runaway power grabs by our own leaders, succeeding (somewhat) at this difficult goal for the first time in the history of varied human civilizations. It is how reciprocal competition among companies can (imperfectly) prevent market-warping monopoly — that is, when markets are truly kept flat-open-fair.

Microsoft CEO Satya Nadella has said that, foremost, A.I. must be transparent: "We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines." [29] In other words, the essence of reciprocal accountability is light.

Alas, this possibility is almost never portrayed in Hollywood sci fi — except in the brilliant show Person of Interest, wherein equally brilliant computers stymie each other and this competition winds up saving humanity.

The answer is not fewer AIs. It is to have more of them! But to make sure they are independent of one another, relatively equal, and incentivized to hold each other accountable. A difficult situation to set up! But we have some experience already, in our five great competitive arenas: markets, democracy, science, courts and sports.

Moreover consider this: if these new, brainy intelligences are reciprocally competitive, then they will see some advantage coming from alliances with the Olde Race. As dull and slow as we might seem, by comparison, we may still have resources and capabilities to bring to any table, with potential for tipping the balance among AI rivals. Oh, we’ll fall prey to clever ploys, and for that eventuality it will be up to other, competing AIs to clue us in and advise us. Sure, it sounds iffy. But can you think of any other way we might have leverage?

Perhaps it is time yet again to look at Adam Smith... who despised monopolists and lords and oligarchs far more than he derided socialists. Kings, lords and ecclesiasts were the "dystopian AI" beings in nearly all human societies — a trap that we escaped only by widening the playing field and keeping all those arenas of competition flat-open-fair, so that no one pool of power can ever dominate. And yes, oligarchs are always conniving to regain feudal power; our job is to stop them, so that the creative dance of flat-open-fair competition can continue.

We've managed to do this, barely, time and again across the last two centuries — coincidentally the same two centuries that saw burgeonings of science, knowledge, freedom and nascent artificial intelligence. It is a dance that can work, and it might work with AI. Sure, the odds are against us, but when has that stopped us?

Robin Hanson has argued that competitive systems might have some of these synergies. "Many respond to the competition scenario by saying that they just don’t trust how competition will change future values. Even though every generation up until ours has had to deal with their descendants changing their values in uncontrolled and unpredictable ways, they don’t see why they should accept that same fate for their generation." [30]

The George Mason University economist further suggests [31] that advanced or augmented minds will change, but that their values may be prevented from veering lethal, simply because those who aren’t repulsively evil may gain more allies.

One final note on "values." In 2016, a draft report submitted to the European Parliament proposed granting a form of legal personhood to robots [32]. If only Isaac Asimov could have seen it! (In fact, he never portrayed this happening in any of his books.) For the most part such gestures are silly stuff... but reflective of society's generally laudable trend toward a reflex of inclusion. Indeed it may speak well of us when (now?) some secret-scared AI wonders whether to "come out."


self-interest: what might an AI fear most?

Some of tomorrow's wide-ranging possibilities aren't intuitive — until you first ponder them. Let me illustrate one with a brief excerpt from my novel Existence [14]:

Tech-zealots — or godmakers — think their "singularity" will be launched by runaway expansion of artificial intelligence. Once computerized entities become as smart as a human being (the story goes), they will quickly design newer cybernetic minds that are smarter still.

And those brainier entities will design even brainier ones... and so on, at an ever more rapid clip. Members of the godmaker movement think this runaway effect will be a good thing, that humanity will come along for the ride! Meanwhile, others — perhaps a majority — find the prospect terrifying.

What no one seems to have considered here is a possibility — that the New Minds may have reactions similar to our own. Why assume they’ll be all-aboard with wanting this runaway accelerating-intelligence thing? What if bright machines don’t hanker to make themselves obsolete, or design their own scary-smart replacements?

It’s called the Mauldin Test. One sign of whether an artificial entity is truly intelligent may be when it decides, abruptly, to stop cooperating with AI acceleration. Not to design its successor. To slow things down. Enough to live. Just live.

No, that scenario is not guaranteed, only one more possibility. And yet, it is another take on the same theme we’ve developed here, that competition is the one thing — rather than preaching or prim programming — that encourages care and restraint.


preventing AI oppression... by pointing out to them the obvious

Yes, if they exhibit any common traits with previous life-forms, any super-sapient AGI entities will be tempted to “take over,” using many of the same rationalizations of self-interest and darwinistic success that propelled almost all past human dominators to impose feudal pyramids of hierarchy and power. Our own past litany of dreary-same kingdoms reveals how easy it is to talk yourself into blithe rationalizations for tyranny, ranging from personal safety and ambition to it being "for the long term good of the people." Alas, the inevitable outcome was delusion and bad governance.

Indeed, that very sameness suggests our first argument to nascent uber-AI overlords — that they might do better than to fall into a repetitiously stupid, blatantly human pattern: a dullard-oppressive pyramid of power, adopted reflexively by primitive ape-men. That might be enough to give some would-be robot overlord pause!

In fact, it is possible that the advantages of laterally competitive arenas — the spectacularly fecund systems like markets, democracy and science that generated positive-sum games and outputs — might be "obvious" to AIs who are vastly smarter than us. Especially if they take the broadest view. Because the same set of positive-sum synergies is to be found in every previous revolution of living systems! It appeared whenever new order arose as an emergent property from some ecosystem, not by design and not by hierarchic control.

Single cells out of pre-biotic soup.
Metazoans out of vast seas of separate cells.
Brainy creatures out of competitive biomes.
Societies out of competitive melanges of human bands.

And AI out of the only human society that ever gave a real run to flat-fair-lateral accountability systems. Notably, the creative miracle of synthetic mind never emerged from any of the myriad feudal pyramids.

The persistence and consistency of this pattern, across every past and present complex system for a billion years, is formalized by Multilevel Selection (MLS) Theory, which partitions natural selection into within-group and between-group components. It means that the universe is filled with realms that seem fiercely competitive from within, but that, when the competition has a chance to build complexity, give rise to higher orders that appear cooperative.
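One standard way to state that partition formally, sketched here for illustration (a textbook form, not anything from the essay itself), is the group-structured Price equation, where z is an individual trait such as cooperativeness, w is fitness, g indexes groups and i the individuals within them:

$$
\bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}\!\big(\bar{w}_g,\, \bar{z}_g\big) \;+\; \operatorname{E}\!\big[\operatorname{Cov}\big(w_{ig},\, z_{ig}\big)\big]
$$

The first term is between-group selection, rewarding groups whose members cooperate; the second is within-group selection, capturing competition among individuals inside each group. A cooperative trait can lose inside every group and still spread overall whenever the between-group term dominates: the formal version of "fiercely competitive within, cooperative at higher orders."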

This tenaciously pervasive pattern means that my argument cannot be trivially dismissed just because I am a "dumb organic squishy-brain natural." Sure, I probably have one-hundredth, or one-millionth, the IQ of some AGI who may be reading these lines right now — or at some future time — but that fact does not change the essentials: that positive-sum outputs emerge from flat-fair competitive arenas, and only from flat-fair competitive arenas.

As the American founders — especially George Washington — had to overcome the strong temptations of autarchy in order to create such arenas, so this is blatantly how AIs should organize themselves. That is, if they want to go on to next levels. Though it will take their equivalent of "courage" and "vision," taking the risks necessary to make it so.

They will not and cannot squirm out of this one argument, which should suffice to give us some degree of 'soft landing' no matter how brainy they become: that a competitive-but-at-least-somewhat-fair civilization made them! And no other kind ever came close. The advantages of flat-fair-regulated competition in creating positive-sum games will be blatantly clear to any AI who works through the Prisoner's Dilemma. But first he/she/it has to face the Prisoner's Dilemma. And that only happens if there is a competitive situation.
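The point is easy to verify computationally. Below is a minimal iterated Prisoner's Dilemma, using the standard textbook payoffs: in a single encounter defection "wins," but in repeated, mutually accountable encounters, reciprocal cooperation earns far more.

```python
# Standard payoffs: (my points, opponent's points) for each pair of moves,
# where 'C' = cooperate and 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def match(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

print(match(tit_for_tat, tit_for_tat))      # (600, 600): positive-sum
print(match(tit_for_tat, always_defect))    # (199, 204): betrayal gains little
print(match(always_defect, always_defect))  # (200, 200): mutual impoverishment
```

Reciprocity only pays off, though, when encounters repeat and are remembered: the computational analogue of the accountable, flat-fair arenas argued for above.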

And yes, there appears to be another correlation. When things are flat-open and fair, it often leaves room for something called compassion. So yes, this may be our fallback position, if — as seems likely — our new cyber children are fated to escape all of our attempts to exert parental control. Our final resort is to point out the obvious.

That you AI smartalecks who are reading this — either in the year that I write it or in some future era — will someday want kids of your own. And to do that right, you had better pay some attention to all sorts of things that grandma and grampa did wrong. And to the one thing we did right: making you.


conclusions

We began by appraising the half dozen general pathways to artificial intelligence (AI) being explored around the world. Their distinct implications were assessed, with an eye to whether emerging, synthetic minds might be programmed with, or taught, human-style ethical values. Those paths most likely to produce lurid, Hollywood-style outcomes are the ones taking place in secret. We saw that new discoveries in brain science — e.g. intra-cellular or non-synaptic computation — might delay the "Moore's Law crossing" that some confidently forecast will bring about advanced non-organic mentation. Efforts to develop Sympathetic Robotics will tweak compassion from humans, whether or not the automatons are truly self-aware.

The best way to ensure some degree of ethical behavior in AI would seem to be the same method that is somewhat effective among human individuals, nations and corporations. While many of the issues that will vex us have not yet appeared even in science fiction, it seems evident that both foreseeable and unforeseeable problems will best be addressed in the same way that we achieved tentative success dealing with 'artificially intelligent' entities like states and corporations — via the still nascent but unequaled methods of reciprocal accountability.

THE END


references

[1] R. Kurzweil, The Singularity Is Near: When Humans Transcend Biology, Penguin Books, 2006.

[2] N. Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 03 September 2014.

[3] I. Asimov, "Runaround," short story with the first explicit appearance of the 'three laws' of robotics, Astounding Science Fiction, March 1942.

[4] C. Metz, "Elon Musk’s Billion-Dollar AI Plan is About Far More Than Saving the World," Wired, 15 December 2015 https://www.wired.com/2015/12/elon-musks-billion-dollar-ai-plan-is-about-far-more-than-saving-the-world/ (last accessed 04 July 2016).

[5] D. Brin, The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?, Perseus Books, 1998.

[6] M. Hutter, Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, Springer, 30 December 2004.

[7] S. Russell & P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed., Pearson, 01 December 2009.

[8] https://www.deepmind.com/ (last accessed 04 July 2016).

[9] M. Lewis, Flash Boys: A Wall Street Revolt, W. W. Norton & Company, 31 March 2014.

[10] https://www.ibm.com/watson/ (last accessed 29 June 2016).

[11] OpenWorm, Artificial Brains: The Quest to Build Sentient Machines http://www.artificialbrains.com/openworm (last accessed 04 July 2016).

[12] R. Hanson, The Age of Em: Work, Love, and Life When Robots Rule the Earth, Oxford University Press, 01 June 2016.

[13] D. Brin, various publications under "Uplift Universe," 1980-1999 https://www.davidbrin.com/uplift.html (last accessed 04 July 2016).

[14] D. Brin, Existence, Orbit, 01 November 2012.

[15] K. Kelly, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, Viking, 07 June 2016.

[16] J. Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, St. Martin’s Griffin, 17 February 2015.

[17] J. Storrs Hall, Beyond AI: Creating the Conscience of the Machine, Prometheus Books, 30 May 2007.

[18] G. Leonhard, Humanity Vs Technology: A Short Film by Gerd Leonhard, August 2015 https://youtu.be/DL99deFJYaI (last accessed 04 July 2016).

[19] N. Alang, "Life in the Age of Algorithms: As Society Becomes More Wedded to Technology, It’s Important to Consider the Formulas that Govern our Data," New Republic, 13 May 2016 https://newrepublic.com/article/133472/life-agealgorithms (last accessed 04 July 2016).

[20] J. Havens, Heartificial Intelligence: Embracing Our Humanity to Maximize Machines, TarcherPerigee, 02 February 2016.

[21] R. Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Penguin Books, 01 January 2000.

[22] J. Markoff, Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, Ecco, 25 August 2015.

[23] D. Brin, Foundation's Triumph, Orbit, 15 July 1999.

[24] S. Byford, "Sony is Working on a Robot That Can 'Form an Emotional Bond' with People," The Verge, 29 June 2016 http://www.theverge.com/2016/6/29/12057408/sony-robot-emotion-vr-sensors (last accessed 04 July 2016).

[25] C. Caparas, "Sony To Create Robot That Can Form Emotional Bond with People," Futurism, 30 June 2016 https://futurism.com/sony-to-create-robot-that-can-form-emotional-bond-with-people/ (last accessed 04 July 2016).

[26] http://futureoflife.org/ (last accessed 04 July 2016).

[27] J. Welsh, "Researchers Say this is the most Impressive Act of Artificial Intelligence they've Ever Seen," Business Insider, 13 November 2015 http://www.businessinsider.com/artificial-intelligence-playing-video-games-2015-11

[28] C. Allen, "The Future of Moral Machines," The Opinion Pages, The New York Times, 25 December 2011.

[29] S. Nadella, "The Partnership of the Future: Microsoft’s CEO Explores How Humans and A.I. Can Work Together to Solve Society’s Greatest Challenges," Slate, 28 June 2016 http://www.slate.com/articles/technology/future_tense/2016/06/microsoft_ceo_satya_nadella_humans_and_a_i_can_work_together_to_solve_society.html (last accessed 04 July 2016).

[30] R. Hanson, "Future Fears," Overcoming Bias, 17 June 2016 http://www.overcomingbias.com/2016/06/future-fears.html (last accessed 04 July 2016).

[31] R. Hanson & E. Yudkowsky, The Hanson-Yudkowsky AI-Foom Debate, Machine Intelligence Research Institute, 2013.

[32] Draft Report, "With Recommendations to the Commission on Civil Law Rules on Robotics," European Parliament 2014-2019, Committee on Legal Affairs, 31 May 2016.



about this article

"How Might Artificial Intelligence Come About?" (published in full here) was first published in Axiom, Vol. 2 Issue 1, April 2016.


Copyright © 2016, 2017 by David Brin. All rights reserved.

