For Those Facing the AI Apocalypse

Global Affairs SEPTEMBER 23, 2015 | 01:45 GMT

By Joel Garreau

For almost a decade, the dominant Silicon Valley prediction has been Singularitarian utopianism. In this story about the future, the godlike powers afforded by the genetics, robotics, artificial intelligence and nanotechnology revolutions rapidly cure stupidity, ignorance, pain, suffering and even death. We merge with our machines and thus transcend. This outcome is inevitable, according to this prediction, because technology is on its ever-increasing march, and it matters little what we try to do about it. Call this the Heaven Prediction.

In just the past few months, however, the fashionable prediction among the techno elite has changed to dystopianism — the imminent arrival of satanic artificial intelligences that will be the last invention humans ever make, or will be allowed to make. The word doom is used liberally. In this reading of the tea leaves, technology is in control and there's frighteningly little we can do about it. Call this the Hell Prediction.

What is up with this astonishing swing? And are these really the only two doors for humanity to pass through?

Let's be clear: There is nothing wrong with these Heaven and Hell stories as scenarios. They are perfectly credible and legitimate possible futures logically based on existing facts. What's remarkable, though, is that many of the advocates of these futures present them as stone-cold predictions. They see no alternative.

In truth, techies' very deep "super-brain" worries about the accelerating and astonishing powers of artificial intelligence go back years. But a few months ago came the explosive announcement from tech luminary Elon Musk, renowned physicist Stephen Hawking and many creators of artificial intelligence. They warned that the "intelligence explosion" could sink the human race. Such legends as Apple co-founder Steve Wozniak and Microsoft co-founder Bill Gates have since said they share the concern.

This is highly reminiscent of the moment back in April 2000, when Bill Joy, co-founder of Sun Microsystems and sometimes called "the Edison of the Internet," presented his manifesto "Why the Future Doesn't Need Us." It appeared in Wired — the house organ of the digerati — and was subtitled "Our most powerful 21st-century technologies — robotics, genetic engineering, and nanotech — are threatening to make humans an endangered species." It included the AI apocalypse and more. Joy explicitly intended it as a wake-up call comparable in magnitude to that of Albert Einstein advising Franklin D. Roosevelt of the possibility of an atomic bomb, just as Musk, Hawking and company intend their warnings today. And they're not kidding, and they're not wrong, and they're doing the species a favor.

The Prevail Scenario

But the surprising thing at this moment — well, maybe it isn't so surprising — is how often the techies can't think past their transistors when it comes to the impact of their creations on culture, values, society and the future of the human race.

Joy set the pattern that others continue to follow. First they pay due attention to Ray Kurzweil. Kurzweil is the polymath author of The Singularity Is Near: When Humans Transcend Biology and similar works. He is now a director of engineering at Google, heading up a team that develops machine intelligence. He is also cheerleader-in-chief for the Heaven Prediction and co-founder of Singularity University, where Musk has been a featured presenter.

The next step, however, comes when it occurs to them: "Hey, wait a minute. This could go exactly the opposite way." This is the moment — when they finally realize the Heaven Prediction is not bulletproof — that they switch to the other simplistic prediction, because they can see no logical alternative. Then they turn against their own creations.

The problem here is that both Heaven and Hell are technodeterministic stories. They are mirror images. Both assume that the core driver of change is how many transistors you can hook up, how fast. They then take this nice smooth curve of Moore's Law, map it onto the future of the human race — up or down — and voilà, they have a prediction. Technology drives history, in this view, leaving little or no room for human agency.

We've seen this error before. In the 1950s, no one would have given you a plugged nickel for the scenario we live in today, in which no one has popped a nuclear weapon in anger for 70 years. Of course, that abstinence could change in the next 20 minutes. But for three generations, we humans have figured out how to avoid this existential peril — and prevail.

The Hell Prediction folk — like Joy — ascribe that to "luck." Whenever the species dodges a bullet, they call it sheer blind fortune.

Maybe. But when a species manages to create its own luck for millennia, you have to start wondering how.

Enter the Prevail Scenario. This is the third story for how our futures might go. It is far more faithful to history as we have known it. Prevail is not some middle ground between Heaven and Hell. It is way off in its own territory. Its fundamental assumption is that what matters is not how many transistors you can hook up — à la Moore's Law. It's how many ornery, cussed, imaginative, surprising humans you can hook up.

Unquestionably, if we're waiting for the House Judiciary Committee or some learned university center to solve our problems at their usual pace — while game-changing challenges to the future of the human race are increasing on a curve — that's a problem. The gap just keeps getting wider.

Suppose, however, that our bottom-up, flock-like human responses to these challenges are also rapidly increasing on a second curve. Then we have a shot.
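To make that two-curves argument concrete, here is a minimal toy sketch in Python (my illustration, not the author's; the starting values and doubling periods are invented assumptions). If challenges compound on one exponential and bottom-up responses on a slightly faster one, the gap does not keep widening forever; it eventually closes:

    # Toy model of the "two curves" argument. Illustrative only:
    # the starting values and doubling periods are invented, not data.

    def curve(start: float, doubling_years: float, year: float) -> float:
        """Exponential growth: value at `year`, doubling every `doubling_years`."""
        return start * 2 ** (year / doubling_years)

    # Assumption: challenges start ten times "ahead" but double every
    # 2 years; bottom-up responses start behind but double every 1.5 years.
    for year in range(0, 31, 5):
        challenge = curve(start=10.0, doubling_years=2.0, year=year)
        response = curve(start=1.0, doubling_years=1.5, year=year)
        status = "gap closing" if response >= challenge else "gap open"
        print(f"year {year:2d}: challenge={challenge:9.1f} response={response:9.1f} {status}")

The specific numbers are beside the point; the argument requires only that the outcome turns on the relative growth rates of the two curves, not on technology's curve alone.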

There's reason for guarded optimism about the existence and efficacy of that second curve. If you look out at the future from A.D. 1200, you see marauding hordes and plague, and you say, "Okay, this experiment is over." But then circa 1450 you get movable type and the printing press. All of a sudden you've got a brand-new way for humans to store, share and distribute their ideas. The results are amazing. First you get the Renaissance. And then the Enlightenment, which yields that massively parallel processing called democracy. And science itself. And you find yourself in our world today, in which 1200 is ancient history in every sense.

These discoveries and innovations were beyond the imagination of any one king or country. These achievements were not top-down. They were bottom-up — frequently in defiance of power, notably the church. Our world was created by people who came together, collectively, to do the best they could against dire odds, sometimes hitting transcendence. If you want to call this "heroic muddling-through," I won't argue. Our literature is full of Prevail stories: from Exodus in the Bible to the British "nation of shopkeepers" prevailing against the Third Reich. From Huckleberry Finn to Casablanca. From Star Wars to Lord of the Rings to Harry Potter.

Today you can see it in our headlines. On 9/11, the fourth airplane — United Airlines Flight 93 — never makes it to its target. Why? Because the Air Force was so smart? No. Because the White House was so smart? Hell no. It's because a small group of people on board that aircraft — empowered by their air-phone technology — figured out, diagnosed and cured their society's ills in a little under an hour flat. Was it an ideal solution? No. They all died. But they prevailed.

So how would you know if the Prevail Scenario were the future actually coming into being? Are we seeing an exponential increase in the quantity, quality, variety and complexity of ways that humans are finding to connect? Are we seeing novel and interesting group behavior as a result — like flocks doing amazing and surprising things?

Well, how about eBay? That's not just the world's biggest flea market. That's more than 100 million people worldwide achieving complexity without leaders for a long time (by Internet standards). Wikipedia amplifies our minds. I have no idea what Twitter is good for, but if it flips out every tyrant in the Middle East, I'm interested.

The Final Exam

So here's the question for those facing the AI apocalypse: We know that innovation centers like the Defense Advanced Research Projects Agency (DARPA) can and do accelerate the first curve of technological change. Can we, reading this piece, become the DARPA of the second curve? Can we accelerate our species' co-evolution — to our ends?

The central question of this co-evolution is not what the computer will become. It's what kind of people we are becoming.

Can human understanding about human understanding increase?

Can we learn what actually makes teams work?

Do we have a moral obligation to use enhancement technology to make ourselves beings who are more compassionate, moral and wise?

Is that our only chance for survival? As the scenarist Arie de Geus says: "The ability to learn faster than your competition may be the only sustainable competitive advantage."

The stakes could not be higher.

We cannot detect any other intelligent life in the universe. It has occurred to me to wonder whether every intelligent species gets to the point where it takes control of its own evolution.

Maybe this is the final exam. Maybe everybody else flunked. Let's not flunk.


