Our Final Invention

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat

In as little as a decade, artificial intelligence could match and then surpass human intelligence. Corporations and government agencies around the world are pouring billions into achieving AI’s Holy Grail: human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine.

Through profiles of tech visionaries, industry watchdogs, and groundbreaking AI systems, James Barrat’s Our Final Invention explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? And will they allow us to?

The key points Barrat argues for are these:

Intelligence explosion this century. We’ve already created machines that are better than humans at chess and many other tasks. At some point, probably this century, we’ll create machines that are as skilled at AI research as humans are. At that point, they will be able to improve their own capabilities very quickly. (Imagine 10,000 Geoff Hintons doing AI research around the clock, without any need to rest, write grants, or do anything else.) These machines will thus jump from roughly human-level general intelligence to vastly superhuman general intelligence in a matter of days, weeks, or years; the exact rate of self-improvement is hard to predict (the toy model sketched after this list illustrates why).

The power of superintelligence. Humans steer the future not because we’re the strongest or fastest but because we’re the smartest. Once machines are smarter than we are, they will be steering the future rather than us. We can’t constrain a superintelligence indefinitely: that would be like chimps trying to keep humans in a bamboo cage. In the end, if vastly smarter beings have different goals than you do, you’ve already lost.

Superintelligence does not imply benevolence. In AI, “intelligence” just means something like “the ability to efficiently achieve one’s goals in a variety of complex and novel environments.” Hence, intelligence can be applied to just about any set of goals: to play chess, to drive a car, to make money on the stock market, to calculate digits of pi, or anything else. Therefore, by default a machine superintelligence won’t happen to share our goals: it might just be really, really good at maximizing ExxonMobil’s stock price, or calculating digits of pi, or whatever it was designed to do. As Theodore Roosevelt said, “To educate [someone] in mind and not in morals is to educate a menace to society.”

Convergent instrumental goals. A few specific “instrumental” goals (means to ends) are implied by almost any set of “final” goals. If you want to fill the galaxy with happy sentient beings, you’ll first need to gather a lot of resources, protect yourself from threats, improve yourself so as to achieve your goals more efficiently, and so on. That’s also true if you just want to calculate as many digits of pi as you can, or if you want to maximize ExxonMobil’s stock price. Superintelligent machines are dangerous to humans not because they’ll angrily rebel against us; rather, the problem is that for almost any set of goals they might have, it will be instrumentally useful for them to use our resources to achieve those goals. As Eliezer Yudkowsky put it, “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”

Human values are complex. Our idealized values (not what we want right now, but what we would want if we had more time to think about our values, resolve contradictions in our values, and so on) are probably quite complex. Cognitive scientists have shown that we don’t care just about pleasure or personal happiness; rather, our brains are built with “a thousand shards of desire.” As such, we can’t give an AI our values just by telling it to “maximize human pleasure” or anything so simple as that. If we try to hand-code the AI’s values, we’ll probably miss something that we didn’t realize we cared about.

Human values are fragile. In addition to being complex, our values appear to be “fragile” in the following sense: there are some features of our values such that, if we leave them out or get them wrong, the future contains nearly 0% of what we value rather than 99% of what we value. For example, if we get a superintelligent machine to maximize what we value except that we don’t specify consciousness properly, then the future would be filled with minds processing information and doing things but there would be “nobody home.” Or if we get a superintelligent machine to maximize everything we value except that we don’t specify our value for novelty properly, then the future could be filled with minds experiencing the exact same “optimal” experience over and over again, like Mario grabbing the level-end flag on a continuous loop for a trillion years, instead of endless happy adventure.
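
To make the “hard to predict” point in the first key item concrete, here is a minimal toy model of recursive self-improvement. It is not from Barrat’s book: the growth rule, the 5% weekly rate, and the “1,000x human” target are invented assumptions, chosen only to show how sensitive the takeoff timescale is to one parameter.

```python
# Toy model of recursive self-improvement. Illustrative only: the
# growth rule, the 5% weekly rate, and the 1000x-human target are
# invented assumptions, not figures from Barrat's book.

def weeks_to_superhuman(feedback, rate=0.05, target=1000.0, max_weeks=100_000):
    """Weeks until capability grows from human level (1.0) to `target`.

    Each week, capability rises by rate * capability**feedback:
      feedback < 1 -> diminishing returns (slow takeoff)
      feedback = 1 -> compound (exponential) growth
      feedback > 1 -> faster-than-exponential growth (hard takeoff)
    """
    capability = 1.0  # 1.0 = roughly human-level AI research ability
    for week in range(1, max_weeks + 1):
        capability += rate * capability ** feedback  # self-improvement step
        if capability >= target:
            return week
    return None  # never reached the target within the horizon

for fb in (0.5, 1.0, 1.5):
    print(f"feedback={fb}: {weeks_to_superhuman(fb)} weeks to 1000x human")
```

Under these invented numbers, feedback = 0.5 takes over two decades, feedback = 1.0 takes a few years, and feedback = 1.5 takes under a year. A small change in a single assumption swings the timescale enormously, which is exactly why the rate of takeoff is so hard to forecast.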

http://www.kurzweilai.net/

Our Final Invention does an excellent job of explaining these and other technical AI details, all while leading a grand tour of the AI world. This is no dense academic text. Barrat uses clear journalistic prose and a personal touch honed through his years producing documentaries for National Geographic, Discovery, and PBS. The book chronicles his travels interviewing a breadth of leading AI researchers and analysts, interspersed with Barrat’s own thoughtful commentary. The net result is a rich introduction to AI concepts and characters. Newcomers and experts alike will learn much from it.

The book is especially welcome as a counterpoint to The Singularity Is Near and other works by Ray Kurzweil. Kurzweil is by far the most prominent spokesperson for AI’s potential to transform the world. But while Kurzweil does acknowledge the risks of AI, his overall tone is dangerously optimistic, giving the false impression that all is well and we should proceed apace with AGI and other transformative technologies. Our Final Invention does not make this mistake. Instead, it is unambiguous in its message of concern. http://blogs.scientificamerican.com/

Now comes James Barrat with a new book — “Our Final Invention: Artificial Intelligence and the End of the Human Era” — that accessibly chronicles these risks and how a number of top AI researchers and observers see them. If you read just one book that makes you confront scary high-tech realities that we’ll soon have no choice but to address, make it this one. https://www.washingtonpost.com/

For 20 years James Barrat has created documentary films for National Geographic, the BBC, Discovery Channel, History Channel, and public television. In 2000, in the course of his film-making career, Barrat interviewed Ray Kurzweil and Arthur C. Clarke. The latter interview not only entirely transformed Barrat’s views on artificial intelligence but also prompted him to write a book on the technological singularity called Our Final Invention: Artificial Intelligence and the End of the Human Era.

I read an advance copy of Our Final Invention and it is by far the most thoroughly researched and comprehensive anti-The Singularity Is Near book that I have read so far. And so I couldn’t help but invite James on Singularity 1 on 1 so that we could discuss the reasons for his abrupt change of mind and his consequent fear of the singularity.

During our 70-minute conversation with Barrat, we cover a variety of interesting topics, such as: his work as a documentary film-maker who takes interesting and complicated subjects and makes them simple to understand; why writing was his first love and how he got interested in the technological singularity; how his initial optimism about AI turned into pessimism; the thesis of Our Final Invention; why he sees artificial intelligence as more like ballistic missiles than video games; why true intelligence is an inherently unpredictable “black box”; how we can study AI before we can actually create it; hard vs. slow take-off scenarios; the positive bias in the singularity community; and our current chances of survival and what we should do.

James Barrat is a documentary filmmaker and author of “Our Final Invention: Artificial Intelligence and the End of the Human Era.”

For the fifth instalment of the Artificial Intelligence Series, Darrell Becker and I interview James Barrat. James is a filmmaker, public speaker, and author. His book, Our Final Invention, masterfully presents the case for why you should be worried about the singularity.

How is AI different from other multi-use technologies? What are the basic drives that an AI will have? Can’t we just program ethics into AIs and alleviate our worries? How does James feel about AI depictions in popular media? We cover all of these questions and more in this delightful conversation!

The Hollywood cliché is that artificial intelligence will take over the world. Could this cliché soon become scientific reality, as AI matches then surpasses human intelligence?

Each year, AI’s cognitive speed and power double; ours do not. Corporations and government agencies are pouring billions into achieving AI’s Holy Grail: human-level intelligence. Scientists argue that an AI that advanced will have survival drives much like our own. Can we share the planet with it and survive?

The recently published book Our Final Invention explores how the pursuit of Artificial Intelligence challenges our existence with machines that won’t love us or hate us, but whose indifference could spell our doom. Until now, intelligence has been constrained by the physical limits of its human hosts. What will happen when the brakes come off the most powerful force in the universe?

This London Futurists Hangout on Air will feature a live discussion between the author of Our Final Invention, James Barrat, and an international panel of leading futurists: Jaan Tallinn, William Hertling, Calum Chace, and Peter Rothman.
