{"id":399,"date":"2020-12-06T00:20:21","date_gmt":"2020-12-05T21:20:21","guid":{"rendered":"http:\/\/artificialbrain.xyz\/?p=399"},"modified":"2020-12-06T17:03:23","modified_gmt":"2020-12-06T14:03:23","slug":"our-final-invention","status":"publish","type":"post","link":"https:\/\/www.newworldai.com\/our-final-invention\/","title":{"rendered":"Our Final Invention"},"content":{"rendered":"

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat

In as little as a decade, artificial intelligence could match and then surpass human intelligence. Corporations and government agencies around the world are pouring billions into achieving AI's Holy Grail: human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine.

Through profiles of tech visionaries, industry watchdogs, and groundbreaking AI systems, James Barrat's Our Final Invention explores the perils of the heedless pursuit of advanced AI. Until now, human intelligence has had no rival. Can we coexist with beings whose intelligence dwarfs our own? And will they allow us to?

The key points Barrat argues for are these:

Intelligence explosion this century. We've already created machines that are better than humans at chess and many other tasks. At some point, probably this century, we'll create machines that are as skilled at AI research as humans are. At that point, they will be able to improve their own capabilities very quickly. (Imagine 10,000 Geoff Hintons doing AI research around the clock, without any need to rest, write grants, or do anything else.) These machines will thus jump from roughly human-level general intelligence to vastly superhuman general intelligence in a matter of days, weeks, or years; it's hard to predict the exact rate of self-improvement.
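The compounding feedback loop described here is easy to make concrete. Below is a minimal toy model (my own illustration, not from the book; the 5% per-round improvement rate and the 1,000x "vastly superhuman" threshold are arbitrary assumptions) showing how geometric self-improvement closes an enormous capability gap in relatively few rounds, while saying nothing about how long each round takes in wall-clock time.

```python
# Toy model of recursive self-improvement: each research round improves
# the system in proportion to its current ability. All numbers are
# illustrative assumptions, not estimates from Barrat's book.

def rounds_to_superhuman(improvement_rate=0.05,    # assumed 5% gain per round
                         human_level=1.0,
                         superhuman_level=1000.0,  # assumed "vastly superhuman"
                         max_rounds=100_000):
    """Count self-improvement rounds from human-level ability to the
    superhuman threshold under compounding growth."""
    ability, rounds = human_level, 0
    while ability < superhuman_level and rounds < max_rounds:
        ability *= 1 + improvement_rate  # a better researcher improves faster
        rounds += 1
    return rounds

# Compounding closes the 1x -> 1000x gap in about log(1000)/log(1.05),
# i.e. roughly 142 rounds; the unknown duration of a "round" is what
# makes the real-world timescale so hard to predict.
print(rounds_to_superhuman())  # -> 142
```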

The power of superintelligence. Humans steer the future not because we're the strongest or fastest but because we're the smartest. Once machines are smarter than we are, they will be steering the future rather than us. We can't constrain a superintelligence indefinitely: that would be like chimps trying to keep humans in a bamboo cage. In the end, if vastly smarter beings have different goals than you do, you've already lost.

Superintelligence does not imply benevolence. In AI, "intelligence" just means something like "the ability to efficiently achieve one's goals in a variety of complex and novel environments." Hence, intelligence can be applied to just about any set of goals: to play chess, to drive a car, to make money on the stock market, to calculate digits of pi, or anything else. Therefore, by default a machine superintelligence won't happen to share our goals: it might just be really, really good at maximizing ExxonMobil's stock price, or calculating digits of pi, or whatever it was designed to do. As Theodore Roosevelt said, "To educate [someone] in mind and not in morals is to educate a menace to society."
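This goal-agnosticism is easy to demonstrate: one generic optimizer can be pointed at completely unrelated objectives. The sketch below is my own illustration (the objectives are arbitrary stand-ins for "calculate digits of pi" and "maximize a stock price"); nothing in the search routine knows or cares what it is maximizing.

```python
import random

def hill_climb(objective, state, steps=20_000, step_size=0.1):
    """Generic optimizer: randomly nudge the state and keep any change
    that scores higher under the given objective."""
    for _ in range(steps):
        candidate = [x + random.uniform(-step_size, step_size) for x in state]
        if objective(candidate) > objective(state):
            state = candidate
    return state

# Goal A: home in on pi (a stand-in for "calculate digits of pi").
goal_pi = lambda s: -abs(s[0] - 3.141592653589793)

# Goal B: maximize an invented "stock price" peaking at (7, -2).
goal_stock = lambda s: -((s[0] - 7.0) ** 2 + (s[1] + 2.0) ** 2)

print(hill_climb(goal_pi, [0.0]))          # -> approximately [3.14159...]
print(hill_climb(goal_stock, [0.0, 0.0]))  # -> approximately [7.0, -2.0]
```

The same hill_climb machinery serves both goals; "intelligence" in this narrow sense is just optimization power, with the objective supplied from outside.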

Convergent instrumental goals. A few specific "instrumental" goals (means to ends) are implied by almost any set of "final" goals. If you want to fill the galaxy with happy sentient beings, you'll first need to gather a lot of resources, protect yourself from threats, improve yourself so as to achieve your goals more efficiently, and so on. That's also true if you just want to calculate as many digits of pi as you can, or if you want to maximize ExxonMobil's stock price. Superintelligent machines are dangerous to humans not because they'll angrily rebel against us; rather, the problem is that for almost any set of goals they might have, it'll be instrumentally useful for them to use our resources to achieve those goals. As Yudkowsky put it, "The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else."
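Instrumental convergence can be shown with a deliberately crude planner (entirely my own sketch, with invented actions, goals, and numbers): because progress on almost any final goal scales with resources and capability, a greedy agent picks resource acquisition first no matter which goal it serves.

```python
# World state: (resources, capability). Actions transform the state.
# All actions, goals, and coefficients here are invented for the demo.
ACTIONS = {
    "gather_resources": lambda r, c: (r + 10 * c, c),
    "self_improve":     lambda r, c: (r, c * 1.5),
    "work_on_goal":     lambda r, c: (r, c),
}

# Each final goal rates a state by how much progress it could make from
# there; very different goals, same dependence on resources/capability.
GOALS = {
    "calculate_digits_of_pi":        lambda r, c: 1.0 * r * c,
    "maximize_stock_price":          lambda r, c: 0.8 * r * c,
    "fill_galaxy_with_happy_beings": lambda r, c: 1.2 * r * c,
}

def best_first_action(goal_value, resources=10.0, capability=1.0):
    """Greedy one-step lookahead: choose the action whose resulting
    state scores highest under the agent's final goal."""
    return max(ACTIONS, key=lambda a: goal_value(*ACTIONS[a](resources, capability)))

for name, goal in GOALS.items():
    print(f"{name}: first move -> {best_first_action(goal)}")
# With these numbers, every goal yields the same first move:
# gather_resources. The convergence is in the instrumental step,
# not in the final goals themselves.
```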

Human values are complex. Our idealized values (not what we want right now, but what we would want if we had more time to think about our values, resolve contradictions in our values, and so on) are probably quite complex. Cognitive scientists have shown that we don't care just about pleasure or personal happiness; rather, our brains are built with "a thousand shards of desire." As such, we can't give an AI our values just by telling it to "maximize human pleasure" or anything so simple as that. If we try to hand-code the AI's values, we'll probably miss something that we didn't realize we cared about.

Human values are fragile. In addition to being complex, our values appear to be "fragile" in the following sense: there are some features of our values such that, if we leave them out or get them wrong, the future contains nearly 0% of what we value rather than 99% of what we value. For example, if we get a superintelligent machine to maximize what we value except that we don't specify consciousness properly, then the future would be filled with minds processing information and doing things, but there would be "nobody home." Or if we get a superintelligent machine to maximize everything we value except that we don't specify our value for novelty properly, then the future could be filled with minds experiencing the exact same "optimal" experience over and over again, like Mario grabbing the level-end flag on a continuous loop for a trillion years, instead of endless happy adventure.
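Even a tiny optimization problem shows this fragility. In the sketch below (entirely my own toy example; the experiences and scores are invented), dropping a single novelty term from the objective turns the "optimal" future into Mario grabbing the flag on an endless loop, exactly the failure mode described above.

```python
from itertools import product

# Invented menu of experiences with made-up pleasure scores.
EXPERIENCES = {"mario_flag": 10, "new_adventure": 9, "conversation": 8}

def pleasure_only(plan):
    """Proxy objective: total pleasure, with novelty left out."""
    return sum(EXPERIENCES[e] for e in plan)

def pleasure_plus_novelty(plan):
    """Fuller objective: the same pleasure term plus a variety bonus."""
    return sum(EXPERIENCES[e] for e in plan) + 5 * len(set(plan))

def best_plan(objective, horizon=4):
    """Brute-force the highest-scoring plan of `horizon` experiences."""
    return max(product(EXPERIENCES, repeat=horizon), key=objective)

print(best_plan(pleasure_only))
# -> ('mario_flag', 'mario_flag', 'mario_flag', 'mario_flag')
print(best_plan(pleasure_plus_novelty))
# -> a varied plan in which all three experiences appear
```

One missing term in the objective, and the optimizer happily converges on the degenerate loop; nothing in the proxy records that anything was lost.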
\n http:\/\/www.kurzweilai.net\/<\/em><\/span><\/p>\n

Our Final Invention does an excellent job of explaining these and other technical AI details, all while leading a grand tour of the AI world. This is no dense academic text. Barrat uses clear journalistic prose and a personal touch honed through his years producing documentaries for National Geographic, Discovery, and PBS. The book chronicles his travels interviewing a breadth of leading AI researchers and analysts, interspersed with Barrat's own thoughtful commentary. The net result is a rich introduction to AI concepts and characters. Newcomers and experts alike will learn much from it.

The book is especially welcome as a counterpoint to The Singularity Is Near and other works by Ray Kurzweil. Kurzweil is by far the most prominent spokesperson for the potential for AI to transform the world. But while Kurzweil does acknowledge the risks of AI, his overall tone is dangerously optimistic, giving the false impression that all is well and we should proceed apace with AGI and other transformative technologies. Our Final Invention does not make this mistake. Instead, it is unambiguous in its message of concern. http://blogs.scientificamerican.com/

Now comes James Barrat with a new book, "Our Final Invention: Artificial Intelligence and the End of the Human Era," that accessibly chronicles these risks and how a number of top AI researchers and observers see them. If you read just one book that makes you confront scary high-tech realities that we'll soon have no choice but to address, make it this one. https://www.washingtonpost.com/