Ray Kurzweil, one of the world’s leading inventors, thinkers, and futurists, with a thirty-year track record of accurate predictions, called “the restless genius” by The Wall Street Journal and “the ultimate thinking machine” by Forbes magazine, spoke at the Nobel Week Dialogue in Gothenburg, Sweden.
In this talk, Kurzweil explores the history and trajectory of exponential advances in computing and information technology to project how he believes artificial intelligence (AI) may enhance our natural biological intelligence in the future.
“It’s a pleasure to be here in this beautiful city and to be part of this prestigious proceeding. I’m going to talk about the future of intelligence, where we’re going to enhance our natural biological intelligence, something we’ve long been doing through education, with artificial intelligence.
First, I want to share a surprising discovery I made in 1981. I was trying to figure out how to time my inventions, and I started with the common wisdom that you cannot predict the future. But, being an engineer, I thought that if I plotted a lot of data and visualized it in the right way, I could make some educated guesses. I made a surprising discovery: there’s one aspect of the future that’s remarkably predictable. The price-performance and capacity, not of everything, and not of every technology, but of information technology, proceeds in a predictable manner.
In 1981, I had plotted the price-performance of computing, in calculations per second per constant dollar, from the 1890 American census through 1980. It was a very smooth curve, and I projected it out to 2050. We’re now 34 years later, in 2015, and it’s exactly where it should be.
That predictable trajectory is exponential, meaning it doubles every fixed period of time, currently about every year, and even that rate is speeding up.
Exponential growth is actually very different from our intuition. If you wonder why we have a brain at all: it is to make predictions about the future, so that we can anticipate the consequences of our actions and the consequences of inaction.
But those built-in predictors are linear. When we tracked an animal in the field 10,000 years ago, we didn’t expect it to speed up; we expected it to continue on a constant trajectory. That’s a linear projection, it worked very well, and it became hardwired in our brains.
The reality of information technology, however, is that it progresses by doubling every period. So, what’s the difference? A linear projection, our intuition, goes 1, 2, 3; an exponential projection, the reality of information technology, goes 1, 2, 4. That doesn’t sound very different, but by the time you get to step 30, the linear projection is at 30 while the exponential projection is at a billion, and by step 40 it’s at a trillion. This is not idle speculation about the future.
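The arithmetic behind this contrast is easy to verify; here is a minimal sketch (the function names are illustrative, not from the talk):

```python
# Linear vs. exponential projection: our intuition counts 1, 2, 3, ...
# while information technology doubles: 1, 2, 4, ... (2**n after n doublings).
def linear(steps):
    return steps  # value after `steps` linear increments

def exponential(steps):
    return 2 ** steps  # value after `steps` doublings

print(linear(30))        # 30
print(exponential(30))   # 1073741824  (roughly a billion)
print(exponential(40))   # 1099511627776  (roughly a trillion)
```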
I mean this little computer is actually several billion times more powerful per dollar than the computer I used when I went to MIT in the 1960s.
We’re going to do that again in the next 25 years: it’ll be another several billion times more powerful, and it will keep shrinking in size. This device is a hundred thousand times smaller than that computer, and in the 2030s we will have computers and robotic devices the size of blood cells.
So, let me show you what this means. An exponential starts out very slowly and then explodes when it reaches what I call the “knee of the curve.”
[Slide: “Information technologies of all kinds double their power (price-performance, capacity, bandwidth) about every year.”]
Here’s that graph I had in 1981; back then I had it through 1980, and this version goes through 2010. That’s a very smooth exponential progression (actually doubly exponential). This is a logarithmic graph, so as we go up we’re multiplying: every level is 100,000 times greater than the level below it. It goes back to the 1890 census, and it shows literally billions of times (actually a trillion times) greater computation for the same cost since that census.
People look at this and say “Moore’s law.” But this started decades before Gordon Moore was even born. Moore’s law is just that part on the right, the paradigm having to do with chips. We were shrinking vacuum tubes in the 1950s to keep this going; then that hit a wall in 1959, when we couldn’t shrink vacuum tubes anymore and keep the vacuum. So we went to the fourth paradigm, transistors. People have been talking about the end of Moore’s law, which will happen by 2020; then we’ll go to the sixth paradigm, three-dimensional computing, which has already begun and will be in full swing by 2020, and that will keep this going for a very long time.
But really, what’s the most interesting thing about this graph? The fact that it’s trillions of times more computation for the same cost is interesting. But more interesting is: where is World War I, World War II, the Cold War, the Great Depression? This went through thick and thin, through war and peace, boom times and recessions. People felt there must have been a slowdown during the recent recession. That’s not the case; it has a mind of its own. I have a mathematical treatment of why this is the case in my book The Singularity Is Near, but really the empirical evidence is the most convincing.
Moore’s law is really just one paradigm among many within computation, and computation is just one type of information technology. I don’t have time to dwell on all this, but consider: you could buy one transistor for a dollar in 1968; you can buy 10 billion for a dollar today. And they’re actually better, because they’re smaller, so the electrons have less distance to travel and they’ve sped up. The cost of a transistor cycle has come down by half every year. That’s a 50 percent deflation rate. This is really an economic thesis, having to do with the economics of abundance, which is what information technology presents, versus the economics of scarcity, where we see inflation. People say, “okay, that has to do with these strange little devices that we carry around, but that’s just a very small part of the economy.” But one industry, one area after another, is going to be transformed from a non-information technology into an information technology.
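The compounding effect of that 50 percent deflation rate can be sketched quickly (the function and its name are illustrative, not from the talk):

```python
# If the cost of a transistor cycle halves every year (a 50% deflation
# rate), the relative price of a fixed amount of computation after
# n years is (1/2)**n of today's price.
def relative_cost(years, annual_deflation=0.5):
    return (1 - annual_deflation) ** years

print(relative_cost(10))  # 0.0009765625, i.e. about 1/1000 of the starting cost
```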
The one that’s undergoing that transformation right now is biology, and the enabling factor was the genome project. That was a perfect exponential. Halfway through the project, we had sequenced 1% of the genome, and the mainstream critics said, “We told you this wasn’t going to work. You’re seven years in and one percent done; it’s going to take seven hundred years, just like we said.”
That’s linear thinking. My reaction at the time was, “Oh, we’ve finished 1%; we’re almost done,” because 1% is only seven doublings from 100%. Indeed, it kept doubling and was finished seven years later. That progress has continued since the end of the genome project. The first genome cost a billion dollars; we’re now down to a few thousand dollars per genome. And it’s not just sequencing: our ability to understand the genome, which is basically software, to model it, to simulate it, to reprogram it, to change it, to overcome disease and aging processes, is also accelerating at that exponential pace. These technologies are now a thousand times more powerful than they were a decade ago, when the genome project was completed.
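The “seven doublings” claim checks out numerically; a quick sketch:

```python
# Starting at 1% of the genome sequenced and doubling every period,
# count the doublings needed to reach (at least) 100%.
share = 0.01
doublings = 0
while share < 1.0:
    share *= 2
    doublings += 1
print(doublings)  # 7, since 0.01 * 2**7 = 1.28 >= 1.0
```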
Now, people worry about deflation. We had massive deflation during the Depression of the 1930s, but that was for a different reason: a collapse of consumer confidence. The concern today is this: if I can get the same stuff, the same computation, the same communication, the same genetic sequencing that I could get a year ago for half the price, okay, I’ll buy more, but am I going to double my consumption? After all, how much do I need? Won’t I saturate my ability to consume these resources? If I don’t double my consumption, the size of the economy, measured not in bits, bytes, and base pairs but in constant dollars or euros or kronor, is going to shrink, and for a variety of good reasons that would be a bad thing.
But that’s actually not what we see. This graph shows bits of memory chips, and we have dozens of graphs like this: we actually more than double our consumption. We’ve been doing that in every form of information technology for the last 50 years. The reason is innovation and creativity, basically what the Nobel Prize celebrates. When price-performance reaches certain points, whole new applications explode.
This is communications: internet data traffic, the number of bits we move around wirelessly in the world. It was Morse code over AM radio a century ago; now it’s 3G and 4G networks. Again, perfect exponential growth, multiplied by a trillion in the last century.
This is the graph I had of the internet in the early 80s, when it was called the ARPANET and connected a few thousand scientists. I did the math and said, wow, this could be a worldwide web connecting hundreds of millions of people in the late 1990s. We’ll need search engines, because we won’t be able to find anything, and the computational and communication resources needed for search engines will emerge. What I could not predict is that it would be a couple of kids in a Stanford dormitory who would take over the world of search, among the 50 projects that were trying to do that. But the fact that we would need search engines, and that they would be feasible, was predictable.
That’s the same graph seen on a linear scale, which is how we experienced it. So to the casual observer it looked like the World Wide Web was a new thing that came out of nowhere. But you could see it coming if you looked at the exponential progression.
I mentioned biology; this is a whole revolution. Health and medicine used to be a linear, hit-or-miss process. Now we’re actually treating the software of life as software, as an information process, and so it’s becoming an information technology. This is a grand transformation which is generally overlooked when we look at the future of medicine.
So, supercomputers and artificial intelligence: we need both the hardware and the software. There have been many different ways to estimate how much hardware capacity we need in order to functionally simulate the human brain. Hans Moravec did it one way, I’ve done it a couple of other ways, and there have been other estimates; they all come out to around 10^14 calculations per second. We exceeded that ten years ago in supercomputers, and we’re now vastly beyond it. A personal computer will achieve that level in the early 2020s.
This is a logarithmic scale: perfect exponential growth in supercomputer capacity. Here’s a whole different area: we’re applying nanotechnology, which is a form of information technology, to the design of solar collection panels and also to energy storage. The amount of solar energy is doubling every two years; again, a perfect exponential. Right now, we’re only six doublings from meeting 100% of our energy needs, at which point we’ll be using only one part in 10,000 of the sunlight that falls on the Earth.
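The solar arithmetic is also easy to check; a sketch under the talk’s stated assumptions (a two-year doubling period and six doublings remaining; the implied ~1.6% current share is my inference, not a figure from the talk):

```python
# Six doublings short of 100% implies a current share of 1/2**6, and at a
# two-year doubling period, 100% would be reached in 6 * 2 = 12 years.
doublings_remaining = 6
doubling_period_years = 2
implied_current_share = 1.0 / 2 ** doublings_remaining
years_to_full_coverage = doublings_remaining * doubling_period_years
print(implied_current_share)   # 0.015625, i.e. about 1.6% today
print(years_to_full_coverage)  # 12
```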
Three-dimensional printing: physical things are going to be transformed too. I think we’re in the hype phase of 3D printing now. It’s going to take off in the 2020s, when we have submicron resolution, and ultimately it will greatly expand our ability to create physical things as an information technology.
Ultimately, computerized devices will be the size of blood cells. We’ll put them in our bloodstream; they’ll augment our immune system; they’ll provide virtual reality from within the nervous system; and they’ll also provide a direct connection from our brain to the cloud. That sounds very futuristic, but I’d point out that Parkinson’s patients already have a neural implant, a computer connected into their brain, that connects to the world outside the patient. They can download new software to this neural implant from outside the patient.
So, I’ve been thinking about thinking for 50 years, and we now have effective models of how it works. The spatial resolution of non-invasive brain scanning is doubling every year. We have effective (functional) models now of the neocortex, which is where we do our thinking. That’s the outer layer of the brain, which emerged 200 million years ago with mammals.
We don’t have perfect knowledge, but we’re gaining very useful information from the brain reverse-engineering projects here in Europe and in the United States, which are giving us hints as to how the neocortex works. What I’m doing with my group at Google is actually creating functional simulations of how we believe the neocortex works. Those will get more refined as we learn more and more about the brain.
I’ll just give you a simple example. The basic unit of the neocortex is not one neuron. We’ve made great advances, as you may have heard, in deep neural nets, where we can now go multiple layers deep and extract very abstract features. Just a few years ago, people complained that AI couldn’t even tell the difference between a dog and a cat. Now it can, and 10,000 other categories as well, with deep neural nets.
But there, the chief unit we’re building on is a neuron. The real way the brain is organized is in modules of about a hundred neurons, and each of these modules can recognize a pattern. We were debating at dinner last night exactly which mathematical model fits that best: a hidden Markov model or long short-term memory. I won’t elaborate on that right now, but we’re gaining more and more insight into how these modules work. They’re organized in hierarchies, and going up the hierarchy we recognize more and more abstract features.
At the very highest level, about 15 to 20 levels up, we recognize things like “oh, that was funny,” “that was ironic,” “she’s pretty.” You might think those are the most sophisticated recognitions, but it’s actually the hierarchy below them that’s more sophisticated.
A 16-year-old girl was having brain surgery, and she was talking to the surgeons; they wanted to talk to her, and you can do that because there are no pain receptors in the brain. Whenever they stimulated certain points on her neocortex, she would start to laugh, and they thought they had found some kind of laugh reflex. But no, they had actually found the points in her neocortex that recognize humor. “You guys are so funny, just standing there” was a typical comment, and they weren’t funny, not while doing brain surgery.
We’re gaining more and more insight, and this is helping us to build artificial intelligence. Ultimately, we will use artificial intelligence, first of all, to understand language. The head of Watson is here and will tell you about his system; we’re creating something similar at Google which can actually understand language. Watson got this query correct in the rhyme category: “a long, tiresome speech delivered by a frothy pie topping,” and it quickly said, “What is a meringue harangue?”, which is pretty good. The humans didn’t get that. Watson got a higher score than the best two humans combined, and it got its knowledge by reading Wikipedia. We’re doing something similar at Google.
Ultimately, we will put AI in the cloud and connect to it with these nanobots: an extra neocortex in the cloud. This device is very powerful, but if I want to do anything interesting, like search or language translation, it doesn’t take place in the device itself; the device can multiply itself thousands- or millions-fold by connecting wirelessly to the cloud. We ultimately will do the same with these nanorobots in our brains. So if I’m walking along and I see somebody and I’ve got to think of something clever, and I’ve got two seconds, my 300 million neocortical modules aren’t going to cut it. I need a billion or ten billion for two seconds. I’ll be able to access that wirelessly in the cloud, just the way these devices can access additional computation in the cloud, and so we’ll become a hybrid of biological and non-biological thinking. We’ll apply that to solving the problems of humanity.”