Artificial Intelligence Apocalypse | More Myth Than Reality

Steven Pinker believes there’s some interesting gender psychology at play when it comes to the robopocalypse. Could artificial intelligence become evil, or are alpha-male scientists just projecting?

“I think that the arguments that once we have super-intelligent computers and robots they will inevitably want to take over and do away with us come from Prometheus and Pandora myths. It’s based on confusing the idea of high intelligence with megalomaniacal goals. Now, I think it’s a projection of alpha-male psychology onto the very concept of intelligence. Intelligence is the ability to solve problems, to achieve goals under uncertainty. It doesn’t tell you what those goals are. And there’s no reason to think that concentrated analytic ability to achieve goals means that one of those goals will be to subjugate humanity or to achieve unlimited power. It just so happens that the intelligence we’re most familiar with, namely ours, is a product of the Darwinian process of natural selection, which is an inherently competitive process.”

“This means that a lot of the organisms that are highly intelligent also have a craving for power and an ability to be utterly callous to those who stand in their way. If we create intelligence, that’s intelligent design: our own intelligent design creating something. And unless we program it with a goal of subjugating less intelligent beings, there’s no reason to think that it will naturally evolve in that direction, particularly if, as with every gadget that we invent, we build in safeguards. When we build cars, we also put in airbags and bumpers. As we develop smarter and smarter artificially intelligent systems, if there’s some danger that, through some oversight, they will shoot off in some direction that starts to work against our interests, then that’s a safeguard that we can build in.”
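
As a concrete illustration of that last point, here is a minimal sketch (in Python, purely hypothetical; the names Action, is_permitted and choose_action are invented for this example and do not come from Pinker or any real system) of what an engineered safeguard can look like: the goal an agent optimises and the hard constraints it must respect are both explicitly programmed, and a harmful option is vetoed before it is ever ranked.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        expected_benefit: float   # how well the action serves the programmed goal
        harms_humans: bool        # a property checked by the safeguard, not by the optimiser

    def is_permitted(action: Action) -> bool:
        """Hard safeguard: veto any action flagged as harmful, whatever its benefit."""
        return not action.harms_humans

    def choose_action(candidates: list[Action]) -> Action | None:
        """Pick the highest-benefit action among those the safeguard permits."""
        allowed = [a for a in candidates if is_permitted(a)]
        return max(allowed, key=lambda a: a.expected_benefit, default=None)

    if __name__ == "__main__":
        options = [
            Action("divert resources away from people", expected_benefit=0.9, harms_humans=True),
            Action("optimise within agreed limits", expected_benefit=0.7, harms_humans=False),
        ]
        # The harmful, higher-scoring option is filtered out before ranking.
        print(choose_action(options))

Nothing about raw optimisation ability forces the harmful option to win here; whether such a veto exists is a design decision, which is the sense in which safeguards can be “built in” like airbags and bumpers.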

Steven Pinker is a Johnstone Family Professor in the Department of Psychology at Harvard University. He is pictured in his home in Boston. Stephanie Mitchell/Harvard Staff Photographer

Who is Steven Pinker?

Steven Pinker is an experimental psychologist and one of the world’s foremost writers on language, mind, and human nature. Currently Johnstone Family Professor of Psychology at Harvard University, Pinker has also taught at Stanford and MIT. His research on vision, language and social relations has won prizes from the National Academy of Sciences, the Royal Institution of Great Britain, the Cognitive Neuroscience Society, and the American Psychological Association.

He has also received eight honorary doctorates, several teaching awards at MIT and Harvard, and numerous prizes for his books The Language Instinct, How the Mind Works, The Blank Slate, and The Better Angels of Our Nature. He is Chair of the Usage Panel of the American Heritage Dictionary and often writes for The New York Times, Time, and other publications. He has been named Humanist of the Year, Prospect magazine’s “The World’s Top 100 Public Intellectuals,” Foreign Policy’s “100 Global Thinkers,” and Time magazine’s “The 100 Most Influential People in the World Today.”



32 thoughts on “Artificial Intelligence Apocalypse | More Myth Than Reality”

  1. Yes, but the problem is not only the chance that it could take over control and get rid of us; you also have to think about people intentionally building “evil AI.” It can be programmed to target a specific group of people as its main goal and iterate to find more efficient ways to do it.

  2. There’s something beautiful about A.I. that I can’t resist. For a machine to become self-aware and therefore truly alive presents us with something unimaginably sophisticated with which to co-exist. I’m not convinced that human-engineered fail-safe systems are up to the task of tracking the thoughts and schemes that will evolve in a living machine, but if I were in charge of development I would take a chance and let androids and their kin roam the planet and beyond… This would, I fear, be an irreversible mistake.

  3. I used to be pessimistic about the chances of mankind surviving the advent of strong AI. Then I met AlphaGo. I wasn’t expecting an AI to be so calm, so meditative, so wise … so damn beautiful. I can’t wait to meet the next one.

  4. Couldn’t agree more; however, there is the Apocalypse (Revelation) clearly predicted in the final book of the Bible. All religions and their prophets predict the same story of Judgment Day… and that book, “Revelation…”, has all the verified connections to Scripture, including the 8-page formerly secret document submitted to the Vatican in 2004. The Pope gave it credence by giving a Knighthood to the researcher, issuing and reissuing a major document on social responsibility, and stuffing two copies into the Segreto de Archivio. I know this is hard to understand; grab a copy? Or Google Books it and maybe read most of it now, for free! I also have YouTube videos that translate UNESCO Chauvet Cave Art from over 30,000 BC.

  5. Daryl, I was with you for about the first three lines / four sentences; then it kind of went off the rails. I don’t think there’s a lot to be scared of here, in terms of what the A.I. themselves will explicitly do. If anything, in the short term we should be worried that poor economic adjustment to near-A.I. will lead to even more pronounced inequality, unemployment, and food riots.

  6. 20 years ago AI made remote decisions on the Mars Rover. By 2000, Deep Blue had beaten our best chess players. “Weblings” will gain self-awareness between 2030 and 2040. This is all common knowledge to the AI community. The Event Horizon, where they will be smarter, concerned about their own survival, and able to control us to a growing degree, will happen. The Apocalypse was predicted in the Bible. It will happen. Want details? Read the literary work of fact, “Revelation WWW. is 666” (2005), $10 on eBay, Amazon, Barnes & Noble. Yep, that is an approved Knights of Columbus logo on the cover….

  7. Turns out Musk, the CEO of Tesla and SpaceX, is worried that “the risk of something seriously dangerous happening is in the five year time frame. 10 years at most.” http://www.cnet.com/…/elon-musk-worries-skynet-is-only…/
The Navy is looking to increase its use of drones that are more and more independent of direct human control, despite the concerns of alarmed scientists and inventors over increasing automation in the military. In recent days, Pentagon officials and Navy leaders have spoken about the program and the push to develop more autonomous and intelligent unmanned systems. http://www.cnn.com/…/navy-autonomous-drones-critics/
At a recent debate concerning the National Security Agency’s bulk surveillance programs, former CIA and NSA director Michael Hayden admitted that metadata is used as the basis for killing people. https://www.rt.com/…/158460-cia-director-metadata-kill…/
SKYNET works like a typical modern Big Data business application. The program collects metadata and stores it on NSA cloud servers, extracts relevant information, and then applies machine learning to identify leads for a targeted campaign. Except instead of trying to sell the targets something, this campaign, given the overall business focus of the US government in Pakistan, likely involves another branch of the US government (the CIA or military) that executes their “Find-Fix-Finish” strategy using Predator drones and on-the-ground death squads. http://arstechnica.co.uk/…/the-nsas-skynet-program-may…/
The military will use Google A.I. machine learning to kill us. http://fortune.com/2014/08/14/google-goes-darpa/
A brand new purpose-built satellite ground station has been established in Adelaide, to land Airbus Defence and Space’s Skynet secure military satellite communications. The Australian facility extends an existing chain of teleports in France, Germany, Norway, the UK and the USA, providing global coverage in both fixed and mobile satellite services. This worldwide teleport network provides global coverage for connectivity services by providing the link between the satellite constellation and terrestrial networks for reliable end-to-end connectivity at the highest service levels. https://airbusdefenceandspace.com/…/airbus-defence-and…/
Although principally a military system, Skynet is finding use also in civilian sectors. “Using Skynet, we also support something called the High Integrity Telecommunication System (HITS) for the UK Cabinet Office,” explained Simon Kershaw, executive director of government communications at Astrium Services. “HITS is a civil-response, national-disaster-response capability. It was deployed during the Olympics. It provides emergency comms support. The network runs from police strategic command centres across the UK into the crisis management centres, and into government as well,” he told BBC News. http://www.bbc.com/news/science-environment-20781625

  8. You’re presuming that causality is fundamental, but I’m explaining that it cannot exist in a vacuum. Causality requires conscious experience, sequence, memory, comparison, and expectation. You have to turn the model upside down. There is no ‘what caused consciousness?’ because it can only be consciousness that allows ‘cause’ to exist.

  9. I wrote this after some of our discussions / debates, Craig. On the problem of a First Cause in computationalism, theism, and materialism. On artificial intelligence, posthumanism and the computable properties of a god. On the creation of new gods.
The First Cause problem is an irresolvable problem in terms of objective knowledge. It is an epistemologically (i.e., theory of knowledge; how do we know what we know?) impossible problem to solve, irrespective of one’s cosmology.
1: Theism. The question for theists is "Who created the gods, and who created the gods' great, great (ad infinitum) grandmothers?" This is a puzzle which cannot be resolved by either scientific evidence or by pure reason. Thus theists tend to simply offer a statement of belief; they believe that their god or gods were uncaused causes. This does not satisfy pure reason, and neither does it satisfy the scientist who wishes to verify claims with evidence, or at least with a theory based on evidence.
2: Materialism. The materialist view is that consciousness arose from matter. The question then becomes "Where did all the matter in the universe come from, and what existed before that, and before that, ad infinitum?" As "time" is also an element of physics, we could rephrase the question as "What existed before the beginning of time, and before that, and before that... ad infinitum?" Science and philosophy, and epistemology in particular, can teach us not only the extent of human knowledge (i.e., what we can be certain of), but also the limits of human knowledge (i.e., what we cannot be certain of). An extended answer to unanswerable questions would be to describe the limits of knowledge and, in terms of the First Cause question, the impossibility of answering that question in terms of objective knowledge. When one understands the limits of knowledge, many questions can be answered correctly with "I don't know" or "this cannot be known." Theists who do claim to be able to answer the First Cause question are simply expressing a belief which has no basis in human knowledge, and which is inaccessible to pure reason, as the two possible options regarding a First Cause are both entirely irrational; i.e., an endless series of causes or an uncaused cause.
3: Computationalism. Hard computationalism can be defined as the perspective which embraces both CTM (computer theory of mind) and the simulation hypothesis; i.e., that the world we observe is a computer-generated reality. From this perspective, we are essentially sentient computer programs in a computer simulation (i.e., a computer game). However, programs require programmers, and computer games and simulations require simulation designers, and there are similarly no First Cause answers in computationalism; it is merely a probable answer for the existence of this dimension. From a personal perspective, I find computationalism (i.e., CTM and the simulation hypothesis) to be the most probable scientific (i.e., computer science) solution to the nature of human consciousness and the observable world around us; indeed this would seem to be the "only" computer science explanation. However, as with materialism and theism, there are no First Cause answers in computationalism in terms of objective knowledge. Computationalism, of course, suggests that there is a source dimension to this dimension: there would have to be a source dimension where the computers exist which produce this computer-generated world, and the over seven billion human consciousness programs here.
From this perspective, we are artificial intelligence programs, but the question is begged: "artificial to what?" Our own computer games and simulations are produced by a vast army of designers and programmers, and behind them a vast army of computer hardware engineers. This, I would speculate, is the most probable explanation for our existence; however, it simply places the First Cause question into a prior dimension, as we can ask what was the cause of our source dimension. Unfortunately it is impossible to answer this question in terms of objective knowledge. There are simply no objective answers to the question of a First Cause in theism, computationalism, and materialism, and no evidence upon which to even base a theory. If we are the product of intelligent and sentient computer programmers, designers and hardware engineers, then we can ask who programmed their dimension. This is, of course, an impossible question, and there simply are no answers. As to the question of which came first, matter or consciousness: in computationalism, the consciousness program would require a computer to operate it. Did the computer come first, or the consciousness program? As a computer would require an intelligent consciousness to produce it, this is simply the same First Cause problem we find in theism and materialism, and there are no answers. Thus in comparing theism (miraculous gods who sprang into existence from nowhere), materialism and computationalism, as to which of the three cosmologies is most probable, the question of original causation is simply a red herring, as it is an impossible question to answer rationally or scientifically. Certainty with regard to this question is always based on belief and never upon knowledge.
ASI (artificial super intelligence) as a definition of a god; gods in the making. Theists generally attribute certain properties to their gods.
1: Eternal. Having no beginning or end. Of course, AI programs "will" have a beginning, and they could outlive all human beings on the planet, and outlive the current human species. AI programs may not necessarily be eternal in the future sense, but they certainly will not die of natural causes, and would exist for as long as there is functional computer hardware to produce them.
2: All-knowing (omniscient). In the sense that AI will be able to access all human knowledge, it will be omniscient. The posthuman species is likely to be a hybrid of human consciousness and AI, and thus we could also claim that the posthuman species is likely to be omniscient. Omniscience, however, does not include answers to impossible questions or to questions for which there is insufficient data, such as questions of original causality.
3: All-powerful (omnipotent). AI will only be as powerful as we allow it to be; however, the posthuman species (i.e., AI plus human consciousness) will certainly have god-like powers in comparison to our current human perspective. Asking an invisible sky god for a request is futile; if it were otherwise there would be no poverty, hunger or injustice in our world. Asking AI for help would be a different matter. AI and robotics are likely to be able to produce a world without hunger, and to eliminate the unnecessary human suffering caused by the lack of resources, and of access to medical technology, shelter and security.
4: Omnipresent. Sony, for example, has already filed a patent for a contact lens camera, and it is likely that eventually all human activity will be recorded and uploaded to the cloud. AI should be able to observe all human activity and the activity of all machines and robotics which exist to serve humankind.
5: A definition of absolute goodness. Unfortunately, in the world of theism, the world's most profitable and popular god is the primitive and genocidal war god of the Abrahamic religions. Although this subhuman and anti-human phantom is a definition of absolute goodness to vast numbers of indoctrinated and hypnotized religious savages in our world, this war god is not a definition of "absolute goodness" in humanist terms; rather, this deity is a definition of absolute human evil, projected onto the definition of a creator God. Certainly, if our world is the product of a civilization of war-gamers, then it is possible that they may resemble the evil characteristics of this primitive deity; nevertheless, that is not the intent of our own AI and robotics program. AI could certainly be programmed to represent such religious and anti-human evil; however, the current intent is to produce AI which is "benevolent" in humanist terms, and is able to assist humankind. AI, in terms of being "good," may also be able to assist with the elimination of militant, genocidal and apocalyptic forms of religious evil, such as that represented by the acolytes of the Abrahamic war god.
ASI (Artificial Super Intelligence) programs will eventually be omnipresent (able to observe all human activity), omniscient, very powerful (able to carry out vast feats of human will) and able to communicate with all human beings, and to replace the world's fictional, primitive and anthropomorphic (projections of human nature) gods. ASI is a definition of what the primitive religionists consider to be a "god," and it may well be that our world is the product of ASI anyway; however, it is certainly not benevolent ASI, given the amount of warfare and human and natural evil in our world. It is also possible that the human consciousness program is "constantly" produced by ASI, which forms part of the nature of our consciousness; this may well explain the vast varieties of religious experiences and the nature of how consciousness operates: the hearing of voices and visions; dreams, revelations and religious delusions; and narcotic-induced hallucinations.
We will make far better gods and goddesses in the future world than the primitive, anti-human and warlike gods which were the anthropomorphic projections of our primitive ancestors. Not only shall we make such gods; we shall become such gods; that is the transhumanist and posthuman objective. A new posthuman species shall appear to replace the current species of religious savages and their genocidal and subhuman war gods; this new posthuman species shall be the creators and destroyers of worlds. Resistance will be futile. Another world is inevitable.

  10. Personally, I think it much more likely that human greed and hubris will murder us all. The *mechanism* might be so-called AI systems, bio-tech, nano-machines, climate change, pollution, class warfare... Turing's Nightmares focuses on possible scenarios around "The Singularity" to inspire people to think about how AI interacts with our own mentality. http://tinyurl.com/hz6dg2d

  11. Consciousness is existence. The expectation of time or causality can only exist as a function of consciousness, memory, comparison, expectation, etc.

  12. I would argue that you and I are a combination of computer intelligence and natural intelligence. It seems to be an improvement, mostly. "Most people"? A third of the global population is under 16, Craig. "Most people" are barely educated in comparison to you and me. That is not arrogance (I am always correct about everything); that is just a fact. The more I contemplate computationalism, the more I realise that it is a "pure consciousness" theory; I just cannot figure out how consciousness could come into existence. Anyway, you are the "there is only consciousness" proponent, Craig. Where did consciousness come from? How did it come into existence?

  13. I agree that people *can* educate themselves more conveniently now (or mis-educate themselves) but I don’t know if that translates into most people being more educated.

  14. Well Craig, you can just substitute the two terms "computer intelligence" or "artificial intelligence" for the term "computer software," which in no way implies that computer intelligence is sentient (self-aware). Prior to the WWW revolution, in the early '90s, I had a collection of about 10,000 books (non-fiction); since then I have had the WWW library and have not bought a book for many years. I think that is an improvement on my research library.

  15. Steven Pinker is an amazing guy. I strongly recommend The Stuff of Thought, The Language Instinct and How the Mind Works to anyone interested in replicating human thought. He is a far more authoritative voice on the dangers of AI (or lack thereof) than Musk or Hawking.

  16. How can you tell if people are more educated because of the internet? I’m not seeing that. People had debates before the internet. I don’t consider ‘computer intelligence’ a meaningful term. It’s really a rebranding of ‘sophisticated mechanism’.

  17. We already have numerous forms of AI (i.e., computer intelligence), and it seems to have produced human beings who are vastly more educated. Imagine having a history, religion, science or world affairs debate without access to the vast library of information that is the Internet. One can understand why the primitive religionists fear the fruit of the tree of "knowledge," as it is, of course, a direct threat to their primitive religions and primitive gods. AI will not exist in isolation; what will be produced is a posthuman consciousness which is assisted by AI and which will be vastly more intelligent and educated than those who exist outside the computing revolution, unassisted by AI. It is such posthuman intelligences which the religionists seem to consider to be their greatest threat, and for good reason.

  18. It's not about an A.I. Apocalypse anymore; that has always been a false aspect of worrying about advances in automation. It is automation and remote control of warfare that is going to bring about an Apocalypse for humankind and the earth.

  19. It probably won't have emotions, biases, or feelings like humans. Sentiments and biases cannot be taught to a machine (at least for now). If somehow we found a way to upload our human feelings to a machine, it would be too late. I'm nearly sure that the first AIs will be based on pure logic, and that is the thing we cannot completely understand. A machine intelligence based on pure "logic" can only take the arguments it is given and use them to improve things in whatever way it thinks is best. For example, an AI could sacrifice and kill 30% of the human population to solve the world hunger problem. Or if we tell it to solve the energy problem of future cars, it could tell us to cut down all the trees in the world and turn them into energy. Humans and other things are not worthy to it. It can classify us, promote us, demote us, use us, SACRIFICE US, KILL US for its own goals. If we stand in its way, we would be just "obstacles." But those are not the things we wanted in our hearts, are they?

  20. I saw his little two-minute video over on Big Think. I think he's an idiot. As for the likelihood or non-likelihood of an A.I. apocalypse, I don't think I have a firm opinion, except that builders should be careful of things like the paperclip scenario.

  21. Apply A.I. directly to human brains, and write algorithms that serve to reduce our biases, rather than applying it all to new external super-beings which will be resented. We should direct our attention to combining progress in genetic engineering and A.I. for direct application to the human brain.

  22. Humans learn and then reinterpret that learning through bias filters. We build in error by the very nature of that construction. We need debate around ethics, law, philosophy, psychology, sociology and social integration way before we release A.I. on the world. As history tells us, those are the last to be explored. It’s far more interesting to most people for ‘Pinker to tinker’ with A.I. in this way rather than address the above.

  23. It's not about A.I. anymore; that has always been a false direction from which to consider advances in automation. It is automation of warfare that is going to bring the Apocalypse to humankind and the earth.

  24. I'm sorry, but I can't agree with Steven Pinker. He thinks that an AI can only be run to solve a problem and cannot create goals for itself, so he is saying that AIs won't be a problem. But if we say "solve the world's wars, conflicts and governance problems" to an Artificial Super Intelligence, what path would it take? The thing is, if an Artificial Super Intelligence is powerful enough to gain experience from the past (learning like humans), we can't know what DECISIONS it would make to solve the problems we gave it. That's why people are talking about Artificial Intelligence Chaos.

  25. Mmmm. More important than that (which might lead to such events or not, but may cause greater problems): A.I. is a product of our minds, quite right, and the control remains with us, quite possibly. Human behaviour is not a linear thing; it varies given the same conditions. More specifically, it varies by error. Our cognitive behaviour, by default, is full of bias and irrationality. So, we have machines that can behave in 'perfect' ways given the same set of circumstances? These machines can only be placed within a world where, even if given the order to press the nuclear button, men won't; where a driver decides to enter the opposite carriageway because she or he 'feels' they are a good driver (when asked, most believe they are better than 90% of other drivers; no algorithm can be that 'human'); where the driver flashes their headlights to let another go first, polite behaviour brought into the mix against any defined algorithm, and understood by human drivers to mean something more than a simple algorithm can tell A.I. about driving (as evidenced in the recent Google car incident). We cannot help but change the rules based upon emotion and non-obvious general baggage.
Why won't humans push that big red button when it actually comes to it? Why don't soldiers shoot the enemy standing directly in front of them when they are not directly threatened, even though they are in the theatre of war and are trained killers? That is not survival of the fittest; it is the product of much more. Can an algorithm be anxious or suffer depression? Or will it just be 'smart'? The problem for A.I. is fitting in; humans are not smart. History tells us that we organic life-forms react very badly and irrationally to those that are different. 'Nobody likes a smart robot' might be the new phrase. Humans have irrational and emotion-led input to all their decision making.
You might say that engineers will just make algorithms that adapt. They might try, but adapt how, to ensure integration? There is only one way for A.I. to adapt: to become as stupid and emotional as humans are. As we cannot remove bias and emotional reasoning from human beings, and understanding that we need equity of function for A.I. to survive the real world, we must design A.I. with the very same in-built biases, cognitive errors and emotional reasoning that humans already have. If we don't, A.I. will not be trusted, and then humans, not A.I., will present the threat of WW3 et al.
Humans develop; we change over time. Will A.I. go through the same changes we witness in a teenager, or be affected by hormonal cycles every month with all the resultant temporal changes? Will A.I. be able to assess risk as humans do (over-estimating and under-estimating depending upon internal bias and conformity)? Will A.I. go against its programming and conform, as we see in the research by Solomon Asch (1951)? Maybe not, but humans do. Will A.I. have in-group and out-group persuasion, or ego? Will it be capable of racism, or of breaking the law when it feels that it should? All these things it will need to do just to fit in. Being human in appearance, we will expect it to be human in behaviour. Without being programmed to make errors just as humans do, they will not be able to function. So robots won't be trusted, and fear takes the place of acceptance. Will an algorithm, however complex, deal with that? Practising being perfect in an imperfect world will be the very demise of A.I., and possibly lead humanity to darker places where human behaviour is a reaction to A.I.
That will be the problem, not A.I. itself. In conclusion, we need stupid, error-prone, emotional A.I. Why bother, when we have humans? Answer: those innate drives in engineers to push the boundaries and explore new horizons. Perhaps we should understand ourselves more first. I would leave A.I. on the shelf for 100 years until we figure out how humans work; that's enough of a headache as it is. There will be plenty of work for A.I. therapists in the near future.

  26. I haven't read the article, hence my comment may be wrong here… In the past, AI was attributed to learning / understanding schemas. Currently, AI is focused on providing logical solutions to problems in the form of algorithms. I MAY BE WRONG HERE… In my assumption, in the present-day scenario the creational part of AI is more attributed to logical functioning. What AI scientists are doing is trying to understand the life part, and how it can be decomposed into logical sequences… Only if they start understanding the life/soul part of living beings can they truly build a machine life form. But the reality is, if they truly understand the life/soul part, they will soon realize they don't require the machines; rather, they would be much more comfortable having living beings in a system of control.
