What is Going to Happen in the next 40 years? – Ben Goertzel

Dr Ben Goertzel is the Founder and CEO of SingularityNET and Chief Science Advisor for Hanson Robotics.

He is one of the world’s leading experts in Artificial General Intelligence (AGI), with decades of expertise in applying AI to practical problems like natural language processing, data mining, video gaming, robotics, national security and bioinformatics.

He was part of the Hanson team which developed the AI software for the humanoid robot Sophia, which can communicate with humans and display more than 50 facial expressions. Today he also serves as Chairman of the AGI Society, the Decentralized AI Alliance and the futurist nonprofit organisation Humanity+.

Source: London Real

Hey Watson, Do you Believe in God? Elizabeth Kiehner

Elizabeth Kiehner, who leads design operations for IBM Interactive Experience across the globe with multi-disciplinary expertise and an appetite for business growth, highlights the weight of our biases and values when it comes to programming artificial intelligence. What role do they play, and where does algorithmic accountability come in?

“When we think about artificial intelligence, we arrive at this crossroads where we often hear a quote like this, a very apocalyptic picture painted for us about what the future is going to be like.

I’m here to fly in the face of that. You might refer to this Stephen Hawking quote or similar quotes from Elon Musk or even from Bill Gates. I have a much more optimistic outlook that I want to talk about with you here today.

So when we think about AI, the term was first coined in 1956. That was 61 years ago, and since then we’ve done very little to educate the general public on the nuances of artificial intelligence, what it actually means, and what its impact is on our lives.

So the first thing that I want to draw a distinction around is the difference between Artificial Intelligence and Artificial General Intelligence, a newer acronym: AGI. If you remember one thing tonight, come away with AGI and tell your friends, because artificial general intelligence is what people typically think of when they imagine that robot buddy they’re going to have in the future, one with full autonomy and total consciousness that might be as intelligent as them, or even a little bit more intelligent than them.

We are so far away from that today. And it’s something that we may not even see in our lifetime. I think that we’re focusing our attention on the wrong thing. I want to argue that we should be scrutinizing people. It’s the people who are doing the programming of AI today that really have an impact on driving the future of where this technology goes.

And people are wonderful, but we all have so much baggage, don’t we? We have biases and we have prejudices, and we have all sorts of different ethical values, beliefs and religions that inform what we do every day. We bring all of that with us, quite frankly, to our work.

But luckily we have new rules for the cognitive era. At IBM we have established the role of an ethics adviser. We have field testing in our production process before anything gets deployed, and then explanation-based collateral systems.

So if you’re working with Watson and you ask Watson a question, you will not only get a very quick answer, you’ll find out the level of confidence Watson has in that answer and all of the rationale behind the decision-making process. So that’s really great, right?
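(Editorial aside: a minimal sketch of what an "answer plus confidence plus rationale" response could look like in code. The `QAResponse` and `Evidence` structures below are invented for illustration only and are not the actual Watson API.)

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structures for illustration only; this is NOT the IBM Watson SDK.

@dataclass
class Evidence:
    """One piece of supporting rationale behind an answer."""
    source: str
    passage: str
    weight: float  # relative contribution to the overall confidence

@dataclass
class QAResponse:
    """An 'explainable' answer: the answer itself, a confidence score,
    and the rationale behind the decision-making process."""
    question: str
    answer: str
    confidence: float  # 0.0 - 1.0
    rationale: List[Evidence] = field(default_factory=list)

def explain(response: QAResponse) -> str:
    """Render the answer with its confidence and its supporting evidence."""
    lines = [
        f"Q: {response.question}",
        f"A: {response.answer} (confidence: {response.confidence:.0%})",
    ]
    for ev in sorted(response.rationale, key=lambda e: e.weight, reverse=True):
        lines.append(f"  - {ev.source}: {ev.passage} (weight {ev.weight:.2f})")
    return "\n".join(lines)

print(explain(QAResponse(
    question="Which treatment has the best evidence for this patient?",
    answer="Treatment A",
    confidence=0.82,
    rationale=[Evidence("clinical-guidelines", "Recommended first-line therapy", 0.6),
               Evidence("journal-abstracts", "Two recent trials favour Treatment A", 0.4)],
)))
```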

But what about bias? What happens when we talk about bias? Who here in this room would purport to be biased? Raise your hand. That’s great. It’s everyone. It’s inevitable that humans are all biased. It’s part of what makes us human, quite frankly. These gut instincts and the feelings we sometimes get in the pit of our stomach give us a sense, you know, “I should do this thing instead of that.” All of these underlying fundamental drivers, which sometimes include religion, help inform the up to 35,000 decisions we make on average each and every day.

What about turning our attention now to religion? Who here in the room believes in God? Whether you do or not, you’ve certainly heard this quote before, we all have, and I would argue that we stand here today at a point where we’re creating artificial intelligence in our own image, and we need to be very, very careful about how we’re doing that and how we’re programming for that.

If we turn the tables the other way and ask Watson, “Hey Watson, do you believe in God?”, how would Watson respond? Well, that would actually depend on which instance of Watson you are talking to. Watson is trained in several different domains and has very deep domain knowledge in each of those areas. Are we talking to financial services Watson, or are we talking to legal Watson? We might be lucky enough to talk to the Watson who won Jeopardy.
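(Editorial aside: purely as an illustration of "which instance of Watson you are talking to," a question could be routed to a domain-specific instance. The domain names and keyword rules below are invented for this sketch and say nothing about how IBM actually deploys Watson.)

```python
# Hypothetical domain router; the domains and keyword rules are invented
# for this sketch and do not describe how IBM actually deploys Watson.
DOMAIN_KEYWORDS = {
    "financial_services": {"loan", "portfolio", "interest", "equity"},
    "legal": {"contract", "liability", "statute", "precedent"},
    "oncology": {"tumor", "chemotherapy", "oncology", "biopsy"},
}

def route_question(question: str, default: str = "general") -> str:
    """Pick the domain instance whose keywords best match the question."""
    words = {w.strip("?,.!").lower() for w in question.split()}
    best_domain, best_hits = default, 0
    for domain, keywords in DOMAIN_KEYWORDS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best_domain, best_hits = domain, hits
    return best_domain

print(route_question("What does the biopsy suggest about chemotherapy?"))  # oncology
```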

But today let’s pretend that we’re talking to the Watson that’s very skilled in understanding medicine and oncology. And when we think about that, we’re definitely treading in the territory of life and death, and with life and death the topic of religion inevitably comes up.

Just think back a few years to all of the controversy surrounding the Terri Schiavo case. And then pivot for a moment to other legal aspects, such as the fact that there are 13 countries in the world where being an atheist is illegal. Not only is it illegal, but it is punishable by death. You can see how important it is for us to consider what religion means when we train our cognitive systems. So we have these people that we call quality experts, and it’s pretty easy for us now to take structured and unstructured data and feed it into Watson.

But what we have to understand, and where the subjectivity comes in, is another acronym, URL: our ability to help Watson understand, reason and learn. That’s the area, again, where we need to focus our attention as to what we’re programming into these systems, as well as algorithmic accountability.

What’s happening in our data sets? Where is the data coming from? Is it coming from a safe source? Are our data sets biased? Ultimately, when we look at the values that we’re embedding into the system, we’re going to need to agree on what those are and where they’re coming from. Are they reflective of Judeo-Christian values? Are they reflective of secular values? Or are they reflective of values that we haven’t even defined yet?
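(Editorial aside: one concrete way to start answering "are our data sets biased?" is a simple audit of outcome rates across groups. This is a minimal sketch over a hypothetical labeled dataset with a `group` field and a binary `outcome`; it is not any particular IBM tool.)

```python
from collections import defaultdict
from typing import Iterable, Mapping

def outcome_rates_by_group(records: Iterable[Mapping]) -> dict:
    """Crude demographic-parity check: positive-outcome rate per group.
    Large gaps between groups suggest the data (or the process that
    produced it) is skewed. Illustrative only; not an IBM tool."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        positives[record["group"]] += int(record["outcome"])
    return {group: positives[group] / totals[group] for group in totals}

# Hypothetical toy data standing in for a real labeled training set.
data = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
print(outcome_rates_by_group(data))  # {'A': 0.666..., 'B': 0.333...}
```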

“That is what keeps me up at night. Despite that, I have a very, very positive feeling about the future. In just three years we will see 5 billion people connected to the Internet. That’s more than half of the world’s population. So all different people, from different races, religions and colours, working together online, be it with AI, be it with quantum computing on the cloud. It’ll be a miraculous time for us to think together. So what I ask of you here today, and the question on the table that I want to leave you with, is really this: the question is not whether machines think, but whether humans do.”

Elizabeth Kiehner is the Director of Global Design Services and Chief of Staff at IBM, with multi-disciplinary experience in user experience, design, and technology.

Elizabeth Kiehner is a design thinking evangelist who believes in harnessing the power of creativity to solve global business challenges. By unearthing human insights, Liz co-creates customer-centric experiences that have the power to transform the enterprise.

“She’s the co-creator of GM’s new OnStar Go offering—what the car company is calling the “first cognitive mobility platform” that uses IBM’s Watson learning supercomputer to plug drivers into connected services and pick up skills based on people’s patterns. For instance, it could pre-order a coffee for pickup from a drive-in window, or use listening habits to create a personalized radio station,” according to TechCrunch.

Her creativity and passion have impacted some of the world’s most recognized brands with regular engagements and workshops with the C-suite. She has recently co-created with General Motors the world’s first cognitive mobility platform, OnStar Go.

For two decades Elizabeth has led creative, design, and technology teams to produce groundbreaking ideas for organizations including Thornberg & Forester (co-founder), Havas, Suissa Miller, Trollback & Company and Freestyle Collective. Her portfolio includes campaigns and game-changing digital, mobile and wearable platforms for brands such as Google, Microsoft, Viacom, American Express, Apple, Turner, Fidelity, Schwab, GM, GE and Khan Academy and many more.

The First Truly Intelligent Machine Will Be Humanity’s Last Invention

“The mathematician I.J. Good back in the mid-1960s introduced what he called the intelligence explosion, which in essence was the same as the concept that Vernor Vinge later introduced and Ray Kurzweil adopted and called the technological singularity. What I.J. Good said was that the first intelligent machine will be the last invention that humanity needs to make. Now in the 1960s the difference between narrow AI and AGI wasn’t that clear, and I.J. Good wasn’t thinking about a system like AlphaGo that could beat Go but couldn’t walk down the street or add five plus five. In the modern vernacular what we can say is that the first human level AGI, the first human level artificial general intelligence, will be the last invention that humanity needs to make.”

“And the reason for that is once you get a human level AGI you can teach this human level AGI math and programming and AI theory and cognitive science and neuroscience. This human level AGI can then reprogram itself and it can modify its own mind and it can make itself into a yet smarter machine. It can make 10,000 copies of itself, some of which are much more intelligent than the original. And once the first human level AGI has created the second one which is smarter than itself, well, that second one will be even better at AI programming and hardware design and cognitive science and so forth and will be able to create the third human level AGI which by now will be well beyond human level.”

“So it seems that it’s going to be a laborious path to get to the first human level AGI. I don’t think it will take centuries from now, but it may be decades rather than years. On the other hand, once you get to a human level AGI I think you may see what some futurists have called a hard takeoff, where you see the intelligence increase literally day by day as the AI system rewrites its own mind. And this is a bit frightening, but it’s also incredibly exciting. Does that mean humans will not ever make any more inventions? Of course it doesn’t. But what it means is that if we do things right we won’t need to. If things come out the way that I hope they will, what will happen is we’ll have these superhuman minds and largely they’ll be doing their own things. They will also offer us the possibility to upload or upgrade ourselves and join them in realms of experience that we cannot now conceive of in our current human forms. Or these superhuman AGIs may help humans to maintain a traditional human-like existence.”
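(Editorial aside: the compounding dynamic described above, where each generation of AGI is better at building the next, can be caricatured as a toy growth model in which the improvement factor itself scales with current capability. The function and numbers below are invented for illustration and are not from Goertzel's work.)

```python
def toy_takeoff(initial_capability: float = 1.0, steps: int = 12, gain: float = 0.1) -> list:
    """Toy model of recursive self-improvement: each generation's improvement
    factor scales with its current capability, so growth compounds faster than
    a fixed exponential. Purely illustrative; the numbers mean nothing."""
    capability = initial_capability
    history = [capability]
    for _ in range(steps):
        capability *= 1.0 + gain * capability  # smarter systems improve themselves faster
        history.append(capability)
    return history

for generation, level in enumerate(toy_takeoff()):
    print(f"generation {generation:2d}: capability {level:8.2f}")
```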


“I mean if you have a million times human IQ and you can reconfigure elementary particles into new forms of matter at will, then supplying a few billion humans with food and water and video games, virtual reality headsets and national parks and flying cars and whatnot would be trivial for these superhuman minds. So if they’re well disposed toward us, people who choose to remain in human form could have a simply much better quality of life than we have now. You don’t have to work for a living. You can devote your time to social, emotional, spiritual, intellectual and creative pursuits rather than laboriously doing things you might rather not do just in order to get food and shelter and an internet connection. So I think there are tremendous positive possibilities here, and there’s also a lot of uncertainty, and there’s a lot of work to get to the point where intelligence explodes in the sense of a hard takeoff. But I do think it’s reasonably probable we can get there in my lifetime, which is rather exciting.”

Who is Ben Goertzel?

Ben Goertzel is an American author and researcher in the field of artificial intelligence. He is Chief Scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics; Chairman of privately held AI software company Novamente LLC; Chairman of the Artificial General Intelligence Society and the OpenCog Foundation; Vice Chairman of futurist nonprofit Humanity+; Scientific Advisor of biopharma firm Genescient Corp.; Advisor to Singularity University; Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and General Chair of the Artificial General Intelligence conference series. He was previously Director of Research of the Machine Intelligence Research Institute (formerly the Singularity Institute).


His research work encompasses artificial general intelligence, natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds and gaming and other areas. He has published a dozen scientific books, more than 100 technical papers, and numerous journalistic articles.

He actively promotes the OpenCog project that he co-founded, which aims to build an open source artificial general intelligence engine. He is focused on creating benevolent superhuman artificial general intelligence and on applying AI to areas like financial prediction, bioinformatics, robotics and gaming. (Wikipedia)
