Power & Limits of Deep Learning – Yann LeCun

At a workshop on AI and the Future of Work earlier this month, Yann LeCun, Director of AI Research at Facebook and Founding Director of the NYU Center for Data Science, talked about “power and the limits of deep learning.”

LeCun, who pioneered the convolutional neural networks at the heart of many recent advances in AI, was both enthusiastic about the progress the field has made in recent years and realistic about what such systems can and cannot do.

There have been multiple waves of AI, LeCun said, and the current wave, driven by deep learning, has focused on perception, with the biggest examples being applications such as medical imaging and self-driving cars. Nearly all of these applications employ supervised learning, and most use convolutional neural networks, which LeCun first described in 1989 and which were first deployed for character recognition in ATMs in 1995. LeCun said the patent on such networks expired in 2007.
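
To make the idea concrete, here is a minimal convolutional network sketch in PyTorch, sized for 28x28 character images like those in early character-reading systems. The library, the layer sizes, and the TinyConvNet name are illustrative assumptions, not LeCun's 1989 design:

```python
# Minimal convolutional network for 28x28 character images
# (illustrative PyTorch sketch; not LeCun's original architecture).
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5),   # learn small local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                  # downsample; tolerate small shifts
            nn.Conv2d(8, 16, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyConvNet()
logits = model(torch.randn(1, 1, 28, 28))  # one fake grayscale image
print(logits.shape)  # torch.Size([1, 10])
```

The key design choice is the one LeCun pioneered: small filters slide across the image with shared weights, so the network tolerates shifts in the input and needs far fewer parameters than a fully connected net.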

Big data sets with large sample sizes, together with tremendous increases in computing power (aided by Geoffrey Hinton's work on using GPUs for image recognition), have driven the biggest changes of recent years. Even for LeCun, the advances in image recognition have been "nothing less than astonishing." But though perception "really works," what is still missing is reasoning.

LeCun talked about three different kinds of learning approaches and the limitations of each. Reinforcement learning requires a huge number of samples. It is great for games, where the system can run millions of trials and steadily improve, but it is hard to use in the real world: you do not want to drive a car off a cliff 50 million times, and unlike a game, the real world runs in real time.
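
A toy sketch of tabular Q-learning (a standard reinforcement learning algorithm; the corridor environment and all constants here are invented for illustration) shows the sample hunger LeCun described: even a ten-state problem takes thousands of simulated episodes to solve.

```python
# Tabular Q-learning on a toy 10-state corridor (pure-Python sketch).
import random

N_STATES, GOAL = 10, 9
q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]; 0 = left, 1 = right

for episode in range(5000):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < 0.1:
            action = random.randrange(2)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == GOAL else 0.0
        # One-step Q-learning update toward reward plus discounted future value.
        q[state][action] += 0.1 * (reward + 0.95 * max(q[next_state]) - q[state][action])
        state = next_state

print("learned policy:", ["left" if s_q[0] > s_q[1] else "right" for s_q in q])
```

In a game, 5,000 episodes of trial and error are free; putting a real car through 5,000 crashes is not, which is LeCun's point.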

Supervised learning, which accounts for most of what we see today, requires a medium amount of feedback and is working well. But it has issues. Such systems reflect biases in their training data, though LeCun said he is optimistic this problem can be overcome and believes it is easier to remove biases from machines than from people. Such systems are also hard to verify for reliability, and the decisions based on their outputs are difficult to explain; LeCun cited loan applications as an example.
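
In code, the paradigm is simply "learn from labeled examples." A minimal sketch using scikit-learn follows; the synthetic data set stands in for something like historical loan records, and nothing here is a real credit model:

```python
# Supervised learning in a nutshell: the system is shown inputs paired with
# the correct answer and adjusts itself to reproduce those answers.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy labeled data: 1,000 examples, 5 features, one binary label each.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)        # learn from labels
print("held-out accuracy:", model.score(X_test, y_test))  # check on unseen data

# The model can only reflect its training data: if the historical labels
# encode a bias, the learned decision rule reproduces that bias.
```

The final comment is the crux of the bias issue LeCun raised: the learning step is mechanical, so whatever regularities sit in the labels, fair or not, end up in the model.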

Unsupervised or predictive learning, currently being researched for tasks such as predicting future frames in a video, offers a large amount of feedback per sample, since the training signal is the data itself. It involves predicting the past, present, or future from whatever information is available, in other words, the ability to fill in the blanks, which LeCun said is effectively what we call common sense. Babies can do this, he noted, but getting machines to do it has been very difficult; researchers are working on techniques such as generative adversarial networks (GANs) for making predictions under uncertainty. We are far from a complete solution, he said.
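
As a toy illustration of the adversarial idea, here is a minimal GAN in PyTorch that learns to imitate a one-dimensional Gaussian. Real video-prediction GANs are far larger; every network size and learning rate here is an arbitrary assumption:

```python
# Minimal GAN sketch: a generator learns to produce samples resembling
# N(3, 1) while a discriminator learns to tell real samples from fakes.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0      # samples from the "data" distribution
    fake = G(torch.randn(64, 8))         # generator's attempts

    # Discriminator: push real toward 1, fake toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into calling fakes real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean().item())  # approaches 3.0
```

The generator never sees labels; its only training signal is whether the discriminator can tell its outputs from real data. That is what makes the approach attractive for prediction under uncertainty, where many different futures are plausible.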

LeCun likened the three types of learning to parts of a cake: reinforcement learning is the cherry on top, supervised learning the icing, and predictive learning the cake itself.

LeCun predicted AI will change how things are valued, with goods built by robots costing less and authentic human experiences costing more, and said this may mean there is “a bright future for jazz musicians and artisans.”

Overall, LeCun said AI is a General Purpose Technology (GPT) like the steam engine, electricity, or the computer. As such, it will affect many areas of the economy, but it will take 10 or 20 years before we see an effect on productivity. LeCun said AI will lead to job replacement, but noted that technology deployment is limited by how fast workers can train for it.

As for a “true AI revolution,” LeCun said this won’t happen until machines acquire common sense, and figuring out the principles behind it may take two, five, twenty, or more years; after that, it will take further years to develop practical AI technology based on those principles. It took twenty years, he noted, for convolutional nets to become important. And all of that assumes the principles are simple; things become much more complicated if “intelligence is a kludge.”

https://www.pcmag.com/article/357463/yann-lecun-discusses-the-power-limits-of-deep-learning

Yann LeCun is Director of AI Research at Facebook and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University, affiliated with the NYU Center for Data Science, the Courant Institute of Mathematical Sciences, the Center for Neural Science, and the Electrical and Computer Engineering Department.
