Best of 2019: Bengio and Intel on Why AI is Not Magic

Yoshua Bengio speaking with Karen Hao at the EmTech MIT conference, September 18, 2019

[September 20, 2019]

Asked about the biggest misconception about AI, Yoshua Bengio answered without hesitation: “AI is not magic.” Winner of the 2018 Turing Award (with the other “fathers of the deep learning revolution,” Geoffrey Hinton and Yann LeCun), Bengio spoke at the EmTech MIT event about the “amazing progress in AI” while stressing the importance of understanding its current limitations and recognizing that “we are still very far from human-level AI in many ways.”

Deep learning has moved us a step closer to human-level AI by allowing machines to acquire intuitive knowledge, according to Bengio. Classical AI was missing this “learning component,” and deep learning develops intuitive knowledge “by acquiring that knowledge from data, from interacting with the environment, from learning. That’s why current AI is working so much better than the old AI.”

At the same time, classical AI aimed to allow computers to do what humans do—reasoning, or combining ideas “in our mind in a very explicit, conscious way,” concepts that we can explain to other people. “Although the goals of a lot of things I’m doing now are similar to the classical AI goals, allowing machines to reason, the solutions will be very different,” says Bengio. Humans use very few steps when they reason, and Bengio contends we need to address the gap between the mind’s two modes of thought: “System 1” (instinctive and emotional) and “System 2” (deliberative and logical). This is “something we really have to address to approach human-level AI,” says Bengio.

To get there, Bengio and other AI researchers are “making baby steps” in some new directions, but “much more needs to be done.” These new directions include tighter integration of deep learning and reinforcement learning; teaching machines meta-learning, or “learning to learn,” so that they generalize better; and understanding the causal relations embodied in the data, going beyond mere correlations.

Bengio is confident that AI research will overcome these challenges and will not only achieve human-level AI but also manage to develop human-like machines. “If we don’t destroy ourselves before then,” says Bengio, “I believe there is no reason we couldn’t build machines that could express emotions. I don’t think that emotions or even consciousness are out of reach of machines in the future. We still have a lot to go… [to] understand them better scientifically in humans but also in ways that are sufficiently formal so we can train machines to have these kinds of properties.”

At the MIT event, I talked to two Intel VPs—Gadi Singer and Carey Kloss—who are very familiar with what companies do today with the current form of AI, deep learning, with all its limitations. “Enterprises are at a stage now where they have figured out what deep learning means to them and they are going to apply it shortly,” says Singer. “Cloud Service Providers deploy it at scale already. Enterprise customers are still learning how it can affect them,” adds Kloss.

Many of these companies have for years been using machine learning, predictive analytics, and other sophisticated data-analysis techniques as the basis for improving decision-making, customer relations, and internal processes. But now they are figuring out what deep learning, the new generation of machine learning, can do for their business. Singer has developed what he calls the “four superpowers framework” to explain, from a practical perspective, what’s new about deep learning: the four things it does exceptionally well.

Deep learning is very good at spotting patterns. It first demonstrated this capability with its superior performance in analyzing images for object identification, but this exceptional capability can be deployed to other types of data. While traditional machine learning techniques have been used for years in fraud detection, for example, deep learning is very powerful in “identifying remote instances of a pattern,” says Singer.
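As a toy illustration of this kind of pattern spotting (not from the talk; the data and numbers are invented), a tiny autoencoder trained only on “normal” records assigns a high reconstruction error to a record that breaks the learned pattern, which is the basic shape of many neural fraud detectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "normal" transactions: two strongly correlated features
# (say, purchase amount and a related behavioral signal).
x1 = rng.normal(size=500)
X = np.column_stack([x1, x1 + 0.1 * rng.normal(size=500)])

# Tiny autoencoder, 2 -> 1 -> 2, trained by full-batch gradient descent.
W1 = rng.normal(scale=0.1, size=(2, 1))  # encoder weights
W2 = rng.normal(scale=0.1, size=(1, 2))  # decoder weights
lr = 0.1
for _ in range(3000):
    Z = X @ W1                           # compress to 1 dimension
    err = Z @ W2 - X                     # reconstruction error
    gW2 = Z.T @ err / len(X)             # gradient for the decoder
    gW1 = X.T @ (err @ W2.T) / len(X)    # gradient for the encoder
    W1 -= lr * gW1
    W2 -= lr * gW2

def score(p):
    """Reconstruction error: high means 'does not fit the learned pattern'."""
    p = np.atleast_2d(p)
    return np.sum((p @ W1 @ W2 - p) ** 2, axis=1)

normal_scores = score(X)
anomaly_score = score([2.0, -2.0])[0]    # a point that breaks the correlation
```

The anomalous point has feature values that are individually unremarkable; what flags it is that their combination violates the pattern the model learned, which is the sense in which such detectors catch “remote instances of a pattern” that simple thresholds miss.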

The second “superpower” is being a universal approximator. Deep learning is very good at mimicking very complex computations with great accuracy and at a fraction of the power and time of traditional computation methods. “Whatever you can accelerate by 10,000x might change your business,” says Singer.
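The universal-approximator idea can be sketched in a few lines of NumPy (an assumption-laden toy, not Intel’s method): a one-hidden-layer tanh network is trained to mimic sin(x), which here stands in for an expensive computation such as a physics simulation or pricing model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target to mimic: sin(x) stands in for a costly computation.
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of tanh units: by the universal approximation
# theorem, enough (in principle) to fit any continuous function.
H = 32
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(10_000):                  # full-batch gradient descent
    h = np.tanh(x @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                   # network output
    err = pred - y                       # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h**2)     # backprop through tanh
    gW1 = x.T @ gh / len(x)
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Final fit quality on the training grid.
pred = np.tanh(x @ W1 + b1) @ W2 + b2
mae = np.abs(pred - y).mean()            # mean absolute fitting error
```

The practical payoff Singer describes comes when the trained network, a handful of matrix multiplications, is orders of magnitude cheaper to evaluate than the computation it replaces.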

Sequence-to-sequence mapping is the third exceptional deep learning capability. An example is real-time language translation. Previously, each word was translated in isolation, but deep learning brings the “depth of context,” adding a time dimension by taking the entire sequence of words into account.

Last but not least is generation based on similarities. Once a deep learning model learns what a realistic output looks like, it can generate a similar one. Generating images from text is one example. Another is WaveNet, Google’s speech-generation system, which mimics the human voice. Yet another is medical-records anonymization, which allows privacy-preserving sharing, research, and analysis of patient records.

EmTech 2019 also featured MIT Technology Review’s recent selection of “35 innovators under 35.” A few of these innovators got on the list because they developed and demonstrated a number of practical and successful applications of deep learning. These included Liang Xu and his AI platform that helps cities across China improve public health, reduce crime, and increase efficiency in public management; Wojciech Zaremba, using deep learning and reinforcement learning to train a robot hand to teach itself to pick up a toy block in different environments; and Archana Venkataraman who developed a deep learning model that can detect epileptic seizures and, as a result, limit invasive monitoring and improve surgical outcomes.

There is no doubt that Bengio, Hinton, and LeCun have created in deep learning a tool with tremendous positive social and economic value, today and in the future. But they—and other AI researchers—insist on the ultimate goal being the creation of “human-level intelligence” or even human-like machines. Why do these experts in machine learning refuse to learn from history, from seven decades of predictions about the imminent arrival of human-level intelligence that led only to various “AI winters” and a host of misconceptions, including unfounded fear and anxiety about AI? And why aren’t goals such as curing diseases, eliminating hunger, and making humans more productive and content sufficient to serve as their motivating end goals?
