AI by the Numbers: Is the Healthcare Industry Ahead of Other Industries in AI Adoption?

Recent surveys, studies, forecasts and other quantitative assessments of the progress of AI highlight the increasing presence of AI in the healthcare industry, the assistance AI may provide in the future to workers’ cognitive tasks, and the continuing acceleration in data production and dissemination.

Healthcare AI startups

Source: CB Insights

Read more here

Posted in Uncategorized

Best of 2019: Bengio and Intel on Why AI is Not Magic

Yoshua Bengio speaking with Karen Hao at the EmTech MIT conference, September 18, 2019


[September 20, 2019]

Asked about the biggest misconception about AI, Yoshua Bengio answered without hesitation: “AI is not magic.” Winner of the 2018 Turing Award (with the other “fathers of the deep learning revolution,” Geoffrey Hinton and Yann LeCun), Bengio spoke at the EmTech MIT event about the “amazing progress in AI” while stressing the importance of understanding its current limitations and recognizing that “we are still very far from human-level AI in many ways.”

Deep learning has moved us a step closer to human-level AI by allowing machines to acquire intuitive knowledge, according to Bengio. Classical AI was missing this “learning component,” and deep learning develops intuitive knowledge “by acquiring that knowledge from data, from interacting with the environment, from learning. That’s why current AI is working so much better than the old AI.”

At the same time, classical AI aimed to allow computers to do what humans do—reasoning, or combining ideas “in our mind in a very explicit, conscious way,” concepts that we can explain to other people. “Although the goals of a lot of things I’m doing now are similar to the classical AI goals, allowing machines to reason, the solutions will be very different,” says Bengio. Humans use very few steps when they reason, and Bengio contends we need to address the gap between our mind’s two modes of thought: “System 1” (instinctive and emotional) and “System 2” (deliberative and logical). This is “something we really have to address to approach human-level AI,” says Bengio.

To get there, Bengio and other AI researchers are “making baby steps” in some new directions, but “much more needs to be done.” These new directions include tighter integration of deep learning and reinforcement learning; teaching machines meta-learning, or “learning to learn,” so they generalize better; and a better understanding of the causal relations embodied in the data, going beyond correlations.

Bengio is confident that AI research will overcome these challenges and will achieve not only human-level AI but will also manage to develop human-like machines. “If we don’t destroy ourselves before then,” says Bengio, “I believe there is no reason we couldn’t build machines that could express emotions. I don’t think that emotions or even consciousness are out of reach of machines in the future. We still have a lot to go… [to] understand them better scientifically in humans but also in ways that are sufficiently formal so we can train machines to have these kinds of properties.”

At the MIT event, I talked to two Intel VPs—Gadi Singer and Carey Kloss—who are very familiar with what companies do today with the current form of AI, deep learning, with all its limitations. “Enterprises are at a stage now where they have figured out what deep learning means to them and they are going to apply it shortly,” says Singer.  “Cloud Service Providers deploy it at scale already. Enterprise customers are still learning how it can affect them,” adds Kloss.

Many of these companies have been using machine learning, predictive analytics, and other sophisticated techniques for years to analyze data as the basis for improving decision-making, customer relations, and internal processes. But now they are figuring out what deep learning, the new generation of machine learning, can do for their business. Singer has developed what he calls the “four superpowers framework” as a way of explaining what’s new about deep learning from a practical perspective, the four things deep learning does exceptionally well.

Deep learning is very good at spotting patterns. It first demonstrated this capability with its superior performance in analyzing images for object identification, but this exceptional capability can be deployed to other types of data. While traditional machine learning techniques have been used for years in fraud detection, for example, deep learning is very powerful in “identifying remote instances of a pattern,” says Singer.
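
To make this concrete, here is a minimal sketch of pattern-based anomaly detection in Python (using PyTorch; the data, architecture, and threshold are synthetic and purely illustrative, not Intel's implementation). An autoencoder trained only on "normal" transactions reconstructs them well; remote instances reconstruct poorly and can be flagged:

```python
# A minimal sketch: flagging "remote instances of a pattern" by
# reconstruction error. Data is synthetic; a real fraud model would
# use engineered transaction features.
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(1000, 8)              # typical transactions
outliers = torch.randn(20, 8) * 4 + 6      # a few remote instances

model = nn.Sequential(
    nn.Linear(8, 3), nn.ReLU(),            # compress to a bottleneck
    nn.Linear(3, 8),                       # reconstruct the input
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(500):                       # train on normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    data = torch.cat([normal, outliers])
    err = ((model(data) - data) ** 2).mean(dim=1)

# Anything reconstructing much worse than normal data is suspicious.
threshold = err[:1000].mean() + 3 * err[:1000].std()
print("flagged:", (err > threshold).sum().item(), "of", len(err))
```

The same reconstruction-error recipe extends to other data types where remote instances of a pattern are what matter.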

The second “superpower” is being a universal approximator. Deep learning is very good at mimicking very complex computations with great accuracy and at a fraction of the power and time of traditional computation methods. “Whatever you can accelerate by 10,000x might change your business,” says Singer.
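
As a minimal illustration of the universal-approximator idea (Python with PyTorch; the "expensive" function below is a stand-in I invented for a costly simulation), a small network is trained to mimic the computation and then used as a cheap surrogate:

```python
# A minimal sketch of training a surrogate network to approximate an
# expensive computation. The function and speedup are illustrative only.
import torch
import torch.nn as nn

def expensive(x):                      # stand-in for a costly simulation
    return torch.sin(3 * x) * torch.exp(-x ** 2)

x = torch.linspace(-2, 2, 2000).unsqueeze(1)
y = expensive(x)

surrogate = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(x), y)
    loss.backward()
    opt.step()

test = torch.tensor([[0.5]])
# After training, the two values should be similar.
print(expensive(test).item(), surrogate(test).item())
```

If the surrogate is accurate enough, each call replaces the expensive computation with a few matrix multiplications, which is where speedups of the kind Singer cites would come from.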

Sequence to sequence mapping is the third exceptional deep learning capability. An example is real-time language translation. Previously, each word was translated in isolation; deep learning brings the “depth of context,” adding a time dimension by taking the entire sequence of words into account.
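
For a sense of what this looks like in code, here is a minimal sketch using a pretrained sequence-to-sequence model (this assumes the Hugging Face transformers package and a network connection to download weights; the t5-small model choice is mine and purely illustrative):

```python
# A minimal sketch of sequence-to-sequence mapping: the model encodes the
# entire input sentence before producing any output word, rather than
# translating word by word in isolation.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")

# Surrounding words supply the "depth of context" described above.
print(translator("I deposited cash at the bank.")[0]["translation_text"])
print(translator("We sat on the bank of the river.")[0]["translation_text"])
```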

Last but not least is generation based on similarities. Once a deep learning model learns what a realistic output looks like, it can generate a similar one. Generating images from text is an example. Another is WaveNet, a speech generation application from Google that mimics the human voice. Yet another example is medical records anonymization, allowing for privacy-preserving sharing, research, and analysis of patient records.

EmTech 2019 also featured MIT Technology Review’s recent selection of “35 innovators under 35.” A few of these innovators got on the list because they developed and demonstrated a number of practical and successful applications of deep learning. These included Liang Xu and his AI platform that helps cities across China improve public health, reduce crime, and increase efficiency in public management; Wojciech Zaremba, using deep learning and reinforcement learning to train a robot hand to teach itself to pick up a toy block in different environments; and Archana Venkataraman who developed a deep learning model that can detect epileptic seizures and, as a result, limit invasive monitoring and improve surgical outcomes.

There is no doubt that Bengio and Hinton and LeCun have created in deep learning a tool with tremendous positive social and economic value, today and in the future. But they—and other AI researchers—insist on the ultimate goal being the creation of “human-level intelligence” or even human-like machines. Why do these experts in machine learning refuse to learn from history, from seven decades of predictions about the imminent arrival of human-level intelligence that led only to various “AI winters” and a lot of misconceptions, including unfounded fear and anxiety about AI? And why aren’t goals such as curing diseases, eliminating hunger, and making humans more productive and content sufficient to serve as motivating end-goals?

Originally published on Forbes.com

Posted in AI, deep learning, Intel

The Growing Export Business of Israel

It was a very good decade for Startup Nation. Since 2010, capital raising by Israeli tech companies has grown by 400% and the number of deals by 64%, reaching $8.3 billion in 522 deals last year. From 2010 to 2019, the number of exits has increased by 50% and exit value by over 800%, for a total value of $111.29 billion.

Read more here

Posted in AI, cybersecurity, Israel, JVP, startups

Deep Tech Landscape in Israel: 150 Startups

“Deep Tech” describes forward-thinking technologies based on profound scientific breakthroughs or engineering novelties.

For decades, the Israeli ecosystem has built expertise in Frontier Technologies such as Semiconductors, Quantum Computing, Sensors, Space 2.0, Robotics, Networking & Wireless, Advanced Materials & Nanotechnology, Next Gen Healthcare, AI Platforms, IoT, and AR/VR.

Grove Ventures, together with IVC Research Center, conducted comprehensive research mapping more than 150 local startups that operate in the Deep Tech ecosystem.

Source: Mapping the Israeli Deep Tech Ecosystem

Posted in AI, AR/VR, Internet of Things, Israel, Quantum Computing, Robotics, Semiconductors, startups

AI by the Numbers: 35% Of Workers Worldwide Expect Their Job Will Be Automated

Infographic: Automation Could Eliminate 73 Million U.S. Jobs By 2030 (Source: Statista)

Recent surveys, studies, forecasts and other quantitative assessments of the progress of AI highlight anxiety about AI eliminating jobs, the competition for AI talent, questions about employees’ AI preparedness, and concerns over data quality, literacy, privacy, and security.

Read more here

Posted in AI, Automation, Stats

What Happened to AI in 2019?

After years in the (mostly Canadian) wilderness followed by seven years of plenty, Deep Learning was officially recognized as the “dominant” AI paradigm and “a critical component of computing,” with its three key proponents, Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, receiving the Turing Award in March 2019.

Read more here

Posted in AI, AI Enterprise, deep learning

Best of 2019: The Misleading Language of Artificial Intelligence


[September 27, 2019]

Language is imprecise, vague, context-specific, sentence-structure-dependent, full of fifty shades of gray (or grey). It’s what we use to describe progress in artificial intelligence, in improving computers’ performance in tasks such as accurately identifying images or translating between languages or answering questions. Unfortunately, vague or misleading terms can lead to inaccurate and misleading news.

Earlier this month we learned from the New York Times that “…in just the past several months researchers have made significant progress in developing A.I. that can understand languages and mimic the logic and decision-making of humans.” The NYT article reported on “A Breakthrough for A.I. Technology” with the release of a paper by a team of researchers at the Allen Institute for Artificial Intelligence (AI2), summarizing their work on Aristo, a question answering system. While three years ago the best AI system scored 59.3% on an eighth-grade science exam challenge, Aristo recently answered correctly more than 90% of the non-diagram, multiple-choice questions on an eighth-grade science exam and exceeded 83% on a 12th-grade science exam.

No doubt this is remarkable and rapid progress for the AI sub-field of Natural Language Understanding (NLU) or, more specifically, as the AI2 paper states, “machine understanding of textbooks…a grand AI challenge that dates back to the ’70s.” But does Aristo really “read,” “understand,” and “reason,” as one may conclude from the language used in the paper and similar NLU papers?

“If I could go back to 1956 [when the field of AI was launched], I would choose a different terminology,” says Oren Etzioni, CEO of AI2. Calling this anthropomorphizing an “unfortunate history,” Etzioni clearly states his position about the language of AI researchers:

“When we use these human terms in the context of machines that’s a huge potential for misunderstanding. The fact of the matter is that currently machines don’t understand, they don’t learn, they aren’t intelligent—in the human sense… I think we are creating savants, really really good at some narrow task, whether it’s NLP or playing GO, but that doesn’t mean they understand much of anything.”

Still, “human terms,” misleading or not, are what we have to describe what AI programs do, and Etzioni argues that “if you look at some of the questions that a human would have to reason his or her way to answer, you start to see that these techniques are doing some kind of rudimentary form of reasoning, a surprising amount of rudimentary reasoning.”

The AI2 paper elaborates further on the question “to what extent is Aristo reasoning to answer questions?” While stating that currently “we do not have a sufficiently fine-grained notion of reasoning to answer this question precisely,” it points to a recent shift in the understanding by AI researchers of “reasoning” with the advent of deep learning and “machines performing challenging tasks using neural architectures rather than explicit representation languages.”

Similar to what has happened recently in other AI sub-fields, question answering has gotten a remarkable boost with deep learning, applying statistical analysis to very large data sets, finding hidden correlations and patterns, and leading to surprising results, described sometimes in misleading terms.

What current AI technology does is “sophisticated pattern-matching, not what I would call ‘understanding’ or ‘reasoning,’” says TJ Hazen, Senior Principal Research Manager at Microsoft Research.* Deep learning techniques, says Hazen, “can learn really sophisticated things from examples. They do an incredible job of learning specific tasks, but they really don’t understand what they’re learning.”

What deep learning and its hierarchical layers of complex calculations, plus lots of data and compute power, brought to NLU (and other AI specialties) is an unprecedented level of efficiency in designing models that “understand” the task at hand (e.g., answering a specific question). Machine learning used to require deep domain knowledge and a deep investment of time and effort in coming up with what its practitioners call “features,” the key elements of the model (called “variables” in traditional statistical analysis—professional jargon being yet another challenge for both human and machine language understanding). By adding more layers (steps) to the learning process and using vast quantities of data, deep learning has taken on more of the model design work.
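
To make the “features” point concrete, here is a minimal sketch (Python with scikit-learn; the XOR-style data is synthetic and mine). A linear model fails on the raw inputs, works once a human hand-engineers the right feature, and a small neural network learns an equivalent feature on its own:

```python
# A minimal sketch contrasting hand-engineered features with learned ones.
# The label depends on the *product* of the inputs, so no single raw input
# is informative on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)        # XOR-style pattern

# Hand-engineered feature: a human supplies the product x1*x2.
hand_feature = X[:, 0:1] * X[:, 1:2]
hand = LogisticRegression().fit(hand_feature, y)

# The hidden layer discovers a comparable feature from raw inputs.
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)

print("linear on raw inputs:  ", LogisticRegression().fit(X, y).score(X, y))
print("linear on hand feature:", hand.score(hand_feature, y))
print("MLP on raw inputs:     ", mlp.score(X, y))
```

The hidden layer does, in miniature, the feature design work that practitioners once did by hand.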

“Deep learning figures out what are the most salient features,” says Hazen. “But it is also constrained by the quality and sophistication of the data. If you only give it simple examples, it’s only going to learn simple strategies.”

AI researchers, at Microsoft, AI2, and other research centers, are aware of deep learning’s limitations when compared with human intelligence, and most of their current work, while keeping within the deep learning paradigm, is aimed at addressing these limitations. “In the next year or two,” says Etzioni, “we are going to see more systems that work not just on one dataset or benchmark but on ten or twenty and they are able to learn from one and transfer to another, simultaneously.”

Jingjing Liu, Principal Research Manager at Microsoft Research, also highlights the challenge of “transfer learning” or “domain adaptation,” warning about the hype regarding specific AI programs’ “human parity.” Unlike humans, who transfer knowledge acquired in performing one task to a new one, a deep learning model “might perform poorly on a new unseen dataset or it may require a lot of additional labeled data in a new domain to perform well,” says Liu. “That’s why we’re looking into unsupervised domain adaptation, aiming to generalize pre-trained models from a source domain to a new target domain with minimum data.”
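
For the mechanics of the simplest transfer-learning recipe, here is a minimal supervised fine-tuning sketch in PyTorch (the frozen “backbone” is a tiny stand-in I made up for a large pretrained model; the unsupervised domain adaptation Liu describes goes further, using minimal or no labeled target data):

```python
# A minimal sketch: keep a pretrained backbone frozen and fit only a small
# task head on the limited labeled data available in the new domain.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 16), nn.ReLU())  # stand-in "pretrained"
head = nn.Linear(16, 2)                                 # new-domain task head

for p in backbone.parameters():
    p.requires_grad = False            # freeze source-domain knowledge

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
X = torch.randn(100, 32)               # only ~100 labeled target examples
y = torch.randint(0, 2, (100,))

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(backbone(X)), y)
    loss.backward()
    opt.step()
```

With the backbone frozen, only the small head is fit, which is why a few hundred examples can suffice.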

Real-world examples, challenges, and constraints help researchers address the limitations of deep learning and offer AI solutions to specific business problems. A company may want to use a question answering system to help employees find what they need in a long and complex operations manual or a travel policy document.

Typically, observes Hazen, the solution is a FAQ document, yet another document to wade through. “Right now, most enterprise search mechanisms are pretty poor at this kind of task,” says Hazen. “They don’t have the click-through info that Google or Bing have. That’s where we can add value.” Deploying a general-purpose “reading comprehension” model in a specific business setting, however, requires successful “transfer learning,” adapting the model to work with hundreds of company-specific examples, not tens of thousands or even millions.
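
Here is a minimal sketch of that enterprise scenario with a pretrained extractive “reading comprehension” model (this assumes the Hugging Face transformers package; the policy text is invented, and as Hazen notes, a real deployment would still need adaptation to company-specific examples):

```python
# A minimal sketch of extractive question answering over a policy document.
# The model name is a common SQuAD-trained checkpoint, chosen for illustration.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

policy = (
    "Employees may book business-class travel only for flights longer than "
    "six hours. All trips must be approved by a manager two weeks in advance."
)
print(qa(question="When can I fly business class?", context=policy)["answer"])
```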

Microsoft researchers encounter these real-world challenges when they respond to requests from Microsoft’s customers. A research institute such as AI2 does not have customers, so it created a unique channel for its researchers to interact with real-world challenges: the AI2 Incubator, which invites technologists and entrepreneurs to establish their startups with the help of AI2 resources. Lexion.ai is one of these startups, offering NLU software that organizes and reads contracts and extracts the specific terms employees need for their work.

Unfortunately, human ambition (hubris?) hasn’t accepted solving specific human challenges as sufficient motivation for AI research. Achieving “human-level intelligence” has been the ultimate goal of AI research for more than six decades. Indeed, it has been an unfortunate history: a misleading goal has led to misleading terms, which in turn lead to unfounded excitement and anxiety.

Fortunately, many AI researchers continue to expand what computers could do in the service of humanity. Says TJ Hazen: “I prefer to think about the work I’m doing as something that will help you do a task but it may not be able to do the full task for you. It’s an aid and not a replacement for your own capabilities.” And Oren Etzioni: “My favorite definition of AI is to think of it as Augmented Intelligence. I’m interested in building tools that help people be much more effective.”

*Opinions expressed by Microsoft’s researchers do not necessarily represent Microsoft’s positions.

Originally published on Forbes.com

Posted in AI, NLP