Deep Tech Landscape in Israel: 150 Startups

“Deep Tech” describes forward-looking technologies based on profound scientific breakthroughs or engineering innovations.

For decades, the Israeli ecosystem has built expertise in Frontier Technologies such as semiconductors, quantum computing, sensors, Space 2.0, robotics, networking and wireless, advanced materials and nanotechnology, next-gen healthcare, AI platforms, IoT, and AR/VR.

Grove Ventures, together with IVC Research Center, conducted comprehensive research mapping more than 150 local startups that operate in the Deep Tech ecosystem.

Source: Mapping the Israeli Deep Tech Ecosystem


AI by the Numbers: 35% Of Workers Worldwide Expect Their Job Will Be Automated

Infographic: Automation Could Eliminate 73 Million U.S. Jobs By 2030 (Statista)

Recent surveys, studies, forecasts and other quantitative assessments of the progress of AI highlight anxiety about AI eliminating jobs, the competition for AI talent, questions about employees’ AI preparedness, and concerns over data quality, literacy, privacy, and security.

Read more here


What Happened to AI in 2019?

After years in the (mostly Canadian) wilderness followed by seven years of plenty, Deep Learning was officially recognized as the “dominant” AI paradigm and “a critical component of computing,” with its three key proponents, Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, receiving the Turing Award in March 2019.

Read more here


Best of 2019: The Misleading Language of Artificial Intelligence

 

[September 27, 2019]

Language is imprecise, vague, context-specific, sentence-structure-dependent, full of fifty shades of gray (or grey). It’s what we use to describe progress in artificial intelligence, in improving computers’ performance in tasks such as accurately identifying images or translating between languages or answering questions. Unfortunately, vague or misleading terms can lead to inaccurate and misleading news.

Earlier this month we learned from the New York Times that “…in just the past several months researchers have made significant progress in developing A.I. that can understand languages and mimic the logic and decision-making of humans.” The NYT article reported on “A Breakthrough for A.I. Technology” with the release of a paper by a team of researchers at the Allen Institute for Artificial Intelligence (AI2), summarizing their work on Aristo, a question-answering system. While three years ago the best AI system scored 59.3% on an eighth-grade science exam challenge, Aristo recently answered correctly more than 90% of the non-diagram, multiple-choice questions on an eighth-grade science exam and exceeded 83% on a 12th-grade science exam.
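
To make concrete what such an exam benchmark actually measures, here is a toy, hypothetical sketch in Python: a retrieval-style baseline that answers non-diagram, multiple-choice questions by picking the option with the most word overlap against a tiny invented “textbook,” and then reports exam accuracy. It is not how Aristo itself works; the corpus, questions, and names are made up for illustration.

```python
# Toy sketch (not how Aristo itself works): a retrieval-style baseline for
# non-diagram, multiple-choice science questions. It picks the answer option whose
# words overlap most with a tiny made-up "textbook" corpus, then reports exam
# accuracy -- the metric behind numbers like 59.3% and 90%.
# All questions, corpus text, and scores here are invented for illustration.

CORPUS = set(
    "green plants use photosynthesis to convert sunlight water and carbon "
    "dioxide into sugars and oxygen the moon orbits the earth once a month".split()
)

def option_score(question: str, option: str) -> int:
    """Count how many words of the question + option appear in the corpus."""
    words = (question + " " + option).lower().replace("?", "").split()
    return sum(1 for w in words if w in CORPUS)

def answer(question: str, options: dict) -> str:
    """Return the letter of the option with the highest overlap score."""
    return max(options, key=lambda letter: option_score(question, options[letter]))

exam = [
    {"question": "Which process do green plants use to make sugars?",
     "options": {"A": "photosynthesis", "B": "condensation",
                 "C": "evaporation", "D": "erosion"},
     "key": "A"},
]

correct = sum(1 for q in exam if answer(q["question"], q["options"]) == q["key"])
print(f"accuracy: {correct / len(exam):.1%}")
```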

No doubt this is remarkable and rapid progress for the AI sub-field of Natural Language Understanding (NLU) or, more specifically, as the AI2 paper states, “machine understanding of textbooks…a grand AI challenge that dates back to the ’70s.” But does Aristo really “read,” “understand,” and “reason,” as one may conclude from the language used in the paper and in similar NLU papers?

“If I could go back to 1956 [when the field of AI was launched], I would choose a different terminology,” says Oren Etzioni, CEO of AI2. Calling this anthropomorphizing an “unfortunate history,” Etzioni clearly states his position on the language of AI researchers:

“When we use these human terms in the context of machines that’s a huge potential for misunderstanding. The fact of the matter is that currently machines don’t understand, they don’t learn, they aren’t intelligent—in the human sense… I think we are creating savants, really really good at some narrow task, whether it’s NLP or playing GO, but that doesn’t mean they understand much of anything.”

Still, “human terms,” misleading or not, are what we have to describe what AI programs do, and Etzioni argues that “if you look at some of the questions that a human would have to reason his or her way to answer, you start to see that these techniques are doing some kind of rudimentary form of reasoning, a surprising amount of rudimentary reasoning.”

The AI2 paper elaborates further on the question “to what extent is Aristo reasoning to answer questions?” While stating that currently “we do not have a sufficiently fine-grained notion of reasoning to answer this question precisely,” it points to a recent shift in the understanding by AI researchers of “reasoning” with the advent of deep learning and “machines performing challenging tasks using neural architectures rather than explicit representation languages.”

Similar to what has happened recently in other AI sub-fields, question answering has gotten a remarkable boost with deep learning, applying statistical analysis to very large data sets, finding hidden correlations and patterns, and leading to surprising results, described sometimes in misleading terms.

What current AI technology does is “sophisticated pattern-matching, not what I would call ‘understanding’ or ‘reasoning,’” says TJ Hazen, Senior Principal Research Manager at Microsoft Research.* Deep learning techniques, says Hazen, “can learn really sophisticated things from examples. They do an incredible job of learning specific tasks, but they really don’t understand what they’re learning.”

What deep learning and its hierarchical layers of complex calculations, plus lots of data and compute power, brought to NLU (and other AI specialties) is an unprecedented level of efficiency in designing models that “understand” the task at hand (e.g., answering a specific question). Machine learning used to require deep domain knowledge and a deep investment of time and effort in coming up with what its practitioners call “features,” the key elements of the model (called “variables” in traditional statistical analysis—professional jargon being yet another challenge for both human and machine language understanding). By adding more layers (steps) to the learning process and using vast quantities of data, deep learning has taken on more of the model design work.

“Deep learning figures out what are the most salient features,” says Hazen. “But it is also constrained by the quality and sophistication of the data. If you only give it simple examples, it’s only going to learn simple strategies.”
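
To make the “features” point concrete, here is a minimal, hypothetical sketch of the hand-crafted feature engineering that deep learning largely automates: a person decides which properties of a (question, candidate answer) pair matter and feeds them to a simple classifier, whereas a deep network would learn its own representation from the raw text. The data and feature choices are invented for illustration.

```python
# Sketch of the hand-crafted "features" step that deep learning largely automates.
# A human decides which properties of a (question, candidate answer) pair matter
# and feeds them to a simple classifier; a deep network would instead learn its
# own representation from the raw text. Data and feature choices are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def hand_crafted_features(question: str, candidate: str) -> list:
    q, c = set(question.lower().split()), set(candidate.lower().split())
    overlap = len(q & c)                          # shared words
    return [overlap, len(c), float("not" in c)]   # overlap, length, negation flag

pairs = [
    ("What gas do plants absorb?", "Plants absorb carbon dioxide.", 1),
    ("What gas do plants absorb?", "The moon is not made of cheese.", 0),
    ("Which planet is largest?", "Jupiter is the largest planet.", 1),
    ("Which planet is largest?", "Rivers flow toward the sea.", 0),
]
X = np.array([hand_crafted_features(q, c) for q, c, _ in pairs])
y = np.array([label for _, _, label in pairs])

model = LogisticRegression().fit(X, y)   # the model sees only the features we chose
print(model.predict(X))                  # a deep model would skip hand_crafted_features
```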

AI researchers, at Microsoft, AI2, and other research centers, are aware of deep learning’s limitations when compared with human intelligence, and most of their current work, while keeping within the deep learning paradigm, is aimed at addressing these limitations. “In the next year or two,” says Etzioni, “we are going to see more systems that work not just on one dataset or benchmark but on ten or twenty and they are able to learn from one and transfer to another, simultaneously.”

Jingjing Liu, Principal Research Manager at Microsoft Research, also highlights the challenge of “transfer learning” or “domain adaptation,” warning about the hype regarding specific AI programs’ “human parity.” Unlike humans, who transfer knowledge acquired in performing one task to a new one, a deep learning model “might perform poorly on a new unseen dataset or it may require a lot of additional labeled data in a new domain to perform well,” says Liu. “That’s why we’re looking into unsupervised domain adaptation, aiming to generalize pre-trained models from a source domain to a new target domain with minimum data.”

Real-world examples, challenges, and constraints help researchers address the limitations of deep learning and offer AI solutions to specific business problems. A company may want to use a question answering system to help employees find what they need in a long and complex operations manual or a travel policy document.

Typically, observes Hazen, the solution is a FAQ document, yet another document to wade through. “Right now, most enterprise search mechanisms are pretty poor at this kind of tasks,” says Hazen. “They don’t have the click-through info that Google or Bing have. That’s where we can add value.” To deploy a general-purpose “reading comprehension” model in a specific business setting, however, requires successful “transfer learning,” adapting the model to work with hundreds of company-specific examples, not tens of thousands or even millions of examples.
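
As a rough illustration of that kind of transfer learning, here is a minimal fine-tuning sketch, assuming a BERT-style pre-trained encoder from the Hugging Face transformers library: the pre-trained layers are frozen and a small task head is trained on a handful of company-specific examples. The model name, labels, and “travel policy” snippets are placeholders; this is a generic pattern, not the approach used by Microsoft Research or AI2.

```python
# Minimal sketch of "transfer learning": start from a general-purpose pre-trained
# encoder and adapt it to a company-specific task with only a small labeled set.
# Model name, labels, and data are placeholders, not an actual production system.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

# Freeze the pre-trained encoder; train only a small task head on ~hundreds of examples.
for p in encoder.parameters():
    p.requires_grad = False
head = nn.Linear(encoder.config.hidden_size, 2)   # e.g., relevant / not relevant

# A handful of company-specific examples (stand-ins for sections of a travel policy).
texts = ["Employees may book economy class for flights under six hours.",
         "The cafeteria is closed on public holidays."]
labels = torch.tensor([1, 0])   # 1 = answers a travel-policy question

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(10):                                 # a few passes over the small set
    cls = encoder(**batch).last_hidden_state[:, 0]  # [CLS] embedding per text
    loss = loss_fn(head(cls), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())
```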

Microsoft researchers encounter these real-world challenges when they respond to requests from Microsoft’s customers. A research institute such as AI2 does not have customers, so it created a unique channel for its researchers to interact with real-world challenges: the AI2 Incubator, which invites technologists and entrepreneurs to establish their startups with the help of AI2 resources. Lexion.ai is one of these startups, offering NLU software that organizes and reads contracts and extracts the specific terms employees need for their work.

Unfortunately, human ambition (hubris?) hasn’t settled for solving specific human challenges as sufficient motivation for AI research. Achieving “human-level intelligence” has been the ultimate goal of AI research for more than six decades. Indeed, it has been an unfortunate history, as a misleading goal has led to misleading terms, which in turn lead to unfounded excitement and anxiety.

Fortunately, many AI researchers continue to expand what computers can do in the service of humanity. Says TJ Hazen: “I prefer to think about the work I’m doing as something that will help you do a task but it may not be able to do the full task for you. It’s an aid and not a replacement for your own capabilities.” And Oren Etzioni: “My favorite definition of AI is to think of it as Augmented Intelligence. I’m interested in building tools that help people be much more effective.”

*Opinions expressed by Microsoft’s researchers do not necessarily represent Microsoft’s positions.

Originally published on Forbes.com


Best of 2019: Betting on Data Eating the World

IDC predicts that 175 trillion gigabytes of new data will be created worldwide in 2025

[July 23, 2019]

Data is eating the world. All businesses, non-profits, and governments around the world are now in full digital transformation mode, figuring out what data can do to the quality of their decisions and the effectiveness of their actions. In the process, they tap into IT resources and an IT landscape that have changed dramatically over the last decade, offering unprecedented choice, flexibility, and speed, and facilitating the management of the data-eating work.

Launched in 2012, DraftKings is a prime example of a new breed of data-driven, perpetually-learning companies. One of the few players in the market for fantasy sports, it has faced “unique challenges that haven’t been solved by other businesses yet,” says Greg Karamitis, Senior Vice President of Fantasy Sports. To solve these challenges, “we have to lean on our analytical expertise and our ability to absorb and utilize vast amounts of data to drive our business decisions.”

Founded by serial entrepreneur Ash Ashutosh 10 years ago, Actifio is a prime example of the new breed of IT vendors transforming the IT landscape from a processor-centric to data-centric paradigm, from a primary emphasis on the speed of computing to a new focus on the speed of accessing data. “It used to be that only the backup people cared about data,” says Ashutosh. “Then it was the CIO, and later, the Chief Data Officer or CDO. Now, every CEO is a data-driven CEO. If you are not data-driven, most likely you are not the CEO for long.”

Data “as a strategic asset” was the vision driving Ashutosh and Actifio in 2009, bringing to the enterprise the same attitude towards data that has made the fortunes of consumer-oriented, digital native companies such as Amazon. “We wanted to facilitate getting to the data as fast as possible, to make it available to anybody, anywhere,” recalls Ashutosh. Since only backup people cared about enterprise data 10 years ago, they were the customers Actifio initially targeted.

The value proposition for these customers centered on reduced cost, as Actifio helped them maintain only one copy of any piece of data, available for multiple uses, instead of maintaining numerous copies, each one for a specific application or data management activity. Actifio achieved this magic trick (e.g., reducing 50 terabytes of data to only 2 terabytes) by capitalizing on another trend shaking the tech world 10 years ago, virtualization. Replacing analog data with digital data gave rise to data-driven companies and replacing the physical with the logical—virtualization—created a new IT landscape.

It took a while for enterprises to adapt to the new IT realities. But around 2015 the cloud flood gates opened because of the business pressures to do everything faster and faster, especially the development of new (online) applications, and the fact that more and more enterprise roles and activities required at the very least some creation, management, manipulation, analysis, and consumption of data. These changes manifested themselves in Actifio’s business. In its most recent quarter (ended April 30, 2019), “60% of our customers used Actifio to accelerate application development, up from close to 0% in 2016,” reports Ashutosh, “and over 30% of our workloads today are in cloud platforms.”

The new attitude towards data as a strategic asset and the widespread availability of cloud computing have opened up new uses for Actifio’s offerings, such as compliance with data regulations and near real-time security checks on the data. But possibly the most important recent development is the increased use of Actifio by data scientists for machine learning and artificial intelligence-related tasks.

A very significant chunk of data scientists’ time (and their most popular complaint) is the time they spend on data preparation. And a significant chunk of the time spent on data preparation is simply waiting for the data. In the past, they had to wait between eight and forty days for the IT department to deliver the data to them. Now, says Ashutosh, “they have an automated, on-demand process,” providing them data from the relevant pool of applications, in the format they require. Bringing up the new term “MLOps” (as in machine learning operations), Ashutosh defines it as “allowing people to make decisions faster by not having data as a bottleneck.” The end result? “The more you give people access to data in a self-service way, the more they find new and smarter ways of using it.”

As 40% of Actifio’s sales come from $1 million-plus deals, these new uses of data are not “tiny departmental stuff,” says Ashutosh. “Large enterprises are beginning to use data as a strategic asset, as a service, on premises or in multi-cloud environments.”

Large enterprises today learn from, compete with, and often invest in or acquire the likes of DraftKings, a startup that runs on data. Growing the business for DraftKings means making their contests bigger and bigger. “But if we make our contests too big and we don’t get enough users to fill 90% of the seats, we start to lose money really fast,” says DraftKings’ Karamitis. Balancing user satisfaction and engagement with the company’s business performance requires accurate demand predictions for each of the thousands of contests which DraftKings runs daily in a constantly changing sports environment.
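
As a hedged sketch of what such demand prediction might look like, the following trains a regression model on hypothetical contest features (entry fee, prize pool, sport, day of week) to predict entries and then sizes a contest toward a roughly 90% fill rate. The features, synthetic data, and model choice are invented for illustration and are not DraftKings’ actual system.

```python
# Sketch of contest demand prediction: predict how many entries a contest will
# draw from features such as entry fee, prize pool, sport, and day of week, so
# oversized contests (below ~90% full) can be avoided.
# Features, data, and model choice are hypothetical, not DraftKings' actual system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
entry_fee = rng.choice([1, 5, 20, 100], size=n)
prize_pool = entry_fee * rng.integers(500, 5000, size=n)
is_weekend = rng.integers(0, 2, size=n)
is_nfl = rng.integers(0, 2, size=n)
# Synthetic "observed entries": bigger prize pools and NFL weekends draw more users.
entries = (0.5 * prize_pool / entry_fee + 800 * is_nfl * is_weekend
           + rng.normal(0, 200, size=n)).clip(min=0)

X = np.column_stack([entry_fee, prize_pool, is_weekend, is_nfl])
model = GradientBoostingRegressor().fit(X, entries)

# Size a hypothetical $20 NFL Sunday contest so the expected fill rate stays near 90%.
predicted = model.predict([[20, 40000, 1, 1]])[0]
print(f"predicted entries: {predicted:.0f} -> cap seats at {predicted / 0.9:.0f}")
```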

“We need to absorb tons of data points to figure this out on a daily basis,” explains Karamitis. But these data points are not only based on DraftKings’ accumulated experience with past contests. Befitting a business living online, another important data source is social networks—“our users are giving us an enormous amount of data in terms of what they are tweeting to us, what they engage with and what they don’t, allowing us to understand better which ways we want to shift as a company and which way we want to build a product,” says Karamitis.

There is yet another source of the data that is driving decisions at DraftKings, possibly the most important one: The data DraftKings creates by constantly experimenting. “We create data points by running structured tests,” says Karamitis. They cannot run A/B tests, he explains, because running smaller contests will not produce the same effects as larger contests and because their users tend to communicate a lot among themselves and compare notes about their experiences. “We are willing to take the risk testing our underlying beliefs around user behavior,” says Karamitis, by changing the top prize or changing the marketing treatment, for example.
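
One simple way to read the outcome of such a structured test, sketched here with invented numbers, is to compare fill rates between a baseline week and a test week with a bigger top prize and compute a two-proportion z-test by hand. This is only an illustration of the statistics involved, not DraftKings’ methodology.

```python
# Sketch of evaluating one "structured test": did raising the top prize during a
# test week move the contest fill rate? A two-proportion z-test computed by hand;
# all seat and entry counts are made up for illustration.
from math import sqrt, erf

def fill_rate_test(seats_a, filled_a, seats_b, filled_b):
    p_a, p_b = filled_a / seats_a, filled_b / seats_b
    pooled = (filled_a + filled_b) / (seats_a + seats_b)
    se = sqrt(pooled * (1 - pooled) * (1 / seats_a + 1 / seats_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return p_a, p_b, z, p_value

# Baseline week vs. test week with a bigger top prize (hypothetical counts).
baseline, test = (20_000, 17_800), (20_000, 18_000)
p_a, p_b, z, p = fill_rate_test(*baseline, *test)
print(f"fill rate {p_a:.1%} -> {p_b:.1%}, z={z:.2f}, p={p:.3g}")
```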

In the past, domain expertise was the key to a company’s success. Today, it is data expertise, and the skill of applying it instantly to new opportunities as they arise. “Our analytical expertise allows us to learn really fast, learn in the traditional meaning of learning from experience and figuring out what matters to our users and also learn in the more modern sense of building machine learning algorithms around the key principles our users care about,” says Karamitis. Learning from data has become a core competency that can be applied to new markets, a competency that should serve DraftKings well as it pursues the new business opportunity of legalized sports betting.

DraftKings—and the new breed of data-driven companies—are data science labs, creating data and acquiring new insights with continuous experimentation. Like good scientists, they test their hypotheses through carefully structured experiments, challenging their own assumptions about customers and markets. Karamitis recalls the start of the 2017 NFL season when DraftKings offered a new contest site: “We had a very specific expectation as to who it’s going to appeal to and how big it will be. We were totally wrong, 100% wrong.” But because the new product was developed through experimentation, the data led to what is now a “super valuable product, so different from what we initially offered.”

Ashutosh predicts that over the next few years we will see a bifurcation of the economy into two segments, one focused on producing physical assets and the other comprised of data-driven companies. Like DraftKings, these companies, whether startups or established enterprises, will view data as a strategic asset and its analysis as a core competency and a key competitive differentiator.

And like DraftKings, these companies will increasingly resemble scientific labs, continuously learning through experimentation and creating new data points. Data growth will drive business growth as data continues eating the world.

Originally published on Forbes.com


Shakey, the World’s First Mobile Intelligent Robot


Developed at the Artificial Intelligence Center of the Stanford Research Institute (SRI) from 1966 to 1972, SHAKEY was the world’s first mobile intelligent robot. According to the 2017 IEEE Milestone citation, it “could perceive its surroundings, infer implicit facts from explicit ones, create plans, recover from errors in plan execution, and communicate using ordinary English. SHAKEY’s software architecture, computer vision, and methods for navigation and planning proved seminal in robotics and in the design of web servers, automobiles, factories, video games, and Mars rovers.”

Read more here


Best of 2019: Big Data AI


[July 1, 2019]

In December 2014, I asked whether we were at the beginning of “the end of the Hadoop bubble.” I kept updating my Hadoop bubble watch (here and here) through the much-hyped IPOs of Hortonworks and Cloudera. The question was whether an open-source distributed storage technology which Google invented (and quickly replaced with better tools) could survive as a business proposition at a time when enterprises have moved rapidly to adopting the cloud and “AI”—advanced machine learning or deep learning.

In January 2019, perennially unprofitable Hortonworks closed an all-stock $5.2 billion merger with Cloudera. In May 2019, another Hadoop-based provider, MapR, announced that it would shut down if it were unable to find a buyer or a new source of funding. On June 6, 2019, Cloudera’s stock declined 43% after it cut its revenue forecast and announced that its CEO was leaving the company. Cloudera, valued at $4.1 billion in 2014, now has a market cap of $1.4 billion.

Is this just the end of Hadoop or is it the death of Big Data? Was our fascination with lots and lots of data only a temporary bubble?

The news last month was not all negative for the Data is Eating the World phenomenon. Google announced its intent to acquire data discovery and analytics startup Looker for $2.6 billion, and Salesforce announced its intent to acquire data visualization and analytics leader Tableau for $15.7 billion.

“The addition of Looker to Google Cloud,” said an Alphabet press release, “will provide customers with a more comprehensive analytics solution — from ingesting and integrating data to gain insights, to embedded analytics and visualizations — enabling enterprises to leverage the power of analytics, machine learning and AI.” The Google Cloud blog explained that “A fundamental requirement for organizations wanting to transform themselves digitally is the need to store, manage, and analyze large quantities of data from a variety of sources… The addition of Looker to Google Cloud will help us offer customers a more complete analytics solution from ingesting data to visualizing results and integrating data and insights into their daily workflows.”

Digital transformation is finding out what data can do to your business decisions and actions. It’s focusing your company on mining and benefiting from its second-most important resource after its people: Data. While digital-born, Web-native, data-driven companies such as Google and Salesforce have been doing this for twenty years, many other businesses around the world, large and small, are now in full digital transformation mode, exploring the power of data eating the world. In the process, they tap into IT resources and data science tools in the cloud and experiment with advanced machine learning or deep learning. The remarkable and rapid progress in computer vision and natural language processing capabilities over the last 7 years has been enabled by big data—lots of tagged and labeled online data. Deep learning is Big Data AI.

Here’s what two CEOs of startups providing data mining services have to say about where we are in the evolution of Big Data to Big Data AI:

“The value of the data analytics market can’t be ignored. The Looker and Tableau acquisitions demonstrate that even the biggest tech players are snapping up data analytics companies with big price tags, clearly demonstrating the value these companies have in the larger cloud ecosystem. And in terms of what this means for the evolution of AI, we’ve reached a point where we have more than enough anonymized data to train the system, and now it’s a matter of honing how we use the AI to extract the maximum value from data”—Amir Orad, CEO, Sisense

“The Google Cloud/Looker and Salesforce/Tableau acquisitions are a direct reaction to the rate at which analytics workloads have been shifting to the cloud over the past few years. The state of AI is a reflection of this shift as machine learning, AI and analytics have become the primary growth opportunities for the cloud today. Yet, it’s this same growth that is causing barriers to success, as AI projects overwhelmingly face the same problem — data quality”—Adam Wilson, CEO, Trifacta

Sisense is a business intelligence startup providing “a complete solution for preparing, analyzing and visualizing big data.” It has raised $174 million over 5 rounds and in May 2019, it acquired Periscope Data. Trifacta has raised $124.3 million over 6 rounds and is focused on data preparation. It announced today a partnership with IBM to develop a new data preparation tool.

A search for “big data” in the Crunchbase database results in close to 15,000 entries. A search for “AI” results in close to 12,000 entries. There is probably a huge overlap between those two categories. And the real-world overlap will only intensify in the near future.

How many of the hundreds of the “big data” startups will merge with one another or be acquired by established data-driven companies as “big data” evolves into “big data AI”?

Originally published on Forbes.com
