Self-Driving Cars: Challenges, Expectations, Hype, and Healthy Skepticism

The Growth Of The Autonomous Car Market

See also

Rodney Brooks on the Unexpected Consequences of Self Driving Cars:

In this post I will explore two possible consequences of having self-driving cars, two consequences that I have not seen discussed, while various car companies, non-traditional players, and startups debate what level of autonomy we might expect in our cars and when. These potential consequences are self-driving cars as social outcasts and anti-social behavior of owners. Both may have tremendous and unexpected influence on the uptake of self-driving cars. Both are more about the social realm than the technical realm, which is perhaps why technologists have not addressed them. And then I’ll finish, however, by dissing a non-technical aspect of self-driving cars that has been overdone by technologists and other amateur philosophers with an all-out flame.

Ryan Gariepy, co-founder and CTO of Clearpath Robotics:

“Everybody is going after 3 billion people,” said Gariepy. This is the market represented by all current drivers worldwide and it’s “perfect from a company building perspective” to go after them, he said. But trying to get to level 5 of autonomous driving on city streets is attempting to do too much too quickly. Instead, Clearpath Robotics is focused on industrial self-driving vehicles, operating in controlled environments where people are trained to follow certain procedures.

The Verge:

The ad is meant to drive home the point that many of the futuristic, lifesaving technologies the car companies have been hyping to consumers already exist in various modes of public transportation, like buses. Electrified propulsion? Check. Freedom from the shackles of driving? Check. The ultimate shared vehicle? Big check.

Posted in AI, self-driving cars | Leave a comment

Big Data Ecosystem and Benefits



Organizations are achieving a broad range of Big Data–related business outcomes from their work. These can broadly be categorized as delivering value from operational optimization, improved compliance, and innovation. Interestingly, and as illustrated, Big Data outcomes tied to operational efficiencies feature significantly in responses (four of the top five answers focus on optimizing business or IT processes); this is despite the fact that 41% of organizations… stated a need to drive innovation as a top business driver.

Improve ability to plan and forecast: 66%

Improve our operational, fraud, and risk management: 63%

Improve and optimize our business processes and operations: 58%

Posted in Big Data Analytics, Big Data Landscape | Tagged | 1 Comment

Will Google Own AI? (2)

According to the tally Google provided to MIT Technology Review, it published 218 journal or conference papers on machine learning in 2016, nearly twice as many as it did two years ago… Among all companies that publish prolifically on artificial intelligence, Clarivate ranks Google No. 1 by a wide margin.

See also AI And Community Development Are Two Key Reasons Why Google May Win The Cloud Wars

Posted in AI | Tagged | 1 Comment

Using AI And Deep Learning To Improve Consumer Access To Credit


Neural network created in SAS Visual Data Mining and Machine Learning 8.1

Artificial intelligence, machine learning, and neural network-based deep learning are concepts that have recently come to dominate venture capital funding, startup formation, promotion and exits, and policy discussions. The highly publicized triumphs over humans in Go and poker, rapid progress in speech recognition, image identification, and language translation, and the proliferation of talking and texting virtual assistants and chatbots have helped inflate the market caps of Apple (#1 as of February 17), Google (#2), Microsoft (#3), Amazon (#5), and Facebook (#6).

While these companies dominate the headlines—and the war for the relevant talent—other companies that have been analyzing data or providing analysis tools for years are also capitalizing on recent AI advances. Cases in point are Equifax and SAS: the former is developing deep learning tools to improve credit scoring, and the latter is adding new deep learning functionality to its data mining tools and offering a deep learning API.

Both companies have a lot of experience in what they do. Equifax, founded in 1899, is a credit reporting agency, collecting and analyzing data on more than 820 million consumers and more than 91 million businesses worldwide. SAS, founded in 1976, develops and sells data analytics and data management software.

The AI concepts that make headlines today also have a long history. Moving beyond speedy calculation, two approaches to applying early computers to other types of cognitive work emerged in the 1950s. One was labeled “artificial intelligence,” the other “machine learning” (a decidedly less sexy and attention-grabbing name). While the artificial intelligence approach was related to symbolic logic, a branch of mathematics, the machine-learning approach was related to statistics. And there was another important distinction between the two: the artificial intelligence approach followed the dominant computer science paradigm, in which a programmer defines what the computer has to do by coding an algorithm—a model, a program—in a programming language. The machine-learning approach relied on data and on statistical procedures that found patterns in the data or classified the data into different buckets, allowing the computer to “learn” (i.e., optimize the performance—the accuracy—of a certain task) and to “predict” (i.e., classify) new data fed to it.

For traditional computer science, data was what the program processed and the output of that processing. With machine learning, the data itself defines what to do next. Says Oliver Schabenberger, Executive Vice President and Chief Technology Officer at SAS: “What sometimes gets overlooked is that it’s really the data that drives machine learning.”
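The contrast can be sketched in a few lines of Python (a toy illustration, not from the article; the account balances and labels are made up): instead of a programmer hand-coding a decision rule, the program fits the rule's parameters from labeled examples.

```python
# Toy sketch of the machine-learning paradigm: fit a logistic-regression
# rule to labeled data by stochastic gradient descent, rather than
# hand-coding a threshold. All data here is invented for illustration.
import math

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit y ~ sigmoid(w*x + b) to the examples by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x  # gradient of the log-loss w.r.t. w
            b -= lr * (p - y)      # gradient of the log-loss w.r.t. b
    return w, b

# Labeled examples: account balance (in hundreds) -> 1 = good standing.
xs = [0.1, 0.4, 0.7, 1.5, 2.0, 3.0]
ys = [0,   0,   0,   1,   1,   1]
w, b = train_logistic(xs, ys)

def predict(x):
    """Classify a new balance with the learned (not hand-coded) rule."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0

print(predict(0.2), predict(2.5))  # the decision boundary was learned from data
```

The program never contains an explicit cutoff; the boundary between the two classes emerges from the data, which is the point Schabenberger makes above.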

Over the years, machine learning has been applied successfully to problems such as spam filtering, handwriting recognition, machine translation, fraud detection, and product recommendations. Many successful “digital natives” such as Google, Amazon, and Netflix have built their fortunes with the help of machine learning algorithms. The real-world experiences of these companies have proved how successful machine learning can be at using lots of data from a variety of sources to predict consumer behavior. Using lots and lots of data makes predictive models more robust and predictions more accurate. “Big Data,” however, gave rise not only to a new type of data-driven company, but also to a new type of machine learning: “deep learning.”

Deep learning takes the machine-learning approach much further by applying it to multi-layer “artificial neural networks.” Influenced by a computational model for human neural networks first developed in 1943, artificial neural networks got their first software manifestation in the 1957 Perceptron, an algorithm for pattern recognition based on a two-layer network. Abandoned for a while because of the limited computing power of the day, deep neural networks have seen a remarkable revival over the last decade, fueled by advanced algorithms, big data, and increased computing power, specifically in the form of graphics processing units (GPUs), which process data in parallel, cutting down the time required to “train” the computer.

Today’s deep neural networks move vast amounts of data through many layers of hardware and software, each layer coming up with its own representation of the data and passing what it “learned” to the next layer. Artificial intelligence attempts “to make a machine that thinks like a human. Deep neural networks try to solve pretty narrow tasks,” says Schabenberger. Relinquishing the quest for human-like intelligence, deep learning has succeeded in vastly expanding the range of narrow tasks machines can learn and perform.
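That layered flow can be sketched in a few lines (a hand-wired toy with made-up weights, not a trained model): each dense layer turns its input into a new representation and hands it to the next layer.

```python
# Minimal forward pass through a two-layer network. The weights are
# invented for illustration; in practice they would be learned by training.

def relu(v):
    """Rectified linear activation applied elementwise."""
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    """One dense layer: per-unit weighted sum plus bias, then ReLU."""
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

x = [0.5, -1.2, 3.0]                                              # raw input features
h1 = layer(x, [[0.2, -0.1, 0.4], [0.7, 0.3, -0.2]], [0.1, 0.0])  # first representation
h2 = layer(h1, [[1.0, -0.5]], [0.2])                              # second representation
print(h1, h2)
```

Note that `h1` already looks nothing like the raw input: one unit fires strongly, the other is clamped to zero by the ReLU, and only that transformed representation reaches the next layer.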

“We noticed a couple of years ago,” says Peter Maynard, Senior Vice President of Global Analytics at Equifax, “that we were not getting enough statistical lift from our traditional credit scoring methodology.” The conventional wisdom in the credit scoring industry at the time was that it must continue to use traditional machine learning approaches such as logistic regression because the results were interpretable, i.e., in compliance with regulation. Modern machine-learning approaches such as deep neural networks, which promised more accurate results, presented a challenge in that regard because they were not interpretable. They are considered a “black box,” a process so complex that even its programmers do not fully understand how the learning machine reached the results it produced.

“My team decided to challenge that and find a way to make neural nets interpretable,” says Maynard. He explains: “We developed a mathematical proof that shows that we could generate a neural net solution that can be completely interpretable for regulatory purposes. Each of the inputs can map into the hidden layer of the neural network and we imposed a set of criteria that enable us to interpret the attributes coming into the final model. We stripped apart the black box so we can have an interpretable outcome. That was revolutionary, no one has ever done that before.”

Maynard reports that the neural net has improved the predictive ability of the model by up to 15%; the larger the data set analyzed and the more complex the analysis, the bigger the improvement. “In credit scoring,” says Maynard, “we spend a lot of time creating segments to build a model on. Determining the optimal segments can sometimes take 20% of the time it takes to build a model. In the context of neural nets, those segments are the hidden layers—the neural net does it all for you. The machine figures out what the segments are and what the weights in a segment are, instead of having an analyst do that. I find it really powerful.”
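Equifax's actual technique is patented and not described in detail in this article, but the general idea of an interpretable hidden layer can be illustrated with a deliberately simple sketch: constrain each hidden unit to read a disjoint group of input attributes, so every unit behaves like a named "segment score" whose contribution to the final score can be read off directly. All attribute names, groupings, and weights below are hypothetical.

```python
# Generic sketch (NOT Equifax's patented method): a one-hidden-layer
# scorer where each hidden unit sees only its own group of attributes,
# so per-segment contributions to the score are directly inspectable.
groups = {
    "payment_history": ([0, 1], [0.6, 0.4]),   # (input indices, weights)
    "utilization":     ([2],    [-0.8]),
    "account_age":     ([3],    [0.5]),
}
output_weights = {"payment_history": 1.2, "utilization": 1.0, "account_age": 0.7}

def score(x):
    """Return (total score, per-segment contributions) for attribute vector x."""
    contributions = {}
    for name, (idx, w) in groups.items():
        hidden = sum(w_i * x[i] for w_i, i in zip(w, idx))  # linear hidden unit
        contributions[name] = output_weights[name] * hidden
    return sum(contributions.values()), contributions

total, parts = score([1.0, 0.5, 0.3, 2.0])
print(total, parts)  # each segment's share of the score is visible
```

Because the hidden units never mix attribute groups, a regulator could ask "why was this applicant scored down?" and receive a per-segment answer, which is the interpretability property Maynard describes.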

The immediate benefit of using neural nets is faster model development, as some of the work previously done by data scientists in building and testing a model is automated. But Maynard envisions “full automation,” especially regarding a big part of a data scientist’s job—the ongoing tweaking of the model. Maynard: “You have a human reviewing it to make sure it’s executing as intended but the whole thing is done automatically. It’s similar to search optimization or product recommendations where the model gets tweaked every time you click. In credit scoring, when you have a neural network with superior predictability and interpretability, there is no reason to have a person in the middle of that process.”

In addition, the “attributes,” or factors affecting a credit score (e.g., the size of an individual’s checking account balance and how it was used over the last six months), are now “data-driven.” Instead of being hypotheses developed by data scientists, the attributes are created by the deep learning process on the basis of a much larger set of historical or “trended” data. “We are looking at 72 months of data and identifying patterns of consumer behavior over time, using machine learning to understand the signal and the strength of the signal over that time period,” says Maynard. “Now, instead of creating thousands of attributes, we can create hundreds of thousands of attributes for testing. The algorithms will determine what’s the most predictive in terms of the behavior we are trying to model.”
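A loose sketch of that data-driven attribute generation (not Equifax's pipeline; the histories, labels, and candidate features are all invented): mechanically derive many candidate attributes from each consumer's monthly history, then let a simple statistic rank which candidates are most predictive.

```python
# Toy attribute-generation pipeline: derive candidate features from
# monthly balance histories, rank them by absolute correlation with the
# outcome. All data and feature names here are hypothetical.
import statistics

histories = [  # hypothetical monthly balances per consumer (most recent last)
    [900, 850, 800, 400, 200, 100],
    [500, 550, 600, 650, 700, 750],
    [300, 310, 290, 280, 300, 310],
    [800, 700, 500, 300, 150,  50],
]
labels = [1, 0, 0, 1]  # 1 = later defaulted (made-up outcomes)

def candidates(h):
    """Mechanically generate candidate attributes from one history."""
    feats = {f"mean_last_{k}": statistics.mean(h[-k:]) for k in (1, 3, 6)}
    feats["trend"] = h[-1] - h[0]  # crude slope over the whole window
    return feats

def abs_corr(xs, ys):
    """Absolute Pearson correlation, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (vx * vy))

names = list(candidates(histories[0]))
table = {n: [candidates(h)[n] for h in histories] for n in names}
ranked = sorted(names, key=lambda n: abs_corr(table[n], labels), reverse=True)
print(ranked[0])  # the most predictive candidate attribute
```

In this toy data the downward trend in balances, not any single average, best separates the defaulters, which mirrors the point about mining "trended" data for the strongest signal. Scaling the same loop from four candidates to hundreds of thousands is a bookkeeping change, not a conceptual one.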

The result—and the most important benefit of using modern machine learning tools—is greater access to credit. Analyzing two years’ worth of U.S. mortgage data, Equifax determined that numerous declined loan applications could have been approved safely, which promises a considerable expansion of the universe of approved mortgages. “The use case we showed regulators,” says Maynard, “was in the telecom industry, where people had to put down a down payment to get a cell phone—with this model they don’t need to do that anymore.”

Equifax has filed for a patent for its work on improving credit scoring. “It’s the dawn of a new age—enabling greater access to credit is a huge opportunity,” says Maynard.

Originally published on

Posted in AI, deep learning | Tagged , | 1 Comment

10 Hottest AI Technologies

Forrester AI technologies

The market for artificial intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises. A Narrative Science survey found last year that 38% of enterprises are already using AI, growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.

Coined in 1955 to describe a new computer science sub-discipline, “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new. To help make sense of what’s hot and what’s not, Forrester just published a TechRadar report on Artificial Intelligence (for application development professionals), a detailed analysis of 13 technologies enterprises should consider adopting to support human decision-making.

Based on Forrester’s analysis, here’s my list of the 10 hottest AI technologies:

1. Natural Language Generation: Producing text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights. Sample vendors: Attivio, Automated Insights, Cambridge Semantics, Digital Reasoning, Lucidworks, Narrative Science, SAS, Yseop.
2. Speech Recognition: Transcribing and transforming human speech into a format useful for computer applications. Currently used in interactive voice response systems and mobile applications. Sample vendors: NICE, Nuance Communications, OpenText, Verint Systems.
3. Virtual Agents: “The current darling of the media,” says Forrester (I believe they refer to my evolving relationship with Alexa), from simple chatbots to advanced systems that can network with humans. Currently used in customer service and support and as a smart home manager. Sample vendors: Amazon, Apple, Artificial Solutions, Assist AI, Creative Virtual, Google, IBM, IPsoft, Microsoft, Satisfi.
4. Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification. Sample vendors: Amazon, Fractal Analytics, Google, Microsoft, SAS, Skytree.
5. AI-Optimized Hardware: Graphics processing units (GPUs) and appliances specifically designed and architected to run AI-oriented computational jobs efficiently. Currently making a difference primarily in deep learning applications. Sample vendors: Alluviate, Cray, Google, IBM, Intel, Nvidia.
6. Decision Management: Engines that insert rules and logic into AI systems and are used for initial setup/training and for ongoing maintenance and tuning. A mature technology, it is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Sample vendors: Advanced Systems Concepts, Informatica, Maana, Pegasystems, UiPath.
7. Deep Learning Platforms: A special type of machine learning consisting of artificial neural networks with multiple abstraction layers. Currently used primarily in pattern recognition and classification applications supported by very large data sets. Sample vendors: Deep Instinct, Ersatz Labs, Fluid AI, MathWorks, Peltarion, Saffron Technology, Sentient Technologies.
8. Biometrics: Enabling more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language. Currently used primarily in market research. Sample vendors: 3VR, Affectiva, Agnitio, FaceFirst, Sensory, Synqera, Tahzoo.
9. Robotic Process Automation: Using scripts and other methods to automate human action to support efficient business processes. Currently used where it is too expensive or inefficient for humans to execute a task or process. Sample vendors: Advanced Systems Concepts, Automation Anywhere, Blue Prism, UiPath, WorkFusion.
10. Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods. Currently used in fraud detection and security, a wide range of automated assistants, and applications for mining unstructured data. Sample vendors: Basis Technology, Coveo, Expert System, Indico, Knime, Lexalytics, Linguamatics, Mindbreeze, Sinequa, Stratifyd, Synapsify.

There are certainly many business benefits to be gained from AI technologies today, but according to a survey Forrester conducted last year, there are also obstacles to AI adoption, as expressed by companies with no plans to invest in AI:

There is no defined business case: 42%

Not clear what AI can be used for: 39%

Don't have the required skills: 33%

Need first to invest in modernizing our data management platform: 29%

Don't have the budget: 23%

Not certain what is needed for implementing an AI system: 19%

AI systems are not proven: 14%

Do not have the right processes or governance: 13%

AI is a lot of hype with little substance: 11%

Don't own or have access to the required data: 8%

Not sure what AI means: 3%

Once enterprises overcome these obstacles, Forrester concludes, they stand to gain from AI driving accelerated transformation in customer-facing applications and developing an interconnected web of enterprise intelligence.

Originally published on


Posted in AI | 2 Comments

Data Is Eating the World: Enterprise Edition


HT: ArchiTECHt

Enterprise Innovation:

With its 150-year history, over $2.4 trillion in assets, 37 million customers, and 4,000-strong presence across 70 countries, [HSBC] is an important financial institution in a heavily-regulated industry. “We have to make sure our customers feel confident and trust in us to be the custodian of their assets,” stated Darryl West, Group Chief Information Officer at HSBC, at the recently held Google Cloud Next conference…

“Apart from our $2.4 trillion dollars of assets on our balance sheet, we have at the core of the company a massive asset in [the form of] our data. And what’s been happening in the last three years is a massive growth in the size of our data assets,” shared West, pointing out that data at HSBC has grown tremendously from 56 petabytes in 2014 to over 100 petabytes as of early 2017. “Our customers are adopting digital channels more aggressively and we’re collecting more data about how our customers interact with us. As a bank, we need to work with partners to enable us to understand what’s happening and draw out insights in order for us to run a better business and create some amazing customer experiences,” said West.

Posted in Data growth | 1 Comment

Machine Learning Beats Data Science for Top Tech Job By Salary

Rank  Job Title                        Postings per Million  Avg. Base Salary  Growth in Postings, 2013-2016
1     Machine Learning Engineer        58                    $134,306          36%
2     Data Scientist                   360                   $129,938          108%
3     Computer Vision Engineer         20                    $127,849          34%
4     Development Operations Engineer  731                   $123,165          106%
5     Cloud Engineer                   217                   $118,878          67%
6     Senior Audit Manager             53                    $118,692          52%
7     Penetration Tester               317                   $115,557          52%
8     Oracle HCM Manager               44                    $113,107          41%
9     Full Stack Developer             641                   $110,770          122%
10    Salesforce Developer             230                   $108,089          83%

Source: IEEE Spectrum

Posted in Data Science, Jobs, Machine Learning | Tagged | 1 Comment