AI has achieved recent performance breakthroughs across numerous cognitive applications (Figure 7), from image classification to pattern recognition and ontological reasoning. This progress is due largely to convergent advances across three enablers: computing power, training data and learning algorithms. To illustrate, the accuracy of automated image recognition and classification has improved over the past decade from 85% to 95% (against a human average of 93%), allowing such algorithms to progress from novelties to enablers of real innovations, such as autonomous transportation for warehouse order picking.
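A rough way to see why those last few percentage points matter is to restate the accuracy figures quoted above as error rates; the numbers below are taken from the text, and the calculation itself is only an illustrative sketch:

```python
# Accuracy figures as quoted in the text (assumed, for illustration only).
machine_before, machine_after, human_avg = 0.85, 0.95, 0.93

# Going from 85% to 95% accuracy cuts the error rate from 15% to 5% --
# a threefold reduction -- and crosses the quoted human average.
error_before = 1 - machine_before
error_after = 1 - machine_after
reduction = error_before / error_after

print(f"Error rate: {error_before:.0%} -> {error_after:.0%} "
      f"({reduction:.0f}x reduction); human average error: {1 - human_avg:.0%}")
```

Framed this way, the decade's progress is not a 10-point gain but a threefold drop in mistakes, which is what makes applications such as warehouse picking viable.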
Solutions are currently trained on millions of images, a 100-fold increase compared with a decade ago. They are powered by specialized graphics processing unit (GPU) chips that are more than 1,000 times faster than those of previous generations and support networks five to ten times more complex (150- to 200-layer neural networks). Computing and storage costs have declined commensurately, by an average of 35% year on year.
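The compounding effect of that 35% annual cost decline is easy to understate; a minimal sketch, assuming the quoted rate holds steadily over a decade:

```python
# Assumed figures from the text: costs decline 35% year on year.
rate = 0.35
years = 10

# Fraction of the original cost remaining after compounding the decline.
remaining = (1 - rate) ** years

print(f"After {years} years at -{rate:.0%}/yr, cost falls to "
      f"{remaining:.2%} of the original (~{1 / remaining:.0f}x cheaper)")
```

At that rate a workload costs roughly 1/74th of what it did ten years earlier, which helps explain how training sets grew 100-fold without a proportional rise in spend.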
In the near future, AI will build on these adoption enablers to unlock faster, smarter and more intuitive applications, although progress will probably be confined to broad adoption of narrow, context-aware intelligence across domains. The chasm separating narrow and general intelligence is believed to require a fundamentally different set of learning algorithms and non-deterministic computing architectures compared with what exists today.