The machines are taking over big data. In 2008, the number of things connected to the Internet exceeded the number of people on Earth, according to Cisco. Last year, IDC estimated that by 2020 there would be 26 times more connected things than people. And earlier this year, Wikibon forecast that $514 billion would be spent on the “Industrial Internet” in 2020.
The “industrial Internet” is GE’s term for what others call The Internet of Things (Cisco now prefers “Internet of Everything”), which McKinsey defines as “sensors and actuators embedded in physical objects—from roadways to pacemakers—linked through wired and wireless networks, often using the same Internet Protocol (IP) that connects the Internet. These networks churn out huge volumes of data that flow to computers for analysis.”
The Wikibon report observed that while the analysis of the digital crumbs we leave on the Internet is focused mainly on improving the return on investment in advertising, the value created from analyzing the big data generated by networked sensors is much greater, because it is aimed at increasing the efficiency of equipment, tools, and infrastructure, and of their maintenance and management. Wikibon estimates that the value of this increased efficiency could reach nearly $1.3 trillion in 2020.
Whatever the size of the market generated by the analysis of sensor data, GE aims to capture some of it, adding to the revenue it is already generating through “the use of analytics to automate processes, optimize performance, eliminate downtime, and predict when a machine or component will fail,” as GE’s Jeff Immelt said this week at the “Minds and Machines Europe” event in London. GE also announced earlier this week a big data analytics platform that allows airlines, railroads, hospitals, and utilities to manage and operate machines such as jet engines and gas turbines in the cloud. It claims that this is the first time “industrial companies” will have a common architecture combining intelligent machines, sensors, and advanced analytics, all enhanced by expanded partnerships with Accenture, Amazon, and Pivotal.
Google famously wants to “organize the world’s information and make it universally accessible and useful.” But the world’s information comes from three distinct but increasingly overlapping data pools: Enterprise, consumer, and machine-to-machine. Enterprise information is produced by people working for enterprises and is housed in their databases and electronic files. Consumer information lives in personal devices and the storehouses of enterprises that collect data from individuals using their online services. Machine-to-machine information is produced and transmitted by networked sensors and is captured (and potentially analyzed) by the owners of the sensors.
The enterprise data pool was created with the proliferation of enterprise computer networks and gave rise to or amplified the presence of IT vendors such as IBM, HP, EMC, Cisco, and Oracle. The consumer data pool has been driven by the advent of the World Wide Web and gave rise to new “IT vendors” (Information vendors? Big data vendors?) such as Google, Facebook, Netflix, and Amazon. Now we have more and more information created by networked sensors, and GE wants to be the new IT vendor, serving its customers with data and analysis to help them improve efficiency, manage smarter, and identify new business opportunities.
This is the Googlization of GE. Data is no longer an afterthought, something to be discarded or kept only for regulatory reasons. Data is not a cost anymore; it’s a wealth of opportunities. Data should no longer be seen as a business only for traditional “IT vendors.” Data is everybody’s business.
Over the next decade or so we will find out whether the dominant “IT vendors” will need to play in all three data pools or IT markets, eventually making “IT” one market for producing, storing, transmitting, and analyzing enterprise, consumer, and machine data.
[Originally published on Forbes.com]