IBM, Watson, and Cognitive Computing

Reacting to 10 quarters in a row of declining revenues and the abandonment of IBM’s profit target for 2015, UBS’s Steve Milunovich asked on the Q3 earnings call about IBM’s appeal to Silicon Valley startups. Giving voice to the rising conviction on Wall Street and beyond that the answer to the “disruption” of large companies is to “focus,” Milunovich stated that “they all argue of course they are going to disrupt the large companies, that the large companies basically have to break up.”

IBM’s CEO Ginni Rometty had a two-fold answer. The new areas of “higher value”–big data analytics, the cloud, social/mobile/security–grew almost 20%. IBM’s investments and offerings in these markets appeal to startups, argued Rometty, as evidenced by the 3,000 applications to join the Watson ecosystem. IBM can and will deliver the type of innovative, non-traditional IT infrastructure and solutions startups typically use.

Innovation is also very much on the mind of IBM’s traditional customers. Rometty reported on a meeting she had recently with 30 CIOs of IBM’s largest customers, where IBM was called “a navigator,” the company that understands “how an enterprise operates and how you should pull all of this together.”

It is important to keep in mind that innovation—new technologies, new business models, new processes—is making a big impact not only on IT vendors such as IBM but also on the customers of these vendors. The investments IBM is making in new growth areas are important not only for its appeal to startups, but also for its ability to help its traditional customers innovate. The success of IBM’s reinvention hangs on its ability to help others reinvent themselves.

At the forefront of IBM’s reinvention journey is a $1 billion investment in Jeopardy-winning Watson, which it hopes will usher in a new era of “cognitive computing.” Earlier this month, Rometty and Mike Rhodin, head of IBM’s Watson business unit, opened the unit’s worldwide headquarters in the heart of New York’s Silicon Alley, across the street from Facebook. IBM also announced new customers for Watson in 20 different countries, new partners developing Watson apps, five new Watson client experience centers around the world, and that Watson has started to learn Spanish so it could help Spain’s CaixaBank employees advise the bank’s customers.

The cognitive computing era is defined by “systems that can understand natural language, that can start to connect the dots or create an understanding of what they read, and then learn through practice,” Rhodin told me last month on the sidelines of the EmTech MIT event hosted by MIT Technology Review. He added: “Eras are measured in decades. We are in year three. Every day we are finding new things we could be doing.”

These are indeed early days. At the time of the Jeopardy! contest, each time a new document was added to Watson’s library, it needed to read the entire library again. Now, Watson can ingest new information in real time. Other challenges remain to be resolved: teaching Watson to carry context from question to question to enable continuous dialogue, teaching it when not to answer a question, and teaching it how to break a question into multiple questions.
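The difference between the Jeopardy!-era approach and real-time ingestion can be illustrated with a toy inverted index. This is purely an illustrative sketch, not IBM’s implementation; the class and method names are invented for the example. The point is that adding a document updates only the entries for that document, rather than forcing a re-read of the entire library:

```python
# Illustrative sketch (NOT Watson's actual architecture): incremental
# indexing, where a new document is absorbed without reprocessing the
# corpus that was already indexed.
from collections import defaultdict


class IncrementalIndex:
    """A toy inverted index that absorbs new documents one at a time."""

    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids
        self.docs = {}

    def add_document(self, doc_id, text):
        # Only the new document is tokenized; existing postings are untouched.
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, term):
        return sorted(self.postings[term.lower()])


index = IncrementalIndex()
index.add_document("d1", "Watson wins Jeopardy")
index.add_document("d2", "Watson learns Spanish")  # no full rebuild needed
print(index.search("watson"))  # ['d1', 'd2']
```

Re-reading the whole library on every addition scales with the size of the corpus; the incremental approach scales only with the size of the new document, which is what makes real-time ingestion feasible.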

As IBM learns from its work with customers and partners and overcomes these types of challenges, Rhodin sees Watson’s great promise mainly in its ability to help humans deal with information overload. He says: “In many professions, what we are seeing is that the information is overwhelming. I don’t know how doctors or lawyers or teachers keep up with the amount of things that are changing around them. The idea of tooling to help them makes sense to me.”

In medicine, the answer to information overload is over-specialization. But specialization can stand in the way of more holistic treatments of patients and personalized medicine. Watson can help a highly specialized physician—or just about any other professional—see the bigger picture but it can also help newcomers to the profession learn best practices and get answers to their questions.

Help or replace? At the end of his 2011 Jeopardy! contest with Watson, Ken Jennings added to his final response “I for one welcome our new computer overlords.” He later wrote: “When I was selected as one of the two human players… I envisioned myself as the Great Carbon-Based Hope against a new generation of thinking machines… ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”

IBM responds to the endless talk about “the rise of the machines” by emphasizing Watson’s “partnership” with humans and the way it “enhances” their work. As an example, Rhodin brought up IBM’s work with Genesys, a leading call center vendor. Watson is used both to help callers by answering frequently asked questions and as agent-assist technology when the call is escalated to a human. Rometty is quoted by Walter Isaacson in his new book, The Innovators: “I watched Watson interact in a collegial way with the doctors. It was the clearest testament of how machines can truly be partners with humans rather than try to replace them.”

In addition to age-old fears about automation and loss of jobs, there are other potential societal challenges for Watson and cognitive computing. One that Rhodin talked about is the need to educate the market that Watson was designed as a probabilistic, rather than a deterministic, system. “Probabilistic systems are going to give you different answers in different times based on the best available information,” says Rhodin. “They are going to be based on a confidence level supported by evidence as opposed to a degree of certainty. Watson is giving you hypotheses with a confidence factor and these help you explore other avenues.”
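The probabilistic behavior Rhodin describes can be sketched in a few lines. This is a hypothetical illustration with made-up confidence scores, not Watson’s API: a probabilistic system returns ranked hypotheses with confidence values rather than one certain answer, and can decline to answer when nothing clears a threshold.

```python
# Illustrative sketch (hypothetical scores, not Watson's API): ranked
# hypotheses with confidence values instead of a single certain answer.
def answer(hypotheses, threshold=0.5):
    """Return hypotheses above the confidence threshold, best first."""
    ranked = sorted(hypotheses, key=lambda h: h[1], reverse=True)
    confident = [h for h in ranked if h[1] >= threshold]
    return confident if confident else None  # None = "decline to answer"


# Each hypothesis carries the system's confidence, backed by evidence.
evidence = [("Toronto", 0.14), ("Chicago", 0.62), ("Boston", 0.31)]
print(answer(evidence))       # [('Chicago', 0.62)]
print(answer(evidence, 0.9))  # None -- no hypothesis is confident enough
```

A deterministic system would always return one answer; here, raising the threshold changes the output, and new evidence would reshuffle the ranking — exactly the “different answers in different times” behavior users need to be educated about.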

Indeed, explaining to the public and to Watson’s users how it works and what to expect from it will require a concerted educational effort by IBM. People, including educated professionals, demand answers and certainty, not hypotheses, especially when they interact with technology and engage with science. Priyamvada Natarajan sums up this educational challenge in The New York Review of Books, questioning the degree to which people understand the scientific method and “whether they have an adequate sense of what a scientific theory is, how evidence for it is collected and evaluated, how uncertainty (which is inevitable) is measured, and how one theory can displace another, either by offering a more economical, elegant, honed, and general explanation of phenomena or, in the rare event, by clearly falsifying it…. In a word, the general public has trouble understanding the provisionality of science.”

Automation and augmentation of work can free us to engage in more interesting tasks or become more productive or simply enjoy life better… as long as we don’t blindly rely on it and believe that the machine can “think” for us, completely replace us, or even exercise better judgment without us. In Smart Machines: IBM’s Watson and the Era of Cognitive Computing, John E. Kelly III (head of IBM’s research organization) and Steve Hamm state this position clearly: “The goal isn’t to replicate human brains… This isn’t about replacing human thinking with machine thinking. Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results, each bringing their own superior skills to the partnership.”

Still, while the goal “isn’t to replicate the human brain,” Kelly and Hamm devote an entire chapter to IBM’s TrueNorth chip. The language used to describe the effort is far from consistent (maybe Watson could have helped). Is it a “brain-inspired” chip? Or is it a “brain-based” chip? (“Based” means, at least to me, that we have a complete understanding of how the brain works.) And why lump Watson, TrueNorth, and attempts at computer simulation of the brain (e.g., European Union’s Brain Simulation Platform) together as “cognitive computing”?

These are not just some minor quibbles. A number of prominent academics have recently commented on the “brain-like” hype. Cognitive scientist and machine learning expert Michael Jordan: “We have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.” Deep Learning expert Andrew Ng agrees, stating at the EmTech event that “We don’t really know how the brain works.”

When you have a massive educational project on your hands, you’d better be very cautious, accurate, and consistent about your claims for a “new era” and what it represents. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, writes: “Watson was an impressive demonstration but it was narrowly targeted at Jeopardy and exhibited very little semantic understanding. Now Watson has become an IBM brand for any knowledge based activity they do. The intelligence is largely in their PR department.” It may well be that IBM’s DNA, while providing a great blueprint for getting a message out and getting people excited about what the company does, could also be the wrong path to follow today.

In 1948, IBM opened its first frontier homestead in New York, that one for the era of (just) computing. In late 1947, Thomas Watson Sr., IBM’s CEO at the time, “made a decision that forever altered the public perception of computers and linked IBM to the new generation of information machines,” writes Kevin Maney in The Maverick and His Machine. Maney: “He told the engineers to disassemble the SSEC [IBM’s Selective Sequence Electronic Calculator] and set it up in the ground floor lobby of IBM’s 590 Madison Avenue headquarters. The lobby was open to the public and its large windows allowed a view of the SSEC for the multitudes cramming the sidewalks on Madison and 57th Street. … The spectacle of the SSEC defined the public’s image of a computer for decades. Kept dust-free behind glass panels, reels of electronic tape ticked like clocks, punches stamped out cards and whizzed them into hoppers, and thousands of tiny lights flashed on and off in no discernible pattern… Pedestrians stopped to gawk and gave the SSEC the nickname ‘Poppy.’ … Watson took the computer out of the lab and sold it to the public.”

Watson understood that successful selling to the public was an important factor in the success of selling to businesses (today it’s called “thought leadership”). IBM has successfully continued to capitalize and improve on this tradition.

It may well be, however, that our times call for a somewhat different approach. IBM should extend and expand the brilliant Jeopardy! public relations coup, maybe even provide the public with free access to some of Watson’s capabilities (IBM already provides a cloud-based version of Watson to 10 universities in North America for their students to use in cognitive computing classes). At the same time, it’s probably best not to generate unnecessary hype and speculation, and not indulge in grand visions of where computing may be going. After all, we’ve gotten used to surprising and useful new technologies coming from unexpected corners that succeed or fail based on the benefits they provide us. Google, Facebook, Baidu, and all the other companies investing in a new generation of artificial intelligence systems don’t talk about a new era.

What Watson has done so far is quite impressive, so why not stick to its achievements and avoid using vague language about a new era of computing? Isn’t Watson Oncology, providing medical diagnostics to parts of the world where access to modern medicine is limited, an impressive achievement all on its own?

It will be great to see many more similar achievements by IBM and its partners in the years to come. What’s required are long-term investments, eliminating unnecessary hype, and not breaking up IBM. The abandonment of the profit road map first announced by Rometty’s predecessor is a giant leap on the road to reinvention.

[Originally Published on Forbes.com]