IBM, Watson, and Cognitive Computing

Reacting to 10 quarters in a row of declining revenues and the abandonment of IBM’s profit target for 2015, UBS’s Steve Milunovich asked on the Q3 earnings call about IBM’s appeal to Silicon Valley startups. Giving voice to the rising conviction on Wall Street and beyond that the answer to the “disruption” of large companies is to “focus,” Milunovich stated that “they all argue of course they are going to disrupt the large companies, that the large companies basically have to break up.”

IBM’s CEO Ginni Rometty had a two-fold answer. The new areas of “higher value” (big data analytics, the cloud, social/mobile/security) grew almost 20%. IBM’s investments and offerings in these markets appeal to startups, argued Rometty, as evidenced by the 3,000 applications to join the Watson ecosystem. IBM can and will deliver the type of innovative, non-traditional IT infrastructure and solutions startups typically use.

Innovation is also very much on the mind of IBM’s traditional customers. Rometty reported on a meeting she had recently with 30 CIOs of IBM’s largest customers, where IBM was called “a navigator,” the company that understands “how an enterprise operates and how you should pull all of this together.”

It is important to keep in mind that innovation—new technologies, new business models, new processes—is making a big impact not only on IT vendors such as IBM but also on the customers of these vendors. The investments IBM is making in new growth areas are important not only for its appeal to startups, but also for its ability to help its traditional customers innovate. The success of IBM’s reinvention hangs on its ability to help others reinvent themselves.

At the forefront of IBM’s reinvention journey is a $1 billion investment in the Jeopardy!-winning Watson, which it hopes will usher in a new era of “cognitive computing.” Earlier this month, Rometty and Mike Rhodin, head of IBM’s Watson business unit, opened its worldwide headquarters in the heart of New York’s Silicon Alley, across the street from Facebook. IBM also announced new customers for Watson in 20 different countries, new partners developing Watson apps, five new Watson client experience centers around the world, and that Watson has started to learn Spanish so it can help Spain’s CaixaBank employees advise the bank’s customers.

The cognitive computing era is defined by “systems that can understand natural language, that can start to connect the dots or create an understanding of what they read, and then learn through practice,” Rhodin told me last month on the sidelines of the EmTech MIT event hosted by MIT Technology Review. He added: “Eras are measured in decades. We are in year three. Every day we are finding new things we could be doing.”

These are indeed early days. At the time of the Jeopardy! contest, each time a new document was added to Watson’s library, Watson had to re-read the entire library. Now, it can ingest new information in real time. Other challenges are yet to be resolved: teaching Watson to carry context from question to question to enable continuous dialog, for example, or teaching it when not to answer a question and how to break a question into multiple questions.
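
How much of a difference real-time ingestion makes is easy to see in miniature. Here is a purely illustrative Python sketch (the class and method names are my own, not IBM’s) contrasting a batch rebuild of a search index with incremental ingestion:

```python
# Illustrative sketch only -- hypothetical classes, not IBM's Watson API.

class BatchCorpus:
    """Jeopardy!-era approach: every new document forces a full re-read."""
    def __init__(self):
        self.documents = []
        self.index = {}

    def add(self, doc_id, text):
        self.documents.append((doc_id, text))
        self.index = {}                      # discard the old index...
        for d_id, d_text in self.documents:  # ...and re-read the whole library
            for word in d_text.lower().split():
                self.index.setdefault(word, set()).add(d_id)

class IncrementalCorpus:
    """Current approach: ingest new information in real time."""
    def __init__(self):
        self.index = {}

    def add(self, doc_id, text):
        for word in text.lower().split():    # touch only the new document
            self.index.setdefault(word, set()).add(doc_id)

corpus = IncrementalCorpus()
corpus.add("doc1", "Watson wins Jeopardy")
print(corpus.index["watson"])  # {'doc1'}
```

The batch version re-reads every document on each addition, so its cost grows with the size of the whole library; the incremental version touches only what is new.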

As IBM learns from its work with customers and partners and overcomes these types of challenges, Rhodin sees Watson’s great promise mainly in its ability to help humans deal with information overload. He says: “In many professions, what we are seeing is that the information is overwhelming. I don’t know how doctors or lawyers or teachers keep up with the amount of things that are changing around them. The idea of tooling to help them makes sense to me.”

In medicine, the answer to information overload has been ever-greater specialization. But specialization can stand in the way of more holistic treatment of patients and of personalized medicine. Watson can help a highly specialized physician (or just about any other professional) see the bigger picture, but it can also help newcomers to the profession learn best practices and get answers to their questions.

Help or replace? At the end of his 2011 Jeopardy! contest with Watson, Ken Jennings added to his final response “I for one welcome our new computer overlords.” He later wrote: “When I was selected as one of the two human players… I envisioned myself as the Great Carbon-Based Hope against a new generation of thinking machines… ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”

IBM responds to the endless talk about “the rise of the machines” by emphasizing Watson’s “partnership” with humans and the way it “enhances” their work. As an example, Rhodin brought up IBM’s work with Genesys, a leading call center vendor, where Watson is used both to answer callers’ frequently asked questions and, when a call is escalated to a human, as agent-assist technology. Rometty is quoted by Walter Isaacson in his new book, The Innovators: “I watched Watson interact in a collegial way with the doctors. It was the clearest testament of how machines can truly be partners with humans rather than try to replace them.”
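
To make Rhodin’s Genesys example concrete, here is a hypothetical Python sketch of confidence-based escalation; the threshold, function names, and FAQ data are all invented for illustration and are not Genesys’s or IBM’s actual interfaces:

```python
# Hypothetical call-routing sketch -- not the Genesys or Watson API.

ESCALATION_THRESHOLD = 0.70  # assumed value, purely for illustration

def handle_call(question, answer_with_confidence, human_agents):
    """Answer directly when confident; otherwise assist a human agent."""
    answer, confidence = answer_with_confidence(question)
    if confidence >= ESCALATION_THRESHOLD:
        return f"Automated answer: {answer}"
    # Low confidence: escalate to a human, but pass the candidate answer
    # along so the same system now acts as agent-assist technology.
    agent = human_agents.pop(0)
    return f"Escalated to {agent} with suggested answer: {answer}"

faq = {"what are your hours?": ("We are open 9am-5pm.", 0.95)}
lookup = lambda q: faq.get(q.lower(), ("I'm not sure.", 0.20))
print(handle_call("What are your hours?", lookup, ["Dana"]))
print(handle_call("Why was my claim denied?", lookup, ["Dana"]))
```

The same answering machinery serves both roles; only the confidence score decides whether it speaks to the caller or advises the agent.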

In addition to age-old fears about automation and loss of jobs, there are other potential societal challenges to Watson and cognitive computing. One that Rhodin talked about is the need to educate the market that Watson was designed as a probabilistic, rather than a deterministic system. “Probabilistic systems are going to give you different answers in different times based on the best available information,” says Rhodin. “They are going to be based on a confidence level supported by evidence as opposed to a degree of certainty. Watson is giving you hypotheses with a confidence factor and these help you explore other avenues.”
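
Rhodin’s distinction is easy to show in miniature. In the sketch below (an invented data structure with made-up numbers, not Watson’s actual output format), a probabilistic system returns a ranked list of hypotheses, each with a confidence level and supporting evidence, rather than a single certain answer:

```python
# Illustrative sketch of probabilistic output -- invented structure,
# not Watson's actual result format.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    answer: str
    confidence: float               # degree of belief, not certainty
    evidence: list = field(default_factory=list)

def best_hypotheses(hypotheses, top_n=3):
    """Return ranked hypotheses instead of one 'true' answer."""
    return sorted(hypotheses, key=lambda h: h.confidence, reverse=True)[:top_n]

hypotheses = [
    Hypothesis("Treatment A", 0.62, ["2013 journal study", "patient history"]),
    Hypothesis("Treatment B", 0.27, ["case report"]),
    Hypothesis("Treatment C", 0.11, ["textbook chapter"]),
]
for h in best_hypotheses(hypotheses):
    print(f"{h.answer}: {h.confidence:.0%} (evidence: {', '.join(h.evidence)})")
```

Add a new piece of evidence tomorrow and the confidences shift, which is exactly why the same question can yield different answers at different times.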

Indeed, explaining to the public and to Watson’s users how it works and what to expect from it will require a concerted educational effort by IBM. People, including educated professionals, demand answers and certainty, not hypotheses, especially when they interact with technology and engage with science. Priyamvada Natarajan sums up this educational challenge in The New York Review of Books, questioning the degree to which people understand the scientific method and “whether they have an adequate sense of what a scientific theory is, how evidence for it is collected and evaluated, how uncertainty (which is inevitable) is measured, and how one theory can displace another, either by offering a more economical, elegant, honed, and general explanation of phenomena or, in the rare event, by clearly falsifying it…. In a word, the general public has trouble understanding the provisionality of science.”

Automation and augmentation of work can free us to engage in more interesting tasks, become more productive, or simply enjoy life more… as long as we don’t blindly rely on it and believe that the machine can “think” for us, completely replace us, or even have better judgment without us. In Smart Machines: IBM’s Watson and the Era of Cognitive Computing, John E. Kelly III (head of IBM’s research organization) and Steve Hamm state this position clearly: “The goal isn’t to replicate human brains… This isn’t about replacing human thinking with machine thinking. Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results, each bringing their own superior skills to the partnership.”

Still, while the goal “isn’t to replicate the human brain,” Kelly and Hamm devote an entire chapter to IBM’s TrueNorth chip. The language used to describe the effort is far from consistent (maybe Watson could have helped). Is it a “brain-inspired” chip? Or is it a “brain-based” chip? (“Based” means, at least to me, that we have a complete understanding of how the brain works.) And why lump Watson, TrueNorth, and attempts at computer simulation of the brain (e.g., European Union’s Brain Simulation Platform) together as “cognitive computing”?

These are not just some minor quibbles. A number of prominent academics have recently commented on the “brain-like” hype. Cognitive scientist and machine learning expert Michael Jordan: “We have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.” Deep Learning expert Andrew Ng agrees, stating at the EmTech event that “We don’t really know how the brain works.”

When you have a massive educational project on your hands, you’d better be very cautious, accurate, and consistent about your claims for a “new era” and what it represents. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, writes: “Watson was an impressive demonstration but it was narrowly targeted at Jeopardy and exhibited very little semantic understanding. Now Watson has become an IBM brand for any knowledge based activity they do. The intelligence is largely in their PR department.” It may well be that the IBM DNA, while providing a great blueprint for getting a message out and getting people excited about what the company does, is also leading IBM down the wrong path today.

In 1948, IBM staked out its first frontier homestead in New York, that one for the era of (just) computing. In late 1947, Thomas Watson Sr., IBM’s CEO at the time, “made a decision that forever altered the public perception of computers and linked IBM to the new generation of information machines,” writes Kevin Maney in The Maverick and His Machine: “He told the engineers to disassemble the SSEC [IBM’s Selective Sequence Electronic Calculator] and set it up in the ground floor lobby of IBM’s 590 Madison Avenue headquarters. The lobby was open to the public and its large windows allowed a view of the SSEC for the multitudes cramming the sidewalks on Madison and 57th Street. … The spectacle of the SSEC defined the public’s image of a computer for decades. Kept dust-free behind glass panels, reels of electronic tape ticked like clocks, punches stamped out cards and whizzed them into hoppers, and thousands of tiny lights flashed on and off in no discernable pattern… Pedestrians stopped to gawk and gave the SSEC the nickname ‘Poppy.’ … Watson took the computer out of the lab and sold it to the public.”

Watson understood that successful selling to the public was an important factor in the success of selling to businesses (today it’s called “thought leadership”). IBM has successfully continued to capitalize and improve on this tradition.

It may well be, however, that our times call for a somewhat different approach. IBM should extend and expand the brilliant Jeopardy! public relations coup, maybe even provide the public with free access to some of Watson’s capabilities (IBM already provides a cloud-based version of Watson to 10 universities in North America for their students to use in cognitive computing classes). At the same time, it’s probably best not to generate unnecessary hype and speculation, and not to indulge in grand visions of where computing may be going. After all, we’ve gotten used to surprising and useful new technologies coming from unexpected corners, technologies that succeed or fail based on the benefits they provide us. Google, Facebook, Baidu, and all the other companies investing in a new generation of artificial intelligence systems don’t talk about a new era.

What Watson has done so far is quite impressive, so why not stick to its achievements and avoid using vague language about a new era of computing? Isn’t Watson Oncology, providing medical diagnostics to parts of the world where access to modern medicine is limited, an impressive achievement all on its own?

It will be great to see many more similar achievements by IBM and its partners in the years to come. What’s required are long-term investments, the elimination of unnecessary hype, and no break-up of IBM. The abandonment of the profit road map first announced by Rometty’s predecessor is a giant leap on the road to reinvention.

[Originally Published on Forbes.com]

 

Posted in AI, IBM | 1 Comment

96% of World Population on Mobile and 40% on Internet by End of 2014

[Chart: mobile-cellular subscriptions worldwide]

The number of mobile-cellular subscriptions worldwide is approaching the number of people on Earth. Mobile-cellular subscriptions will reach almost 7 billion by the end of 2014, corresponding to a penetration rate of 96%. More than half of these (3.6 billion subscriptions) will be in the Asia-Pacific region.

By the end of 2014, there will be almost 3 billion Internet users (40% of global population), two-thirds of them coming from the developing world.

Over 50% of the global population will have Internet access within three years’ time.

Over 2.3 billion people will access mobile broadband by the end of 2014, climbing steeply to a predicted 7.6 billion within the next five years. There are now over three times as many mobile broadband connections as there are conventional fixed broadband subscriptions. The popularity of broadband-enabled social media applications continues to soar, with 1.9 billion people now active on social networks.
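
A quick back-of-the-envelope check of these figures (using an approximate mid-2014 world population of 7.3 billion, which is my assumption, not part of the ITU release):

```python
# Rough arithmetic behind the cited ITU figures.
# Note: the 96% counts subscriptions, not unique users --
# one person may hold several subscriptions.
world_population = 7.3e9      # approximate mid-2014 estimate (assumption)
mobile_subscriptions = 7.0e9
internet_users = 3.0e9

print(f"Mobile penetration:   {mobile_subscriptions / world_population:.0%}")  # ~96%
print(f"Internet penetration: {internet_users / world_population:.0%}")       # ~41%
print(f"Asia-Pacific share:   {3.6e9 / mobile_subscriptions:.0%}")            # ~51%
```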

Sources: Half the world will be online by 2017; The World in 2014

Posted in Data growth, Internet, Mobile, Stats | 2 Comments

Jeopardy champion Jennings on how a computer beat him at his own game (Video)

Jennings in Slate:

…there’s no shame in losing to silicon, I thought to myself as I greeted the (suddenly friendlier) team of IBM engineers after the match. After all, I don’t have 2,880 processor cores and 15 terabytes of reference works at my disposal—nor can I buzz in with perfect timing whenever I know an answer. My puny human brain, just a few bucks worth of water, salts, and proteins, hung in there just fine against a jillion-dollar supercomputer.

“Watching you on Jeopardy! is what inspired the whole project,” one IBM engineer told me, consolingly. “And we looked at your games over and over, your style of play. There’s a lot of you in Watson.” I understood then why the engineers wanted to beat me so badly: To them, I wasn’t the good guy, playing for the human race. That was Watson’s role, as a symbol and product of human innovation and ingenuity. So my defeat at the hands of a machine has a happy ending, after all. At least until the whole system becomes sentient and figures out the nuclear launch codes. But I figure that’s years away.

Posted in AI, IBM | Leave a comment

Michael Jordan on the coming big data winter and the state of machine learning

Great interview in IEEE Spectrum with machine learning expert, UC Berkeley professor, and IEEE Fellow Michael Jordan:

“…people continue to infer… that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.”

“There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.”

“…with big data, it will take decades, I suspect, to get a real engineering approach, so that you can say with some assurance that you are giving out reasonable answers and are quantifying the likelihood of errors.”
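
One standard way to “quantify the likelihood of errors” is to attach a confidence interval to an estimate rather than reporting a bare number. A minimal bootstrap sketch, my illustration of the point rather than anything from the interview:

```python
# Minimal percentile-bootstrap sketch: report an estimate with an error bar.
import random

def bootstrap_ci(data, stat=lambda xs: sum(xs) / len(xs),
                 n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic."""
    estimates = sorted(
        stat([random.choice(data) for _ in data])
        for _ in range(n_resamples)
    )
    return (estimates[int(alpha / 2 * n_resamples)],
            estimates[int((1 - alpha / 2) * n_resamples)])

sample = [random.gauss(100, 15) for _ in range(200)]
low, high = bootstrap_ci(sample)
print(f"Mean: {sum(sample) / len(sample):.1f}, 95% CI: ({low:.1f}, {high:.1f})")
```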

“The main [adverse consequences if we remain on the big data trajectory we are on] will be a ‘big-data winter.’ After a bubble, when people invested and a lot of companies overpromised without providing serious analysis, it will bust. And soon, in a two- to five-year span, people will say, ‘The whole big-data thing came and went. It died. It was wrong.’ I am predicting that. It’s what happens in these cycles when there is too much hype, i.e., assertions not based on an understanding of what the real problems are or on an understanding that solving the problems will take decades, that we will make steady progress but that we haven’t had a major leap in technical progress. And then there will be a period during which it will be very hard to get resources to do data analysis. The field will continue to go forward, because it’s real, and it’s needed. But the backlash will hurt a large number of important projects.”

Note that Jordan took issue with the title and the lead-in to the IEEE Spectrum article.

More on what needs to be done to avoid a big data winter is in Jordan’s Reddit AMA and in the Frontiers in Massive Data Analysis report from the US National Research Council’s Committee on the Analysis of Massive Data (which Jordan chaired).

 

Posted in Big Data Backlash, Data Science | 2 Comments

How Data Travels Around the World

Posted in Big Data Analytics, Data growth | Leave a comment

Is Privacy Becoming a Luxury Good? Julia Angwin Keynote at Strata + Hadoop 2014 (Video)

We are being watched – by companies, by the government, by our neighbors. Technology has made powerful surveillance tools available to everyone. And now some of us are investing in counter-surveillance techniques and tactics. Julia Angwin discusses how much she has spent trying to protect her privacy, and raises the question of whether we want to live in a society where only the rich can buy their way out of ubiquitous surveillance.

Julia Angwin is an award-winning investigative journalist at the independent news organization ProPublica. From 2000 to 2013, she was a reporter at The Wall Street Journal, where she led a privacy investigative team that was a Finalist for a Pulitzer Prize in Explanatory Reporting in 2011 and won a Gerald Loeb Award in 2010. Her book, Dragnet Nation: A Quest for Privacy, Security and Freedom in a World of Relentless Surveillance, was published by Times Books in 2014. In 2003, she was on a team of reporters at The Wall Street Journal that was awarded the Pulitzer Prize in Explanatory Reporting for coverage of corporate corruption. She is also the author of “Stealing MySpace: The Battle to Control the Most Popular Website in America” (Random House, March 2009).

Posted in Big Data Analytics, Big Data Backlash, Privacy | Leave a comment

Recruiting Data Scientists to Mine the Data Explosion

[Chart: the digital universe (via The Wall Street Journal)]

Wes Hunt, Chief Data Officer (CDO) at Nationwide Mutual Insurance Co., on recruiting data scientists:

Finding talent is my largest challenge. Someone who understands our business, who has quantitative skills, who has the technical skills to create the models, and who is able to persuade others that the insights they’ve come up with are ones you can trust and take action on. The hardest part is persuasion. You get the quantitative skills, but there’s a struggle in that ability to communicate effectively. We’ll often pair people together, but we’d really like to grow the talent.

When I was in marketing, we put a focus on liberal-arts-educated individuals, because abstract thinking where there are ambiguous data sets is an area where they are comfortable. Ph.D.s in psychology were a great recruiting pool. A psych Ph.D. has a fair amount of statistical training. We created a program to recruit Ph.D.s.

There’s not yet an educational discipline and curriculum that produces data scientists at the scale that would clear the market. So the way we’ve focused on it is to find people with innate curiosity and critical thinking. You can teach the other skills. On my team, I have a pathologist, a bioengineering student who trained in doing heart research, an M.B.A., and someone who is trained in traditional data architecture. I also have a landscape construction engineer and a psychology Ph.D.

 

Posted in Data growth, Data Science, Data Science Careers | 1 Comment