AI must consider culture and context—“training shapes learning”
“Many of the current algorithms have already built in them a country and a culture,” said Genevieve Bell, Intel Fellow and Director of Interaction and Experience Research at Intel. As today’s smart machines are (still) created and used only by humans, culture and context are important factors to consider in their development. Both Rana El Kaliouby (CEO of Affectiva, a startup developing emotion-aware AI) and Aparna Chennapragada (Director of Product Management at Google) stressed the importance of using diverse training data—if you want your smart machine to work everywhere on the planet, it must be attuned to cultural norms.
“Training shapes learning—the training data you put in determines what you get out,” said Chennapragada. And it’s not just culture that matters, but also context, as she illustrated with what she called the “I love you” problem: many personal conversations end this way, but the phrase should not be included in the training data for an AI-driven corporate email system.
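A toy sketch of Chennapragada’s point (purely illustrative; the phrase list and helper names below are invented, not any real system’s): the same model code produces very different suggestions depending on what is allowed into the training set, so a corporate email assistant would filter out personal sign-offs before training.

```python
# Hypothetical pre-filter for a corporate email reply-suggestion model.
PERSONAL_SIGNOFFS = {"i love you", "love you", "xoxo", "miss you"}

def is_appropriate_for_training(message: str) -> bool:
    """Reject messages whose closing line is a personal sign-off."""
    closing = message.strip().splitlines()[-1].lower().rstrip(".!")
    return closing not in PERSONAL_SIGNOFFS

raw_corpus = [
    "The Q3 numbers are attached.\nThanks,\nPriya",
    "Can't wait to see you tonight.\nI love you",
]

# Only the first message survives; since training shapes learning, the
# model never learns to suggest "I love you" as a reply at work.
training_corpus = [m for m in raw_corpus if is_appropriate_for_training(m)]
```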
Lili Cheng, Distinguished Engineer and General Manager with Microsoft Research, talked about Microsoft’s successful bot Xiaoice (40 million users in China and Japan) and its not-so-successful bot Tay (released on Twitter and taken down after being trained by Twitter users to spout inflammatory tweets). It turns out context matters—a public conversation (in Tay’s case) vs. a small-group conversation; and culture matters—a “very human-centric, man vs. machine” U.S. (Western?) culture as opposed to Asian culture where “you have ghosts and living trees.”
AI is not going to take all our jobs—“we are not going to run out of problems”
Tim O’Reilly enumerated all the reasons we will still have jobs in the future:
1. We are not going to run out of work because we are not going to run out of problems.
2. When some things become commodities, other things become more valuable. As AI turns more and more of what we do today into a commodity, we should expect that new things will become valuable—“rich economies indulge in things that appear to be useless, but are really all about status.”
3. Economic transformation takes time and effort (Amazon is still only 20% of Wal-Mart).
Similarly, Tom Davenport, professor at Babson College and co-founder of the International Institute for Analytics, pointed out that there were half a million bank tellers in the U.S. in 1980. The number in 2016? Also half a million bank tellers. “If you are building your career on wiping out a class of jobs, I hope you are very young because it takes a long time,” said Davenport in his conference presentation and in Only Humans Need Apply, the book he recently published on how we are going to add value to smart machines rather than being replaced by them. Don’t be too optimistic or too pessimistic about AI, he told the audience, just don’t be complacent.
President Obama agrees: “If properly harnessed, [AI] can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.”
AI is not going to kill us—“AI is going to empower us”
In “Designing AI Systems that Obey Our Laws and Values,” Oren Etzioni (writing with Amitai Etzioni) has suggested developing multiple AI systems that check and counterbalance one another. At the conference, Etzioni quoted Andrew Ng: “Working to prevent AI from turning evil is like disrupting the space program to prevent over-population on Mars.” And Rodney Brooks: “If you are worried about the terminator, just keep the door closed” (showing a photo of a robot failing to open a closed door). Etzioni concluded: “AI is not going to exterminate us, AI is going to empower us… A very real concern is AI’s impact on jobs. That’s what we should discuss, not terminator scenarios.”
A recent survey conducted by YouGov on behalf of the British Science Association, however, found that 36% of the British public believe that the development of AI poses a threat to the long-term survival of humanity. Asked “Why do so many well-respected scientists and engineers warn that AI is out to get us?” Etzioni responded: “It’s hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I’d have to guess that talking about black holes gets boring after a while—it’s a slowly developing topic.”
One way to fight boredom is to speak at the launch of new and ground-breaking research centers, as Hawking did recently. The £10 million Leverhulme Centre for the Future of Intelligence will explore “the opportunities and challenges of this potentially epoch-making technological development,” namely AI. According to The Guardian, Hawking said at the opening of the Centre, “We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.” No word if Hawking quoted Scott Adams (in The Dilbert Principle): “Everyone is an idiot, not just the people with low SAT scores. The only differences among us is that we’re idiots about different things at different times. No matter how smart you are, you spend much of your day being an idiot.”
AI isn’t magic and deep learning is a useful but limited tool—“a better ladder does not necessarily get you to the moon”
“Deep Learning is a bigger lever for data,” said Naveen Rao, cofounder and CEO of Nervana. “The part that seems to me ‘intelligent’ is the ability to find structure in data.” NVIDIA’s Jim McHugh was more expansive: “Deep learning is a new computing model.”
Both Rao and McHugh work for companies providing the hardware underlying deep learning. But for the people who write about deep learning it’s much more than a new computing model or a bigger lever for data. “The Google machine made a move that no human ever would… [It] so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence,” gushed Wired about AlphaGo. To which Oren Etzioni replied, also in Wired: “…the pundits often describe deep learning as an imitation of the human brain. But it’s really just simple math executed on an enormous scale.” And Tom Davenport added, at the conference: “Deep learning is not profound learning.”
In his talk, Etzioni suggested asking AlphaGo the following questions: Can you play again? (no, unless someone pushes a button); Can you play poker? (no); Can you cross the street? (no, it’s a narrowly targeted program); Can you tell us about the game? (no).
Deep learning, said Etzioni, is a “narrow machine learning technology that has achieved outstanding results on a series of narrow tasks like speech recognition or playing Go. It’s particularly effective when we have massive amounts of labeled data… Super-human performance on a narrow task does not translate to human-level performance in general… Machine learning today is 99% the work of humans.”
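Etzioni’s “simple math executed on an enormous scale” can be made concrete with a minimal sketch (toy dimensions invented for illustration): each layer of a deep network is just a matrix multiplication followed by a simple nonlinearity, and “depth” is the number of times that step is repeated; production models run the same arithmetic over billions of parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "deep" network: three layers of (matrix multiply + ReLU).
layer_sizes = [4, 8, 8, 2]          # input -> hidden -> hidden -> output
weights = [rng.normal(size=(m, n))  # one random weight matrix per layer
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x: np.ndarray) -> np.ndarray:
    """One forward pass: repeated linear algebra plus a max()."""
    for W in weights:
        x = np.maximum(x @ W, 0.0)  # matrix multiply, then ReLU
    return x

print(forward(rng.normal(size=4)))  # a 2-dimensional output vector
```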
Gary Marcus, professor of psychology and neural science at New York University and cofounder and CEO of Geometric Intelligence, also objects to the common description of deep learning as “mimicking the brain.” “Real neuroscience doesn’t look anything like the models we use,” argued Marcus. “There is a lot of complexity in the basic layout of the brain. There are probably a thousand different kinds of neurons in the brain; in deep learning there is one, maybe two. … The widespread commitment to neural networks with minimal instruction sets is utterly at odds with biology. … The core problem is an excessive love of parsimony.”
Referencing “Why does deep and cheap learning work so well?” Marcus observed that “a lot of smart people are convinced that deep learning is almost magical—I’m not one of them.” Deep learning, he explained, lacks ways of representing causal relationships; it has no obvious ways of performing logical inferences; and it is a long way from integrating abstract knowledge. “All of this is still true despite all the hype and billions of dollars invested. A better ladder does not necessarily get you to the moon,” said Marcus.
AI is Augmented Intelligence—“using the strengths of both humans and machines”
Tom Davenport, who devoted his presentation (and his recent book) to advising humans on how to race with the machines rather than against them, also had an important suggestion for organizations: establish a new position, that of the Chief Augmentation Officer. That executive should be in charge of picking the right AI technology for a specific task, designing work processes in which humans and machines work together and complement each other, and providing employees with the right options and the time to transition to them.
Tim O’Reilly suggested getting into a contest with the machine that will make both humans and machines excel. And Peter Norvig, in his list of AI safety problems, mentioned the challenge of “scalable oversight”—how and where to inject human oversight and expertise into what the AI system is doing.
Jay Wang and Jasmine Nettiksimmons, data scientists at Stitch Fix, a startup that uses artificial intelligence and human experts for a personalized shopping experience, talked about augmenting their recommendation algorithm with human stylists. “Having a human in the loop allows us to more holistically leverage unstructured data,” they said. Humans are better at ingesting customers’ online notes or Pinterest boards and understanding their meaning, thus improving customer relations and freeing the algorithm from having to anticipate edge cases. “We are trying to use the strengths of both humans and machines for an optimal result,” concluded Wang and Nettiksimmons.
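What “a human in the loop” can look like in code is sketched below, under invented assumptions rather than Stitch Fix’s actual pipeline: a model scores and ranks candidate items from structured data, and a stylist makes the final pick from that shortlist using unstructured context such as a customer’s notes, so the model never has to anticipate every edge case.

```python
from typing import Callable

# Hypothetical candidate items and scores; in a real system the scores
# would come from a trained recommendation model over structured data.
candidate_scores = {"striped blouse": 0.91, "denim jacket": 0.87,
                    "floral dress": 0.84, "wool scarf": 0.55}

def shortlist(scores: dict, k: int = 3) -> list:
    """Machine step: rank candidates and keep the top k."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def final_pick(options: list, stylist: Callable) -> str:
    """Human step: a stylist chooses from the shortlist, informed by
    unstructured context (customer notes, Pinterest boards, etc.)."""
    return stylist(options)

# Example: after reading a note about a garden party, the stylist
# overrides the top-scored item in favor of the floral dress.
choice = final_pick(shortlist(candidate_scores),
                    stylist=lambda opts: "floral dress")
```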
AI changes how we interact with computers—and it needs a dose of empathy
“We are approaching a tipping point where speech user interfaces are going to change the entire balance of power in the technology industry,” Tim O’Reilly wrote recently.
More specifically, we need to “rethink the basic fundamentals of navigation through conversation,” said Microsoft’s Lili Cheng. The “Back” and “Home” buttons are critical for every system we use today but in a conversation, “back, back” feels “really weird.” Cheng talked about conversations as waves, “always going forward,” and as such they are very different from a user controlling a desktop. To get AI to better resemble the way people think about the world around them (see LeCun above), we could use conversations as “a great test case,” said Cheng.
Part of understanding the world as humans do is to understand (or at least detect) human emotion. To that end, Affectiva has amassed the world’s largest emotion data repository and has analyzed 4.7 million faces and 50 billion emotion data points from 75 countries. Their vision is to embed real-time emotion sensing and analytics in devices, apps, and digital experiences. “People are building relations with their digital companions but right now these companions do not have empathy,” said CEO Rana El Kaliouby.
Emotion is a burgeoning AI field—see also Microsoft’s Emotion API or the work of Maja Pantic at Imperial College—but I would suggest that all its practitioners stick to “empathy” rather than “emotion” so as not to confuse the masses (and themselves?) about the true capabilities (and human-like qualities) of AI today.
AI should graduate from the Turing Test to smarter tests
Gary Marcus complained about paying too much attention to short-term progress instead of trying to solve the “really hard problems.” There has been exponential progress in some areas, but in strong, general artificial intelligence, “there has been almost no progress.” He urged the AI community to pursue more ambitious goals—“the classic Turing test is too easily gamed,” he averred. Instead, Marcus suggested “The Domino’s Test”: deliver a pizza to an arbitrary location with a drone or a driverless car as well as an average teenager could.
LeCun mentioned another test of “intelligence” or natural language understanding—the Winograd Schema—as a measure of the machine’s knowledge of how the world works. Etzioni gave two examples of Winograd Schema: “The large ball crashed right through the table because it was made of styrofoam” and “The large ball crashed right through the table because it was made of steel.” What does “it” refer to? This is pronoun resolution that a 7-year-old can do, said Etzioni, adding “common-sense knowledge and tractable reasoning are necessary for basic language understanding.”
A few months ago, Nuance Communications sponsored the first round of the Winograd Schema Challenge, an alternative to the Turing Test. The results: machines were 58.33% correct in their pronoun resolution, compared to humans at 90.9% accuracy.
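To make the task concrete, here is a minimal sketch of how such a challenge can be scored, using Etzioni’s two example sentences; the “resolver” is a deliberately naive stand-in, not the actual challenge harness or any real system.

```python
# Each schema pairs a sentence with the correct referent of the pronoun "it";
# accuracy is simply the fraction of pronouns resolved correctly.
schemas = [
    {"sentence": "The large ball crashed right through the table "
                 "because it was made of styrofoam.",
     "candidates": ["the ball", "the table"], "answer": "the table"},
    {"sentence": "The large ball crashed right through the table "
                 "because it was made of steel.",
     "candidates": ["the ball", "the table"], "answer": "the ball"},
]

def naive_resolver(sentence: str, candidates: list) -> str:
    return candidates[0]  # blind guess: no common-sense knowledge at all

correct = sum(naive_resolver(s["sentence"], s["candidates"]) == s["answer"]
              for s in schemas)
print(f"accuracy: {correct / len(schemas):.0%}")  # 50%, roughly chance --
# which is why the gap between machines (58.33%) and humans (90.9%) matters
```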
AI According to Winston Churchill
Peter Norvig: “You can say about machine learning what Winston Churchill said about democracy—it is the worst possible system except all the others that have been tried.”
Oren Etzioni: “To paraphrase Winston Churchill—deep learning is not the end, it’s not the beginning of the end, it’s not even the end of the beginning.”
AI may still be hampered by a futile search for human-level intelligence while locked into a materialist paradigm
Gary Marcus complained about research papers presented at the Neural Information Processing Systems (NIPS) conference, saying that they are like alchemy, adding a layer or two to a neural network, “a little fiddle here or there.” Instead, he suggested “a richer base of instruction set of basic computations,” arguing that “it’s time for genuinely new ideas.”
When asked “when will we see human-level AI?” Etzioni answered “I have no clue.” It turns out that when answering his own survey of AI experts about when we will see human-level AI, he said it would be “more than 25 years.” He explained: “I’m a materialist, I believe in a world made of atoms, therefore I’m not in the ‘never’ camp.”
That thoughts (and “intelligence”) are produced only by atoms and are “computable” has been a dominant paradigm before and after Edmund Berkeley wrote at the dawn of the computer age in Giant Brains, or Machines That Think (1949): “Recently there has been a good deal of news about strange giant machines that can handle information with vast speed and skill… These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.” Thirty years later, Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.”
Is it possible that this paradigm—and the driving ambition at its core to play God and develop human-like machines—has led to the infamous “AI Winter”? And that continuing to adhere to it and refusing to consider “genuinely new ideas,” out-of-the-dominant-paradigm ideas, will lead to yet another AI Winter? Maybe, just maybe, our minds are not computers and computers do not resemble our brains? And maybe, just maybe, if we finally abandon the futile pursuit of replicating “human-level AI” in computers, we will find many additional, albeit “narrow,” applications of computers to enrich and improve our lives?
To continue following this fascinating and exciting stage in the life of artificial intelligence, you can watch excerpts from the keynotes at the O’Reilly AI conference here and download presentation slides here, attend the next O’Reilly AI conference in New York, June 27-29, 2017 or sign up for the O’Reilly AI newsletter.
Originally published on Forbes.com