The Wall Street Journal reported recently that a Google robot became “exasperated” and ended a conversation “by lashing out at its human inquisitor.” And Quartz reported that “Google engineers sought out to see what…artificial networks ‘dream’ of” and discovered “the subconscious of an AI” and what “an AI might see on the computing equivalent of an acid trip.”
The WSJ reporter, Amir Mizroch, informs us that “this specific work is part of recent developments in what’s known as cognitive computing – teaching computers to mimic some of the ways a human brain works.” The conversation that drew his attention to the “attitude” of the brain-mimicking chatbot ends like this:
Human: What is integrity?
Machine: I’m sorry; I don’t know what else to say.
Human: Be moral!
Machine: Be a man!
Human: I really like our discussion on morality and ethics…
Machine: And how I’m not in the mood for a philosophical debate.
Human: What do you like to talk about?
Machine: Nothing.
Mizroch also provides a link to the full research paper, where you can find out more about the “attitude” of the machine:
Human: what is your job?
Machine: I’m a lawyer.
Human: what do you do?
Machine: I’m a doctor.
Human: what is the usual color of a leaf?
Machine: it is a green one.
Human: what is the color of a yellow car?
Machine: yellow.
Human: how much is two plus two?
Machine: four.
Human: how much is ten minus two?
Machine: seventy-two.
Obviously, the machine got upset that the human with an attitude repeated the question about its occupation or dared question its mathematical skills. Or maybe not… If you read the paper, you find out that two members of Google Research presented to other machine learning researchers the results of a model that “converses by predicting the next sentence given the previous sentence or sentences in a conversation.” So the “attitude” is clearly a calculation (“prediction”) by the model of the “correct” answer based on what the human interlocutor said before.
The researchers thought that the “modest results” of their research were worthy of communicating to other researchers because “the model can generalize to new questions. In other words, it does not simply look up for an answer by matching the question with the existing database.” How well it generalizes is another question. The model is reported to perform better than traditional rule-based chatbots, but the paper’s authors say that “an outstanding research problem is on how to objectively measure the quality of models.” No word about measuring attitude or emotions, of course, as anthropomorphizing the chatbot was not part of the researchers’ agenda and is nowhere to be found in their paper.
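To see why “attitude” is just arithmetic, it helps to strip the idea down to its bones. The model predicts a reply one token at a time, each token conditioned on what came before. The sketch below is a deliberately tiny stand-in — a word-bigram lookup rather than a neural network, trained on an invented three-line corpus — meant only to illustrate “predicting the next token given the context” as pure calculation; every name and example string here is made up for illustration, not taken from the paper.

```python
from collections import defaultdict

# Toy "training" dialogue pairs (invented for illustration).
pairs = [
    ("what is your job", "i am a lawyer"),
    ("what do you do", "i am a doctor"),
    ("what is the usual color of a leaf", "it is a green one"),
]

# Count word bigrams across each prompt -> reply sequence, so a reply
# can later be "predicted" one token at a time from what came before.
bigrams = defaultdict(lambda: defaultdict(int))
for prompt, reply in pairs:
    tokens = prompt.split() + ["<sep>"] + reply.split() + ["<eos>"]
    for a, b in zip(tokens, tokens[1:]):
        bigrams[a][b] += 1

def respond(prompt, max_len=10):
    """Greedily emit the most likely next token until <eos>."""
    out = []
    last = prompt.split()[-1]
    # Seed the context with the prompt's last word if it was ever seen,
    # otherwise fall back to the generic separator token.
    prev = last if bigrams[last] else "<sep>"
    for _ in range(max_len):
        if not bigrams[prev]:
            break
        nxt = max(bigrams[prev], key=bigrams[prev].get)
        if nxt == "<eos>":
            break
        if nxt != "<sep>":
            out.append(nxt)
        prev = nxt
    return " ".join(out)

print(respond("what is your job"))         # → i am a lawyer
print(respond("what is your occupation"))  # → i am a lawyer
```

Note that even this toy produces a confident answer to “what is your occupation,” a question it never saw, without matching anything in a database — a caricature of the “generalization” the paper describes, and a reminder that a fluent answer signals statistics, not attitude.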
Similarly, their Google Research colleagues did not investigate AI dreams or its subconscious. Echoing the statement above about measuring the quality of models, they state up-front that artificial neural networks “are very useful tools based on well-known mathematical methods, [but] we actually understand surprisingly little of why certain models work and others don’t.” To improve their understanding, they decided to visualize what’s going on in different layers of the neural network when it goes through the process of image recognition, by reversing the process: “ask it to enhance an input image in such a way as to elicit a particular interpretation.”
To their “surprise,” the researchers found out that “neural networks that were trained to discriminate between different kinds of images have quite a bit of the information needed to generate images too.” That’s the extent of the discovery of electric dreams and the subconscious of a computer.
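The mechanics behind the “dream” images are ordinary gradient ascent: start from some input, compute how a chosen unit’s activation changes with each pixel, and nudge the input in the direction that makes the unit fire harder. Here is a minimal NumPy sketch with a single hand-built linear “filter” standing in for a trained layer — the weights, starting “image,” and step size are all invented for illustration, not drawn from the Google work.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in for one learned filter in a trained network:
# a fixed weight vector. The unit's activation is w . x.
w = rng.normal(size=16)

def activation(x):
    return float(w @ x)

# "Dreaming": enhance the input so that whatever this unit responds
# to shows up more strongly. For a linear unit, the gradient of
# (w . x) with respect to x is simply w itself.
x = rng.normal(size=16)          # start from a noise "image"
before = activation(x)
for _ in range(100):
    x += 0.1 * w                 # one gradient-ascent step
after = activation(x)

print(before, "->", after)       # the activation only grows
```

In the real system the gradient flows back through many nonlinear layers via backpropagation, and regularizers keep the result looking like a natural image; but the hill-climb itself is no more subconscious than this loop.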
But journalists continue to dream about conscious, “brain-like” machines, to the dismay of prominent researchers. UC Berkeley’s Michael Jordan, a cognitive scientist and machine learning expert, told IEEE Spectrum last year:
…on all academic topics there is a lot of misinformation. The media is trying to do its best to find topics that people are going to read about. Sometimes those go beyond where the achievements actually are. Specifically on the topic of deep learning, it’s largely a rebranding of neural networks, which… go back to the 1960s; it seems like every 20 years there is a new wave that involves them. In the current wave, the main success story is the convolutional neural network, but that idea was already present in the previous wave. And one of the problems with both the previous wave, that has unfortunately persisted in the current wave, is that people continue to infer that something involving neuroscience is behind it, and that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.
Earlier this year, NYU’s computer scientist Yann LeCun, who is also head of Facebook’s Artificial Intelligence Lab, voiced a similar concern: “My least favorite description [in the press of Deep Learning] is, ‘It works just like the brain.’ I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does… AI has gone through a number of AI winters because people claimed things they couldn’t deliver.”
When asked by IEEE Spectrum’s Lee Gomes, “if you were a reporter covering a Deep Learning announcement, and had just eight words to describe it, which is usually all a newspaper reporter might get, what would you say?” LeCun answered: “I think it would be ‘machines that learn to represent the world.’”
Not exactly a title that would generate a lot of Web traffic. But I don’t think Web traffic ambitions are what makes reporters and their editors anthropomorphize machines. There was no Web in the 19th century when Charles Babbage came up with a design for a steam-powered calculating machine. His contemporaries immediately referred to it as a “thinking machine.” In 1831, the United Service magazine called Babbage’s Difference Engine “a piece of machinery which approaches nearer to the results of human intelligence than any other… and which constitutes a wonder of the world.”
Calculating machines were not only deemed intelligent, but were also seen as a threat and a challenge to humans, just as robots and “intelligent machines” are today. In 1944, Richard Feynman, then a junior staff member at Los Alamos, organized a contest between human computers and the Los Alamos IBM facility, with both performing a calculation for the plutonium bomb.
For two days, the human computers kept up with the machines. “But on the third day,” recalled an observer, “the punched-card machine operation began to move decisively ahead, as the people performing the hand computing could not sustain their initial fast pace, while the machines did not tire and continued at their steady pace.” (See David Alan Grier, When Computers Were Human.) When modern computers arrived on the scene, they were immediately referred to as “giant brains,” capable of thinking like humans.
Some of us dream about creating machines in our image, of becoming gods. Others share the dream, but for them it’s more like a nightmare. Either way, writers should not engage in science fiction unless they are science fiction writers.
Originally published on Forbes.com