Robots Inspired by Animals and Rodney Brooks's Inspiration


Machine Man: Rodney Brooks, Boston Magazine, November 2014:

After earning bachelor's and master's degrees in mathematics from Flinders University in Adelaide by 1977, [Rodney Brooks, former director of the Artificial Intelligence Lab at MIT and founder of iRobot and Rethink Robotics] left Australia for the first time, and in 1981 earned a Ph.D. in computer science at Stanford. He married his first wife (currently an information systems lecturer at Suffolk University), and then headed east, eventually landing a teaching job in MIT's Artificial Intelligence Lab.

In the summer of 1985, while visiting his first wife's family in a village in Thailand, Brooks found himself housebound: He didn't speak Thai, they didn't speak English, and there was some concern for the young professor's safety. Without much to do, Brooks sat and thought and watched mosquitoes buzz around him, which led him to wonder how mosquitoes could move so efficiently even though they were such simple creatures, with only a few hundred thousand neurons. (Human brains, by contrast, have about 86 billion neurons.) The robots being created in labs like Brooks's at MIT used vast amounts of computational power, yet they were still dumber than insects.

At the time, robots were designed with sensors that fed information through a transmitter to a mainframe computer; the mainframe processed the data, figured out what the robot needed to do, and sent a command back through the transmitter to the robot, which executed it. Needless to say, those robots were really slow. Throw a few obstacles in their way, and they could take hours to move a few meters.

Brooks’s revolutionary idea, which he sketched out on paper at his in-laws’ house, was to simplify. Don’t build a map of the robot’s world and dump all that data into its microprocessors just to keep it from running into a wall. Instead, give the robot one rule: “Move your legs,” for instance. Then give it another: “Don’t run into stuff.” Then give it something complicated, like, “Explore.” With Brooks’s simplified approach, the robot wouldn’t need to know everything about the world before starting out. It would do simple stuff in simple ways, just like a mosquito.
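To make the contrast concrete, here is a minimal sketch in Python of that layered, behavior-based idea, which Brooks would later develop into the subsumption architecture. The behavior names, sensor readings, and simple priority arbitration below are illustrative assumptions for this sketch, not Brooks's actual design or code:

    # Sketch of a layered, behavior-based controller: each layer is a simple
    # rule that reads raw sensor values and may propose an action. There is
    # no world model and no central planner; a higher-priority layer simply
    # overrides the ones below it. (Hypothetical sensor fields and behaviors.)

    def walk(sensors):
        """Lowest layer: just keep the legs moving."""
        return "step forward"

    def explore(sensors):
        """Higher layer: occasionally pick a new heading to wander."""
        if sensors["steps_since_turn"] > 100:   # hypothetical counter
            return "pick new heading"
        return None                             # no opinion; defer to lower layers

    def avoid(sensors):
        """Safety layer: don't run into stuff."""
        if sensors["range_cm"] < 20:            # hypothetical proximity reading
            return "turn away"
        return None

    # Layers ordered from highest to lowest priority.
    LAYERS = [avoid, explore, walk]

    def control_step(sensors):
        """One tick of the control loop: the first layer with an opinion wins."""
        for behavior in LAYERS:
            action = behavior(sensors)
            if action is not None:
                return action

    if __name__ == "__main__":
        print(control_step({"range_cm": 12, "steps_since_turn": 5}))   # -> "turn away"
        print(control_step({"range_cm": 80, "steps_since_turn": 5}))   # -> "step forward"

The point of the design is that each behavior reacts directly to raw sensor values rather than waiting for a central computer to build and update a model of the world, which is what made this style of control so much faster than the mainframe pipeline described above.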

Brooks says that his contemporaries considered his idea naive. “They thought we were playing with toys,” he says of the insectlike robots he subsequently built in the MIT lab. Well, they did look like toys. And yet one of those “toys,” built in 1988, was perhaps the most important robot ever made. It had six legs and could walk, climb, and move at a speed few other robots had reached up to that point. Its name was Genghis.

Brooks talks about Genghis like a guy talking about his high school football glory days. His eyes widen to almost Baxterlike cartoonishness, his Australian accent getting more pronounced as he raises his voice. Genghis was radical because it was endowed with basic artificial intelligence so it could adapt to its environment. That tiny design enhancement sparked a programming revolution—nearly all robots are now built that way—which is why Genghis spent a decade on display at the Smithsonian’s National Air and Space Museum, not far from the Apollo 11 command module and Orville and Wilbur Wright’s Flyer.

The impact of Brooks’s breakthrough—which made robots ever cheaper, smarter, and easier to produce—has been profound. In hospitals, robots made by Aethon shuttle bed sheets to and from laundry facilities. In Amazon warehouses, robots made by Kiva Systems ferry shelves full of products to and from the people who prepare them for shipping. In Afghanistan, iRobot’s PackBot scuttles around disposing of bombs, rendering The Hurt Locker’s life-risking approach obsolete.

While all of these robots were influenced by Genghis, they don’t look like insects. Nor do they look humanoid, like Baxter. And many of them fail to give humans the cues needed to interact with them. That’s a fundamental concept missing from Google’s self-driving, robotic car, for example, a shortcoming that infuriates Brooks. You can’t make eye contact with it, so how do you know that it sees you? “You can make the assumption that most human drivers are not out to kill pedestrians,” Brooks says. “Well, maybe in some parts of Boston they are. But with a person at the wheel who you can see, you behave accordingly. With the robotic car, how do you know what assumption to make?”