The VC Leagues: Top Unicorn Hunters and Gatherers


CB Insights:

Sequoia Capital continues to retain its spot as the top investor in unicorns, with 17 currently in its portfolio, including Instacart, Airbnb, and Square. Kleiner Perkins and Andreessen Horowitz were tied for second place with 15 unicorn companies each. Some other interesting points:

  • 4 of the top 8 investors in unicorns are large, typically public-market investors (mutual funds, bulge-bracket investment banks, etc.). Goldman Sachs, Fidelity Investments, Wellington Management, and T. Rowe Price all have 12 or more US unicorns in their portfolios.
  • There are now 43 institutional investors with at least 5 US unicorns in their portfolio, compared to 14 when we did this analysis in early March.

The proportion of unicorn investments that were early-stage offers another way to gauge the firms’ relative investment prowess… Some of the best investors on this list are SV Angel (12 unicorns with 9 investments at the early stage), Y Combinator (6 unicorns and 6 early-stage investments in unicorns), as well as First Round Capital, Benchmark Capital, and Lowercase Capital.
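
As a rough illustration of that yardstick, here is a minimal Python sketch (mine, not CB Insights’ methodology) that computes the early-stage share of a firm’s unicorn portfolio from the figures quoted above:

```python
# Early-stage share of unicorn investments, using the figures quoted above.
# The two data points come from the CB Insights excerpt; everything else
# here is purely illustrative.

portfolios = {
    "SV Angel":     {"unicorns": 12, "early_stage": 9},
    "Y Combinator": {"unicorns": 6,  "early_stage": 6},
}

for firm, p in portfolios.items():
    share = p["early_stage"] / p["unicorns"]
    print(f"{firm}: {p['early_stage']}/{p['unicorns']} backed early = {share:.0%}")
```

By that measure SV Angel comes out at 75 percent and Y Combinator at 100 percent, which is the sense in which they top this particular list.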


A Boom or a Bubble? VC Financing Trends


CB Insights

Venture capital (VC)-backed companies raised more than US$32 billion in Q2 2015 across 1,819 deals, bringing the total raised by VC-backed companies globally to a staggering $59.8 billion for the first half of 2015, according to Venture Pulse Q2 ’15, the first in a quarterly VC report series from KPMG International and VC data company CB Insights. The surge in funding in Q2 represents a 49 percent increase over the first two quarters of 2014.
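
The arithmetic behind those headline numbers is easy to sanity-check. A minimal sketch, assuming (the excerpt itself does not spell this out) that the 49 percent figure compares first-half 2015 with first-half 2014:

```python
# Back-of-the-envelope check of the funding figures quoted above.
# Assumption (mine, not the report's): the 49% increase compares global
# H1 2015 funding against global H1 2014 funding.

h1_2015 = 59.8            # US$ billions raised by VC-backed companies, H1 2015
growth  = 0.49            # quoted increase over the same period of 2014

implied_h1_2014 = h1_2015 / (1 + growth)
print(f"Implied H1 2014 baseline: ~${implied_h1_2014:.1f}B")   # roughly $40B
```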

[In the U.S.], after a multi-year high of $56.4B in 2014, 2015 is on track to reach five-year highs with $36.9B already invested in the first half of the year.

The Economist

The enormous, disruptive creativity of Silicon Valley is unlike anything since the genius of the great 19th-century inventors. Its triumph is to be celebrated. But the accumulation of so much wealth so fast comes with risks. The 1990s saw a financial bubble that ended in a spectacular bust. This time the danger is insularity. The geeks live in a bubble that seals off their empire from the world they are doing so much to change.

The American economy would be hit hard by a repeat of the financial shock that followed the dotcom crash in 2000. With the NASDAQ index near its record high, this is a common fear. Fortunately, although money and talent are pouring into the Valley, there is not yet much danger of a disastrous bust. That is because tech companies today not only have more robust business models than their dotcom predecessors did (ie, many actually make money), but they also rely on a smaller group of financial backers.


Google Robots Dream Psychedelic Dreams and Lash Out at Humans

The Wall Street Journal reported recently that a Google robot became “exasperated” and ended a conversation “by lashing out at its human inquisitor.” And Quartz reported that “Google engineers sought out to see what…artificial networks ‘dream’ of” and discovered “the subconscious of an AI” and what “an AI might see on the computing equivalent of an acid trip.”


From Google’s “Inceptionism Gallery”

The WSJ reporter, Amir Mizroch, informs us that “this specific work is part of recent developments in what’s known as cognitive computing – teaching computers to mimic some of the ways a human brain works.”  The conversation that drew his attention to the “attitude” of the brain-mimicking chatbot ends like this:

Human: What is integrity?

Machine: I’m sorry; I don’t know what else to say.

Human: Be moral!

Machine: Be a man!

Human: I really like our discussion on morality and ethics…

Machine: And how I’m not in the mood for a philosophical debate.

Human: What do you like to talk about?

Machine: Nothing.

Mizroch also provides a link to the full research paper, where you can find out more about the “attitude” of the machine:

Human: what is your job?

Machine: I’m a lawyer.

Human: what do you do?

Machine: I’m a doctor.

And…

Human: what is the usual color of a leaf?

Machine: it is a green one.

Human: what is the color of a yellow car?

Machine: yellow.

Human: how much is two plus two?

Machine: four.

Human: how much is ten minus two?

Machine: seventy – two.

Obviously, the machine got upset that the human with an attitude repeated the question about its occupation or dared question its mathematical skills. Or maybe not… if you read the paper, you find out that two members of Google Research presented to other machine learning researchers the results of a model that “converses by predicting the next sentence given the previous sentence or sentences in a conversation.” So the “attitude” is clearly a calculation (“prediction”) by the model of the “correct” answer based on what the human interlocutor said before.
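
To see why the “attitude” is just statistics, it helps to look at a toy version of next-sentence prediction. The sketch below is nowhere near the recurrent sequence-to-sequence network described in the paper; it simply returns, for a new prompt, the reply most often seen after the most similar training prompt. That is enough to reproduce a snappy “Be a man!” without any mood being involved:

```python
from collections import defaultdict, Counter

# Toy "conversational model": learn which reply tends to follow which prompt,
# then answer a new prompt with the reply most often seen after the most
# word-similar training prompt. A caricature of next-sentence prediction,
# not the seq2seq network from the paper; the tiny corpus is made up here
# to mirror the exchanges quoted above.

training_dialogue = [
    ("what is your job ?", "i 'm a lawyer ."),
    ("what do you do ?", "i 'm a doctor ."),
    ("be moral !", "be a man !"),
    ("what do you like to talk about ?", "nothing ."),
]

reply_counts = defaultdict(Counter)
for prompt, reply in training_dialogue:
    reply_counts[frozenset(prompt.split())][reply] += 1

def respond(prompt: str) -> str:
    """Return the most frequent reply to the most word-similar known prompt."""
    words = set(prompt.lower().split())
    best_prompt = max(reply_counts, key=lambda known: len(known & words))
    return reply_counts[best_prompt].most_common(1)[0][0]

print(respond("be moral !"))           # -> "be a man !"
print(respond("what is your job ?"))   # -> "i 'm a lawyer ."
```

A model trained on millions of movie-subtitle exchanges is vastly more sophisticated, but the principle is the same: the reply is a prediction conditioned on what was said before, not an emotion.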

The researchers thought that the “modest results” of their research were worthy of communicating to other researchers because “the model can generalize to new questions. In other words, it does not simply look up for an answer by matching the question with the existing database.” How well it is “generalizing” is another question. The model is reported to perform better than traditional rule-based chatbots, but the paper’s authors say that “an outstanding research problem is on how to objectively measure the quality of models.” No word about measuring attitude or emotions, of course, as anthropomorphizing the chatbot was not part of the researchers’ agenda and is nowhere to be found in their paper.

Similarly, their Google Research colleagues did not investigate AI dreams or its subconscious. Echoing the statement above about measuring the quality of models, they state up-front that artificial neural networks “are very useful tools based on well-known mathematical methods, [but] we actually understand surprisingly little of why certain models work and others don’t.” To improve their understanding, they decided to visualize what’s going on in different layers of the neural network when it goes through the process of image recognition, by reversing the process: “ask it to enhance an input image in such a way as to elicit a particular interpretation.”
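
In today’s frameworks, that “reverse the recognizer” trick amounts to gradient ascent on the input image. Here is a minimal sketch of the idea, assuming PyTorch and torchvision are available; the layer choice and hyperparameters are arbitrary, and this is an illustration of the technique, not Google’s code:

```python
import torch
import torchvision.models as models

# Nudge the pixels of an input image so that a chosen layer's activations
# grow, i.e. amplify whatever the network "sees" there. Illustrative only.

model = models.googlenet(weights="DEFAULT").eval()   # downloads pretrained weights
for param in model.parameters():
    param.requires_grad_(False)

activations = {}
def capture(_module, _inputs, output):
    activations["layer"] = output

# Arbitrary choice of an intermediate Inception block to amplify.
model.inception4c.register_forward_hook(capture)

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    model(image)                            # forward pass fills `activations`
    loss = -activations["layer"].norm()     # minimize the negative = ascend
    loss.backward()
    optimizer.step()

# `image` now leans toward patterns that excite the chosen layer, the same
# idea behind the dream-like pictures in the Inceptionism gallery.
```

Start from a photograph instead of random noise, add some smoothing and multi-scale tricks, and you get images like the one shown above.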

To their “surprise,” the researchers found out that “neural networks that were trained to discriminate between different kinds of images have quite a bit of the information needed to generate images too.” That’s the extent of the discovery of electric dreams and the subconscious of a computer.

But journalists continue to dream about conscious, “brain-like” machines, to the dismay of prominent researchers. UC Berkeley’s Michael Jordan, a cognitive scientist and machine learning expert, told IEEE Spectrum last year:

…on all academic topics there is a lot of misinformation. The media is trying to do its best to find topics that people are going to read about. Sometimes those go beyond where the achievements actually are. Specifically on the topic of deep learning, it’s largely a rebranding of neural networks, which… go back to the 1960s; it seems like every 20 years there is a new wave that involves them. In the current wave, the main success story is the convolutional neural network, but that idea was already present in the previous wave. And one of the problems with both the previous wave, that has unfortunately persisted in the current wave, is that people continue to infer that something involving neuroscience is behind it, and that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.

Earlier this year, NYU computer scientist Yann LeCun, who also heads Facebook’s Artificial Intelligence lab, voiced a similar concern: “My least favorite description [in the press of Deep Learning] is, ‘It works just like the brain.’ I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does… AI has gone through a number of AI winters because people claimed things they couldn’t deliver.”

When asked by IEEE Spectrum’s Lee Gomes “if you were a reporter covering a Deep Learning announcement, and had just eight words to describe it, which is usually all a newspaper reporter might get, what would you say?” LeCun answered: “I think it would be ‘machines that learn to represent the world.’”

Not exactly a title that would generate a lot of Web traffic. But I don’t think Web traffic ambitions are what makes reporters and their editors anthropomorphize machines. There was no Web in the 19th century when Charles Babbage came up with a design for a steam-powered calculating machine. His contemporaries immediately referred to it as a “thinking machine.” In 1831, the United Service magazine called Babbage’s Difference Engine “a piece of machinery which approaches nearer to the results of human intelligence than any other… and which constitutes a wonder of the world.”

Calculating machines were not only deemed intelligent, but were also seen as a threat and a challenge to humans, just like robots and “intelligent machines” are today. In 1944, Richard Feynman, then a junior staff member at Los Alamos, organized a contest between human computers and the Los Alamos IBM facility, with both performing a calculation for the plutonium bomb.

For two days, the human computers kept up with the machines. “But on the third day,” recalled an observer, “the punched-card machine operation began to move decisively ahead, as the people performing the hand computing could not sustain their initial fast pace, while the machines did not tire and continued at their steady pace.” (See David Alan Grier, When Computers Were Human.) When modern computers arrived on the scene, they were immediately referred to as “giant brains,” capable of thinking like humans.

Some of us dream about creating machines in our image, of becoming gods. Others share the dream, but for them it’s more like a nightmare. Either way, writers should not engage in science fiction unless they are science fiction writers.

Originally published on Forbes.com


Top 10 Programming Languages 2015

Note: Left column shows 2015 ranking; right column shows 2014 ranking.

Source: IEEE Spectrum

The big five—Java, C, C++, Python, and C#—remain on top, with their ranking undisturbed, but C has edged to within a whisper of knocking Java off the top spot. The big mover is R, a statistical computing language that’s handy for analyzing and visualizing big data, which comes in at sixth place. Last year it was in ninth place, and its move reflects the growing importance of big data to a number of fields. A significant amount of movement has occurred further down in the rankings, as languages like Go, Perl, and even Assembly jockey for position…

A number of languages have entered the rankings for the first time. Swift, Apple’s new language, has already gained enough traction to make a strong appearance despite being released only 13 months ago. Cuda is another interesting entry—it’s a language created by graphics chip company Nvidia that’s designed for general-purpose computing using the company’s powerful but specialized graphics processors, which can be found in many desktop and mobile devices. Seven languages in all are appearing for the first time.


John Markoff on automation, jobs, Deep Learning and AI limitations

My sense, after spending two or three years working on this, is that it’s a much more nuanced situation than the alarmists seem to believe. Brynjolfsson and McAfee, and Martin Ford, and Jaron Lanier have all written about the rapid pace of automation. There are two things to consider: One, the pace is not that fast. Deploying these technologies will take more time than people think. Two, the structure of the workforce may change in ways that mean we need more robots than we think we do, and that the robots will have a role to play. The other thing is that the development of the technologies to make these things work is uneven.

Right now, we’re undergoing a rapid acceleration in pattern recognition technologies. Machines, for the first time, are learning how to recognize objects; they’re learning how to understand scenes, how to recognize the human voice, how to understand human language. That’s all happening; no question that the advances have been dramatic, and it’s largely happened due to this technique called deep learning, which is a modern iteration of the artificial neural nets, which of course have been around since the 1950s and even before.

What hasn’t happened is the other part of the AI problem, which is called cognition. We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

There’s this wonderful counter-example to the popular belief that there will be no jobs. The last time someone wrote about this was in 1995, when a book titled The End of Work predicted it. In the decade after that, the US economy grew faster than the population. It’s not clear to me at all that things are going to work out the way they felt.

The classic example is that almost everybody cites this apparent juxtaposition of Instagram—thirteen programmers taking out a giant corporation, Kodak, with 140,000 workers. In fact, that’s not what happened at all. For one thing, Kodak wasn’t killed by Instagram. Kodak was a company that put a gun to its head and pulled the trigger multiple times until it was dead. It just made all kinds of strategic blunders. The simplest evidence of that is its competitor, Fuji, which did very well across this chasm of the Internet. The deeper thought is that Instagram, as a new‑age photo sharing system, couldn’t exist until the modern Internet was built, and that probably created somewhere between 2.5 and 5 million jobs, and made them good jobs. The notion that Instagram killed both Kodak and the jobs is just fundamentally wrong…

…What worries me about the future of Silicon Valley, is that one-dimensionality, that it’s not a Renaissance culture, it’s an engineering culture. It’s an engineering culture that believes that it’s revolutionary, but it’s actually not that revolutionary. The Valley has, for a long time, mined a couple of big ideas…

…In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

Source: Edge


Why Humans Will Forever Rule Over the Machines

Everywhere you turn nowadays, you hear about the imminent triumph of intelligent machines over humans. They will take our jobs, they will make their own decisions, they will be even more intelligent than humans, they pose a threat to humanity (per Stephen Hawking, Bill Gates, and Elon Musk). Marc Andreessen recently summed up on Twitter the increased hubbub about the dangers of Artificial Intelligence: “From ‘It’s so horrible how little progress has been made’ to ‘It’s so horrible how much progress has been made’ in one step.”

Don’t worry. The machines will never take over, no matter how much progress is made in artificial intelligence. It will forever remain artificial, devoid of what makes us human (and intelligent in the full sense of the word), and of what accounts for our unlimited creativity, the fountainhead of ideas that will always keep us at least a few steps ahead of the machines.

In a word, intelligent machines will never have culture, our unique way of transmitting meanings and context over time, our continuously invented and re-invented inner and external realities.

When you stop to think about culture—the content of our thinking—it is amazing that it has been missing from the thinking of the people creating “thinking machines” and/or debating how much they will impact our lives for as long as this work and conversation has been going on. No matter what position they take in the debate and/or what path they follow in developing robots and/or artificial intelligence, they have collectively made a conscious or unconscious decision to reduce the incredible bounty and open-endedness of our thinking to computation, an exchange of information between billions of neurons, which they either hope or are afraid that we will eventually replicate in a similar exchange between increasingly powerful computers. It’s all about quantity and we know that Moore’s Law takes care of that.

Almost all the people participating in the debate about the rise of the machines have subscribed to the Turing Paradigm, which basically says “let’s not talk about what we cannot define or investigate, and simply equate thinking with computation.”

The dominant thinking about thinking machines, whether of the artificial or the human kind, has not changed since Edmund C. Berkeley wrote in Giant Brains, or Machines That Think, his 1949 book about the recently invented computers: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.” Thirty years later, MIT’s Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.” Today, Harvard geneticist George Church goes further (reports Joichi Ito), suggesting that we should make brains as smart as computers, and not the other way around.

Still, from time to time we do hear new and original challenges to the dominant paradigm. In “Computers Versus Humanity: Do We Compete?” Liah Greenfeld and Mark Simes bring culture and the mind into the debate over artificial intelligence, concepts that do not exist in the prevailing thinking about thinking. They define culture as the symbolic process by which humans transmit their ways of life. It is a historical process, i.e., it occurs in time, and it operates on both the collective and individual levels simultaneously.

The mind, defined as “culture in the brain,” is a process representing an individualization of the collective symbolic environment. It is supported by the brain and, in turn, it organizes the connective complexity of the brain.  Greenfeld and Simes argue that “mapping and explaining the organization and biological processes in the human brain will only be complete when such symbolic, and therefore non-material, environment is taken into account.”

They conclude that what distinguishes humanity from all other forms of life “is its endless, unpredictable creativity. It does not process information: It creates. It creates information, misinformation, forms of knowledge that cannot be called information at all, and myriads of other phenomena that do not belong to the category of knowledge. Minds do not do computer-like things, ergo computers cannot outcompete us all.”

The mind, the continuous and dynamic creative process by which we live our conscious lives, is missing from the debates over the promise and perils of artificial intelligence. A recent example is a special section on robots in the July/August issue of Foreign Affairs, in which the editors brought together a number of authors with divergent opinions about the race against the machines. None of them, however, questions the assumption that we are in a race:

  • A roboticist, MIT’s Daniela Rus, writes about the “significant gaps” that have to be closed in order to make robots our little helpers and makes the case for robots and humans augmenting and complementing each other’s skills (in “The Robots Are Coming”).
  • Another roboticist, Carnegie Mellon’s Illah Reza Nourbakhsh, highlights robots’ “potential to produce dystopian outcomes” and laments the lack of required training in ethics, human rights, privacy, or security at the academic engineering programs that grant degrees in robotics (in “The Coming Robot Dystopia”).
  • The authors of The Second Machine Age, MIT’s Erik Brynjolfsson and Andrew McAfee, predict that human labor will not disappear anytime soon because “we humans are a deeply social species, and the desire for human connection carries over to our economic lives.” But the prediction is limited to “within the next decade,” after which “there is a real possibility… that human labor will, in aggregate, decline in relevance because of technological progress, just as horse labor did earlier” (in “Will Humans Go the Way of Horses?”).
  • The chief economics commentator at the Financial Times, Martin Wolf, dismisses the predictions regarding the imminent “breakthroughs in information technology, robotics, and artificial intelligence that will dwarf what has been achieved in the past two centuries” and the emergence of machines that are “supremely intelligent and even self-creating.” While also hedging his bets about the future, he states categorically “what we know for the moment is that there is nothing extraordinary in the changes we are now experiencing. We have been here before and on a much larger scale” (in “Same as It Ever Was: Why the Techno-optimists Are Wrong”).

Same as it ever was, indeed. A lively debate and lots of good arguments: Robots will help us, robots could harm us, robots may or may not take our jobs, robots—for the moment—are nothing special. Beneath the superficial disagreement lies a fundamental shared acceptance of the premise that we are no different from computers, and that we hold only a temporary, fleeting advantage in computing power.

No wonder that the editor of Foreign Affairs, Gideon Rose, concludes that “something is clearly happening here, but we don’t know what it means. And by the time we do, authors and editors might well have been replaced by algorithms along with everybody else.”

Let me make a bold prediction. Algorithms will not create on their own a competitor to Foreign Affairs. No matter how intelligent machines will become (and they will be much smarter than they are today), they will not create science or literature or any of the other components of our culture that we have created over the course of millennia and will continue to create, in some cases aided by technologies that we create and control.

And by “we,” I don’t mean only Einstein and Shakespeare. I mean the entire human race, engaged in creating, absorbing, manipulating, processing, communicating the symbols that make our culture, making sense of our reality. I doubt that we will ever have a machine creating Twitter on its own, not even the hashtag.

I’m sure we will have smart machines that could perform special tasks, augmenting our capabilities and improving our lives. That many jobs will be taken over by algorithms and robots, and many others will be created because of them, as we have seen over the last half-century. And that bad people will use these intelligent machines to harm other people and that we will make many mistakes relying too much on them and not thinking about all the consequences of what we are developing.

But intelligent machines will not have a mind of their own. Intelligent machines will not have our imagination, our creativity, our unique human culture. Intelligent machines will not take over because they will never be human.

Originally published on Forbes.com


Hype Curve of (Hardware) Neural Networks

[Figure: hype curve of (hardware) neural networks]

Source: Olivier Temam
