Israel’s Chief Scientist on Mastering the Art of Public-Private Partnership

Avi Hasson 3

Avi Hasson, Chief Scientist, Ministry of the Economy, Israel

“Imagine you had a machine where you put in a dollar and five dollars come out, sometimes ten,” says Avi Hasson. “We have that machine and it’s called the Office of the Chief Scientist.”

After working for ten years as a partner in a venture capital firm, Hasson became Chief Scientist at Israel’s Ministry of the Economy four years ago. Buttressing this position is a long tradition that has helped turn Israel into a “startup nation,” as the Office of the Chief Scientist (OCS) has been perfecting the art of public-private partnership since it was created in 1965.

With dual responsibilities for the government’s R&D investment (excluding basic research and defense R&D) and for its innovation policy, the OCS has been acting as an innovation catalyst, entrepreneurial matchmaker, and strategic investor. It has learned how to balance stability and dynamism, due diligence and risk taking, encouraging foreign investment and ensuring Israel’s competitiveness, helping enact regulation and legislation while also serving as the voice of entrepreneurs within the government.

The annual budget of the OCS is about half a billion dollars. “A lot of people know that Israel invests in R&D more than any other country worldwide as a percentage of GDP,” says Hasson. “However, not many people know that the government’s share in that investment is the lowest in all OECD countries.” It amounts to only 5% of total business R&D investment.

The economic impact of the government’s R&D investment, however, is not measured by the size of the investment, but by the size of the “additionality,” a term used by economists to capture the net positive difference that results from the government’s intervention in the economy. A 2008 study (re-validated in 2014, according to Hasson) found that a government grant of NIS1 million causes firms in the manufacturing sector to invest another NIS1.28 million (the comparable figure in computer-related sectors is NIS1.81 million), meaning that the economy gains NIS2.28 million invested in business R&D that would not have been invested without government intervention. The study also found that the economic effects (in terms of incremental GDP) of the government’s investment in R&D are, at a minimum, between 5 and 6 times the amount of money invested by the government.
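
The arithmetic behind these figures can be sketched in a few lines (the numbers are the study’s, as cited by Hasson; the variable names are mine):

```python
# Additionality arithmetic from the 2008 study cited above
# (NIS = Israeli new shekel; all figures in NIS millions).
grant = 1.00                  # government grant
induced_manufacturing = 1.28  # extra firm R&D spend induced per NIS 1M grant
induced_computing = 1.81      # same figure for computer-related sectors

# Total new business R&D the economy gains per NIS 1M of government money:
total_manufacturing = grant + induced_manufacturing  # NIS 2.28M
total_computing = grant + induced_computing          # NIS 2.81M

# Minimum incremental-GDP effect: 5-6x the government's outlay.
gdp_effect_low, gdp_effect_high = 5 * grant, 6 * grant

print(total_manufacturing, total_computing, gdp_effect_low, gdp_effect_high)
```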

It’s nice to have an additionality-generating machine that gives back at least five dollars for every dollar you put in, but does the Israeli government pick winners in the marketplace? To ensure that it doesn’t, the OCS has designed a unique and elaborate selection process to evaluate the applications it gets. It is not based on a master plan designed by a government committee. “We don’t decide that there are five strategic sectors and that’s how we allocate our budget,” says Hasson. “It’s a very bottom-up process.”

The OCS has 180 evaluators, working as contractors, selected every few years in competitive tenders on the basis of their educational background and industry expertise. Each application is typically reviewed by two evaluators conducting due diligence similar to what a VC would do, assessing criteria such as market potential and the capabilities of the team. Unlike VCs, however, instead of looking at the exit potential of the startup, they assess its potential impact on the Israeli economy. The evaluators’ recommendation is brought before a committee, typically chaired by Hasson, for a final decision. In general terms, this process is also used for other types of OCS investments, such as assisting with the R&D of established companies or collaboration between foreign and Israeli companies.

The key is to design into the process the government’s exit. “We are very careful not to crowd-out the private sector,” says Hasson, “always giving it the option to buy the government out at cost. We let the market do its thing.”

The public-private partnership starts by “acknowledging what each side does best,” says Hasson. “We don’t think that we know better than the private sector what’s the next big thing or how to build companies and mentor them.” But he believes his office is much better than the private sector in designing infrastructure-related projects and in managing risk.

The first aspect includes R&D infrastructure, development of human capital, and innovation-related policies. “If we identify a missing piece of infrastructure or a market failure, some sort of a gap, we design and initiate the desired change, carrying it forward, funding it, even if eventually it will be implemented by a different ministry,” says Hasson. One example is a national tissue bank his office helped create which is managed by the Health Ministry, providing services (for a fee) to thousands of companies and researchers. Another example is the “angel’s law” the OCS helped draft and promote, giving a tax benefit to angel investors, with the aim of bringing “more people to the table and increasing investment in early-stage companies.”

Counter-intuitively, Hasson also identifies handling risk as an area in which the OCS has an advantage over the private sector. “Taxpayer money and risk don’t usually go together, but when a project is described in our committee deliberations as a risky project, it actually excites us,” he says.

If a company fails, the venture capitalist sees it only as a failed investment. But the OCS looks beyond the failure, at the know-how gained, the intellectual property and talent developed, the lessons learned, and the number of companies that grew out of the failure. “We have a different risk profile and return function,” says Hasson. “Unlike the single investor, I gain from failure, which means I can take more risk. If 70% of the projects we fund were successful, I would say we are funding the wrong projects. We shouldn’t be too successful.”

The OCS’s 40 programs cover all stages in the lifecycle of innovation and all types of companies, from pre-seed investments to startup incubators, to long-term support of the R&D budgets of established companies, to international collaborations, to the Israeli R&D centers of foreign multinationals. With all of these programs, the OCS strikes a balance between providing a stable innovation environment and pursuing a dynamic and flexible decision-making process.

The key to keeping innovation policies and processes stable is the de-politicization of the office. The Chief Scientist is appointed to a six-year term and the OCS agenda is widely supported regardless of which government (and which Minister of the Economy—four different politicians during Hasson’s tenure so far) is in place. “There is no political agenda impacting our work,” says Hasson. It is also important, he says, to have processes and regulations: “Government money shouldn’t be too easy to obtain.”

But this also presents a challenge to the type of work the OCS does. Hasson: “If it takes you two years to design a program and another year to launch it, in these three years, either the problem has moved somewhere else or my company is dead.”

For Hasson, the required dynamism, flexibility, and adaptability of his government agency is not necessarily about making quick decisions—they already do that: “On average, we give an answer within three months,” he says. The challenge is in execution, in rapidly developing the tools and programs required by changing market conditions. “The time that passes between knowing what you want to do and the first company benefiting from that program or policy is too long,” says Hasson. “In high-tech, you don’t have that privilege.”

Hasson hopes that a new law currently under consideration by the Israeli parliament will address this challenge. It will create a new national administration for innovation and “will take the OCS to the next twenty years.” The new administration, while a government agency, will be more independent than the OCS and will not sit in a specific ministry. Creating a new program in the new administration will not require going through a lengthy government approval process. “What used to take two years, may take only four months,” says Hasson.

More operational flexibility will be important given the new, multi-faceted, mission of the new innovation administration. Whereas the mission so far has been focused on creating an innovation ecosystem, the new mission will be to pursue two goals simultaneously: Maintain Israel’s position in an increasingly competitive global innovation marketplace and inject innovation into all sectors of the Israeli economy.

Today, the high-tech sector employs 10% of the Israeli workforce but is responsible for 50% of Israeli exports. Hasson: “We want to bring the rest of the Israeli economy to the same place and that means injecting innovation into traditional industries such as food, plastics, agriculture, and the services sector. When you want to support the continuing growth of Israeli companies, you can’t use the same tools you used to start them up.” When I take this to mean infusing technology know-how into traditional industries, Hasson corrects me: “It’s not just technology, it’s innovation and R&D. I don’t want to finance a new machine for the food factory. I want them to develop an R&D team, answering the question of how to create new products for the global market.”

Hasson is optimistic about the future success of startup nation. He observes that innovation is going to be driven in the future by trends that favor Israel’s specific strengths. He cites primarily the convergence of technologies and the interdisciplinarity that will be the hallmark of tomorrow’s innovation breakthroughs. “All of a sudden biology meets software and medical devices meet communications—this is exactly where Israel excels because everybody talks to each other and they don’t work in silos. People here are very much out of the box,” he says. (For more on out of the box behavior, see Intel’s guide for doing business with Israelis).

When we talked by phone, Hasson had just come back from his weekly field trip, visiting this time a biotech company his office supports. He doesn’t like to spend time in conference rooms, he told me. Instead, he prefers to walk around “and see the actual stuff, the clean rooms, the labs.” Then he talks to the company’s management and insists on hearing their concerns. That day, he heard about issues with healthcare regulation. “I’m not the regulator,” Hasson says, “that’s the Ministry of Health. Yet they come to us with these issues. We are their voice within the government. So we talk to the ministries involved and try to find a solution.”

Acting as the voice of entrepreneurs inside the government. Balancing stability and dynamism. Selling Israel as THE startup nation to multinationals, states, provinces, and countries. Forging productive collaborations in Israel and across the globe. Taking risk off the table and encouraging tolerance for failure. Advancing legislation and managing forty different programs. I asked Hasson how he and his team find the time for all of these doings, designs, and decisions. His laconic answer: “We wake up an hour earlier.”

Originally published on

Posted in startups

The VC Leagues: Top Unicorn Hunters and Gatherers


CB Insights:

Sequoia Capital continues to retain its spot as the top investor in unicorns, with 17 currently in their portfolio including Instacart, AirBnB, and Square. Kleiner Perkins and Andreessen Horowitz were tied for second place with 15 unicorn companies each. Some other interesting points:

  • 4 of the top 8 investors in unicorns are large, typically public market investors (mutual funds, bulge-bracket investment banks, etc.). Goldman Sachs, Fidelity Investments, Wellington Management, and T. Rowe Price all have 12 or more US unicorns in their portfolios.
  • There are now 43 institutional investors with at least 5 US unicorns in their portfolio, compared to 14 when we did this analysis in early March.

The proportion of unicorn investments that were early-stage offers another way to gauge the firms’ relative investment prowess… Some of the best investors on this list are SV Angel (12 unicorns with 9 investments at the early stage), Y Combinator (6 unicorns and 6 early-stage investments in unicorns), as well as First Round Capital, Benchmark Capital, and Lowercase Capital.

Posted in Venture capital

A Boom or a Bubble? VC Financing Trends



CB Insights

Venture capital (VC)-backed companies raised more than US$32 billion in Q2 2015 across 1,819 deals, bringing the total raised by VC-backed companies globally to a staggering $59.8 billion for the first half of 2015, according to Venture Pulse Q2 ’15, the first in a quarterly VC report series from KPMG International and VC data company CB Insights. The surge in funding in Q2 represents a 49 percent increase over the first two quarters of 2014.

[In the U.S.], after a multi-year high of $56.4B in 2014, 2015 is on track to reach five-year highs with $36.9B already invested in the first half of the year.

The Economist

The enormous, disruptive creativity of Silicon Valley is unlike anything since the genius of the great 19th-century inventors. Its triumph is to be celebrated. But the accumulation of so much wealth so fast comes with risks. The 1990s saw a financial bubble that ended in a spectacular bust. This time the danger is insularity. The geeks live in a bubble that seals off their empire from the world they are doing so much to change.

The American economy would be hit hard by a repeat of the financial shock that followed the dotcom crash in 2000. With the NASDAQ index near its record high, this is a common fear. Fortunately, although money and talent are pouring into the Valley, there is not yet much danger of a disastrous bust. That is because tech companies today not only have more robust business models than their dotcom predecessors did (ie, many actually make money), but they also rely on a smaller group of financial backers.

Posted in Venture capital

Google Robots Dream Psychedelic Dreams and Lash Out at Humans

The Wall Street Journal reported recently that a Google robot became “exasperated” and ended a conversation “by lashing out at its human inquisitor.” And Quartz reported that “Google engineers set out to see what…artificial networks ‘dream’ of” and discovered “the subconscious of an AI” and what “an AI might see on the computing equivalent of an acid trip.”

From Google’s “Inceptionism Gallery”

The WSJ reporter, Amir Mizroch, informs us that “this specific work is part of recent developments in what’s known as cognitive computing – teaching computers to mimic some of the ways a human brain works.”  The conversation that drew his attention to the “attitude” of the brain-mimicking chatbot ends like this:

Human: What is integrity?

Machine: I’m sorry; I don’t know what else to say.

Human: Be moral!

Machine: Be a man!

Human: I really like our discussion on morality and ethics…

Machine: And how I’m not in the mood for a philosophical debate.

Human: What do you like to talk about?

Machine: Nothing.

Mizroch also provides a link to the full research paper, where you can find out more about the “attitude” of the machine:

Human: what is your job?

Machine: I’m a lawyer.

Human: what do you do?

Machine: I’m a doctor.


Human: what is the usual color of a leaf?

Machine: it is a green one.

Human: what is the color of a yellow car?

Machine: yellow.

Human: how much is two plus two?

Machine: four.

Human: how much is ten minus two?

Machine: seventy – two.

Obviously, the machine got upset that the human with an attitude repeated the question about its occupation or dared question its mathematical skills. Or maybe not… if you read the paper, you find out that two members of Google Research presented to other machine learning researchers the results of a model that “converses by predicting the next sentence given the previous sentence or sentences in a conversation.” So the “attitude” is clearly a calculation (“prediction”) by the model of the “correct” answer based on what the human interlocutor said before.
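
To make “the answer is just a prediction” concrete, here is a hypothetical toy stand-in — emphatically not Google’s neural model, which learns from millions of conversations — where a word-bigram count table plays the role of the learned weights and a “reply” is simply the most probable continuation, one word at a time:

```python
# Toy illustration: a chatbot "reply" as greedy next-word prediction.
# A tiny bigram count table stands in for a neural network's learned weights.
from collections import defaultdict

dialogs = [
    "what is your job ? i am a lawyer",
    "what do you do ? i am a doctor",
    "be moral ! be a man",
]

# "Train": count word bigrams across the toy corpus.
counts = defaultdict(lambda: defaultdict(int))
for line in dialogs:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def reply(prompt, max_words=6):
    """Greedily emit the most probable next word, one step at a time."""
    word = prompt.split()[-1]
    out = []
    for _ in range(max_words):
        if word not in counts:
            break  # no continuation ever observed after this word
        word = max(counts[word], key=counts[word].get)
        out.append(word)
    return " ".join(out)

print(reply("what is your job ?"))  # prints: i am a lawyer
```

The “attitude” disappears once you see the mechanism: the model emits whatever continuation scored highest, with no mood or intent anywhere in the calculation.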

The researchers thought that the “modest results” of their research were worth communicating to other researchers because “the model can generalize to new questions. In other words, it does not simply look up for an answer by matching the question with the existing database.” How well it is “generalizing” is another question. The model is assumed to perform better than traditional rule-based chatbots, but the paper’s authors say that “an outstanding research problem is on how to objectively measure the quality of models.” No word about measuring attitude or emotions, of course, as anthropomorphizing the chatbot was not part of the researchers’ agenda and is nowhere to be found in their paper.

Similarly, their Google Research colleagues did not investigate AI dreams or its subconscious. Echoing the statement above about measuring the quality of models, they state up-front that artificial neural networks “are very useful tools based on well-known mathematical methods, [but] we actually understand surprisingly little of why certain models work and others don’t.” To improve their understanding, they decided to visualize what’s going on in different layers of the neural network when it goes through the process of image recognition, by reversing the process: “ask it to enhance an input image in such a way as to elicit a particular interpretation.”
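
That “reversed” process is, in essence, gradient ascent on the input rather than on the network’s weights. As a deliberately tiny, hypothetical sketch (a single linear “neuron” responding to a 1-D “image,” nothing like Google’s convolutional networks), the idea looks like this:

```python
# Toy sketch of the "enhance the input" idea: nudge the image, step by
# step, in the direction that makes a chosen "neuron" fire more strongly.
import random

random.seed(0)

# A stand-in "neuron": a fixed linear filter over a 4-pixel "image".
pattern = [0.5, -1.0, 2.0, 1.0]

def activation(image):
    return sum(w * x for w, x in zip(pattern, image))

def enhance(image, steps=100, lr=0.1):
    """Gradient ascent on the input. For this linear neuron the gradient
    dA/dx_i is just pattern[i], so each step moves the image toward
    whatever the neuron 'wants to see'."""
    image = list(image)
    for _ in range(steps):
        for i in range(len(image)):
            image[i] += lr * pattern[i]
    return image

noise = [random.uniform(-0.1, 0.1) for _ in range(4)]
dream = enhance(noise)
print(activation(noise), activation(dream))  # the "dream" activates far more
```

Starting from noise and repeating this for real networks, layer by layer, produces the hallucinatory “Inceptionism” images: the network amplifies whatever faint patterns it was trained to detect.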

To their “surprise,” the researchers found out that “neural networks that were trained to discriminate between different kinds of images have quite a bit of the information needed to generate images too.” That’s the extent of the discovery of electric dreams and the subconscious of a computer.

But journalists continue to dream about conscious, “brain-like” machines, to the dismay of prominent researchers. UC Berkeley’s Michael Jordan, a cognitive scientist and machine learning expert, told IEEE Spectrum last year:

…on all academic topics there is a lot of misinformation. The media is trying to do its best to find topics that people are going to read about. Sometimes those go beyond where the achievements actually are. Specifically on the topic of deep learning, it’s largely a rebranding of neural networks, which… go back to the 1960s; it seems like every 20 years there is a new wave that involves them. In the current wave, the main success story is the convolutional neural network, but that idea was already present in the previous wave. And one of the problems with both the previous wave, that has unfortunately persisted in the current wave, is that people continue to infer that something involving neuroscience is behind it, and that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.

Earlier this year, NYU’s computer scientist Yann LeCun, who is also head of Facebook’s artificial intelligence lab, voiced a similar concern: “My least favorite description [in the press of Deep Learning] is, ‘It works just like the brain.’ I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does… AI has gone through a number of AI winters because people claimed things they couldn’t deliver.”

When asked by IEEE Spectrum’s Lee Gomes “if you were a reporter covering a Deep Learning announcement, and had just eight words to describe it, which is usually all a newspaper reporter might get, what would you say?” LeCun answered: “I think it would be ‘machines that learn to represent the world.’”

Not exactly a title that would generate a lot of Web traffic. But I don’t think Web traffic ambitions are what makes reporters and their editors anthropomorphize machines. There was no Web in the 19th century when Charles Babbage came up with a design for a steam-powered calculating machine. His contemporaries immediately referred to it as a “thinking machine.” In 1831, the United Service magazine called Babbage’s Difference Engine “a piece of machinery which approaches nearer to the results of human intelligence than any other… and which constitutes a wonder of the world.”

Calculating machines were not only deemed intelligent; they also posed a threat and a challenge to humans, just as robots and “intelligent machines” do today. In 1944, Richard Feynman, then a junior staff member at Los Alamos, organized a contest between human computers and the Los Alamos IBM facility, with both performing a calculation for the plutonium bomb.

For two days, the human computers kept up with the machines. “But on the third day,” recalled an observer, “the punched-card machine operation began to move decisively ahead, as the people performing the hand computing could not sustain their initial fast pace, while the machines did not tire and continued at their steady pace.” (See David Alan Grier, When Computers Were Human). When modern computers arrived on the scene, they were immediately referred to as “giant brains,” capable of thinking like humans.

Some of us dream about creating machines in our image, of becoming gods. Others share the dream, but for them it’s more like a nightmare. Either way, writers should not engage in science fiction unless they are science fiction writers.

Originally published on

Posted in AI, Google, Machine Learning, Robotics

Top 10 Programming Languages 2015

Note: Left column shows 2015 ranking; right column shows 2014 ranking.

Source: IEEE Spectrum

The big five—Java, C, C++, Python, and C#—remain on top, with their ranking undisturbed, but C has edged to within a whisker of knocking Java off the top spot. The big mover is R, a statistical computing language that’s handy for analyzing and visualizing big data, which comes in at sixth place. Last year it was in ninth place, and its move reflects the growing importance of big data to a number of fields. A significant amount of movement has occurred further down in the rankings, as languages like Go, Perl, and even Assembly jockey for position…

A number of languages have entered the rankings for the first time. Swift, Apple’s new language, has already gained enough traction to make a strong appearance despite being released only 13 months ago. Cuda is another interesting entry—it’s a language created by graphics chip company Nvidia that’s designed for general-purpose computing using the company’s powerful but specialized graphics processors, which can be found in many desktop and mobile devices. Seven languages in all are appearing for the first time.

Posted in Uncategorized

John Markoff on automation, jobs, Deep Learning and AI limitations

My sense, after spending two or three years working on this, is that it’s a much more nuanced situation than the alarmists seem to believe. Brynjolfsson and McAfee, and Martin Ford, and Jaron Lanier have all written about the rapid pace of automation. There are two things to consider: One, the pace is not that fast. Deploying these technologies will take more time than people think. Two, the structure of the workforce may change in ways that mean we need more robots than we think we do, and that the robots will have a role to play. The other thing is that the development of the technologies to make these things work is uneven.

Right now, we’re undergoing a rapid acceleration in pattern recognition technologies. Machines, for the first time, are learning how to recognize objects; they’re learning how to understand scenes, how to recognize the human voice, how to understand human language. That’s all happening, no question that the advances have been dramatic, and it’s largely happened due to this technique called deep learning, which is a modern iteration of the artificial neural nets, which of course have been around since the 1950s and even before.

What hasn’t happened is the other part of the AI problem, which is called cognition. We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

There’s this wonderful counter situation to the popular belief that there will be no jobs. The last time someone wrote about this was in 1995, in a book titled The End of Work that predicted exactly that. In the decade after, the US economy grew faster than the population. It’s not clear to me at all that things are going to work out the way they felt.

The classic example is that almost everybody cites this apparent juxtaposition of Instagram—thirteen programmers taking out a giant corporation, Kodak, with 140,000 workers. In fact, that’s not what happened at all. For one thing, Kodak wasn’t killed by Instagram. Kodak was a company that put a gun to its head and pulled the trigger multiple times until it was dead. It just made all kinds of strategic blunders. The simplest evidence of that is its competitor, Fuji, which did very well across this chasm of the Internet. The deeper thought is that Instagram, as a new‑age photo sharing system, couldn’t exist until the modern Internet was built, and that probably created somewhere between 2.5 and 5 million jobs, and made them good jobs. The notion that Instagram killed both Kodak and the jobs is just fundamentally wrong…

…What worries me about the future of Silicon Valley, is that one-dimensionality, that it’s not a Renaissance culture, it’s an engineering culture. It’s an engineering culture that believes that it’s revolutionary, but it’s actually not that revolutionary. The Valley has, for a long time, mined a couple of big ideas…

…In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

Source: Edge

Posted in AI, Automation, deep learning, Machine Learning

Why Humans Will Forever Rule Over the Machines

Everywhere you turn nowadays, you hear about the imminent triumph of intelligent machines over humans. They will take our jobs, they will make their own decisions, they will be even more intelligent than humans, they pose a threat to humanity (per Stephen Hawking, Bill Gates, and Elon Musk). Marc Andreessen recently summed up on Twitter the increased hubbub about the dangers of Artificial Intelligence: “From ‘It’s so horrible how little progress has been made’ to ‘It’s so horrible how much progress has been made’ in one step.”

Don’t worry. The machines will never take over, no matter how much progress is made in artificial intelligence. It will forever remain artificial, devoid of what makes us human (and intelligent in the full sense of the word), and of what accounts for our unlimited creativity, the fountainhead of ideas that will always keep us at least a few steps ahead of the machines.

In a word, intelligent machines will never have culture, our unique way of transmitting meanings and context over time, our continuously invented and re-invented inner and external realities.

When you stop to think about culture—the content of our thinking—it is amazing that it has been missing from the thinking of the people creating “thinking machines” and debating their impact for as long as that work and conversation have been going on. No matter what position they take in the debate or what path they follow in developing robots or artificial intelligence, they have collectively made a conscious or unconscious decision to reduce the incredible bounty and open-endedness of our thinking to computation: an exchange of information between billions of neurons, which they either hope or fear we will eventually replicate in a similar exchange between increasingly powerful computers. It’s all about quantity, and we know that Moore’s Law takes care of that.

Almost all the people participating in the debate about the rise of the machines have subscribed to the Turing Paradigm, which basically says: “Let’s not talk about what we cannot define or investigate, and simply equate thinking with computation.”

The dominant thinking about thinking machines, whether of the artificial or the human kind, has not changed since Edward C. Berkeley wrote in Giant Brains, or Machines That Think, his 1949 book about the recently invented computers: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.” Thirty years later, MIT’s Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.” Today, Harvard geneticist George Church goes further (reports Joichi Ito), suggesting that we should make brains as smart as computers, and not the other way around.

Still, from time to time we do hear new and original challenges to the dominant paradigm. In “Computers Versus Humanity: Do We Compete?” Liah Greenfeld and Mark Simes bring culture and the mind into the debate over artificial intelligence, concepts that do not exist in the prevailing thinking about thinking. They define culture as the symbolic process by which humans transmit their ways of life. It is a historical process, i.e., it occurs in time, and it operates on both the collective and individual levels simultaneously.

The mind, defined as “culture in the brain,” is a process representing an individualization of the collective symbolic environment. It is supported by the brain and, in turn, it organizes the connective complexity of the brain.  Greenfeld and Simes argue that “mapping and explaining the organization and biological processes in the human brain will only be complete when such symbolic, and therefore non-material, environment is taken into account.”

They conclude that what distinguishes humanity from all other forms of life “is its endless, unpredictable creativity. It does not process information: It creates. It creates information, misinformation, forms of knowledge that cannot be called information at all, and myriads of other phenomena that do not belong to the category of knowledge. Minds do not do computer-like things, ergo computers cannot outcompete us all.”

The mind, the continuous and dynamic creative process by which we live our conscious lives, is missing from the debates over the promise and perils of artificial intelligence. A recent example is a special section on robots in the July/August issue of Foreign Affairs, in which the editors brought together a number of authors with divergent opinions about the race against the machines. None of them, however, questions the assumption that we are in a race:

  • A roboticist, MIT’s Daniela Rus, writes about the “significant gaps” that have to be closed in order to make robots our little helpers and makes the case for robots and humans augmenting and complementing each other’s skills (in “The Robots Are Coming”).
  • Another roboticist, Carnegie Mellon’s Illah Reza Nourbakhsh, highlights robots’ “potential to produce dystopian outcomes” and laments the lack of required training in ethics, human rights, privacy, or security at the academic engineering programs that grant degrees in robotics (in “The Coming Robot Dystopia”).
  • The authors of The Second Machine Age, MIT’s Erik Brynjolfsson and Andrew McAfee, predict that human labor will not disappear anytime soon because “we humans are a deeply social species, and the desire for human connection carries over to our economic lives.” But the prediction is limited to “within the next decade,” after which “there is a real possibility… that human labor will, in aggregate, decline in relevance because of technological progress, just as horse labor did earlier” (in “Will Humans Go the Way of Horses?”).
  • The chief economics commentator at the Financial Times, Martin Wolf, dismisses the predictions regarding the imminent “breakthroughs in information technology, robotics, and artificial intelligence that will dwarf what has been achieved in the past two centuries” and the emergence of machines that are “supremely intelligent and even self-creating.” While also hedging his bets about the future, he states categorically “what we know for the moment is that there is nothing extraordinary in the changes we are now experiencing. We have been here before and on a much larger scale” (in “Same as It Ever Was: Why the Techno-optimists Are Wrong”).

Same as it ever was, indeed. A lively debate and lots of good arguments: Robots will help us, robots could harm us, robots may or may not take our jobs, robots—for the moment—are nothing special. Beneath the superficial disagreement lies a fundamental shared acceptance of the premise that we are no different from computers, except for a temporary and fleeting advantage of greater computing power.

No wonder that the editor of Foreign Affairs, Gideon Rose, concludes that “something is clearly happening here, but we don’t know what it means. And by the time we do, authors and editors might well have been replaced by algorithms along with everybody else.”

Let me make a bold prediction. Algorithms will not create on their own a competitor to Foreign Affairs. No matter how intelligent machines will become (and they will be much smarter than they are today), they will not create science or literature or any of the other components of our culture that we have created over the course of millennia and will continue to create, in some cases aided by technologies that we create and control.

And by “we,” I don’t mean only Einstein and Shakespeare. I mean the entire human race, engaged in creating, absorbing, manipulating, processing, communicating the symbols that make our culture, making sense of our reality. I doubt that we will ever have a machine creating Twitter on its own, not even the hashtag.

I’m sure we will have smart machines that could perform special tasks, augmenting our capabilities and improving our lives. That many jobs will be taken over by algorithms and robots, and many others will be created because of them, as we have seen over the last half-century. And that bad people will use these intelligent machines to harm other people and that we will make many mistakes relying too much on them and not thinking about all the consequences of what we are developing.

But intelligent machines will not have a mind of their own. Intelligent machines will not have our imagination, our creativity, our unique human culture. Intelligent machines will not take over because they will never be human.

Originally published on

Posted in Uncategorized