Advancing Your AI Career


“AI Career Pathways” is designed to guide aspiring AI engineers in finding jobs and building a career. The report’s table summarizes Workera’s key findings about AI roles and the tasks they perform. You’ll find more insights like this in the free PDF.

From the report:

People in charge of data engineering need strong coding and software engineering skills, ideally combined with machine learning skills to help them make good design decisions related to data. Most of the time, data engineering is done using database query languages such as SQL and object-oriented programming languages such as Python, C++, and Java. Big data tools such as Hadoop and Hive are also commonly used.
Modeling is usually programmed in Python, R, Matlab, C++, Java, or another language. It requires strong foundations in mathematics, data science, and machine learning. Deep learning skills are required by some organizations, especially those focusing on computer vision, natural language processing, or speech recognition.
People working in deployment need to write production code, possess strong back-end engineering skills (in Python, Java, C++, and the like), and understand cloud technologies (for example, AWS, GCP, and Azure).
Team members working on business analysis need an understanding of mathematics and data science for analytics, as well as strong communication skills and business acumen. They sometimes use programming languages such as R, Python, and Tableau, although many tasks can be carried out in a spreadsheet, PowerPoint or Keynote, or A/B testing software.
Working on AI infrastructure requires broad software engineering skills to write production code and understand cloud technologies.

Posted in AI, Careers, Data Science Careers

Best of 2019: The Global Restructuring of Startup Funding

[April 15, 2019]


The venture capital industry is under attack. Prominent VC firm Andreessen Horowitz is “blowing up the venture capital model,” transforming itself into a “financial adviser” so it can beat “antiquated rules about what is and isn’t a ‘venture capital’ investment.” The customers are also dissatisfied: VC funding isn’t right for most startups, says Indie.vc founder Bryce Roberts, voicing the concerns of a growing number of entrepreneurs who are “jaded by the traditional [VC] playbook.” And millions of participants in the stock market (about half of U.S. households own stocks) watch haplessly as a number of “unicorns” finally go public this year, “after large gains have been captured by elite early investors.”

Israel’s most active venture investor, OurCrowd, answers these grievances with its twin goals of democratization and globalization of startup funding. “Venture capital has been the most esoteric, the most exclusive investment activity in the world,” says Jon Medved, OurCrowd’s founder and CEO. In 2013, he founded OurCrowd as a new type of venture investment platform, following VC best practices but providing worldwide access—to individuals looking to invest in startups and to entrepreneurs looking for venture funds.

Just before establishing OurCrowd, Medved left a startup he had co-founded, Vringo, and went on a speaking tour around the world, promoting the thriving Israeli innovation ecosystem. Twenty years earlier, he had already taken on “the role that, in any other country, would typically belong to the local chamber of commerce, minister of trade, or foreign secretary,” according to Start-Up Nation: The Story of Israel’s Economic Miracle. A traditional VC at the time, Medved founded Israel Seed Partners in 1995 and co-managed it until 2006; the firm grew to $262 million invested in 60 Israeli startups, a number of which were eventually acquired or went public on NASDAQ.

There was something different, however, about Medved’s new worldwide promotional tour in 2012—many individual investors came up to him after his presentations, offered their business cards, and said “find me a deal.” They were mostly thinking about angel-type deals, according to Medved. But the Jumpstart Our Business Startups (JOBS) Act, signed into law in the US at the time, got him to view the opportunity in much larger terms, as an investment platform for accredited individual investors. Other entrepreneurs in the US and elsewhere were in the process of developing or launching equity crowdfunding platforms, but as OurCrowd was coming out of the Israeli innovation ecosystem, there had to be a twist, a differentiated take on an emerging trend.

“The overall model of what they call regulation crowdfunding is antithetical to my belief system,” says Medved. “The last thing an investor needs is to buy common stock from a poor man’s broker-dealer who’s being paid by the company so now you got a negative selection bias in terms of company portfolio formation and no board representation, no anti-dilution, nobody awake at the helm, no one representing [individual investors’] money.”

Instead, Medved decided to follow the VC model to some extent and regard individual investors as limited partners. However, OurCrowd’s “limited partners,” unlike those of traditional VC firms, are given the choice of which company they invest in, and each investment is structured as a single-company venture fund (a Special Purpose Vehicle, or SPV).

“The other big difference between so-called crowdfunding and what we do is that they’re done when you bought the shares. They don’t add any value,” says Medved. “When we interview our companies, we ask them what do you want? And if they say, we don’t need any help, just give us the money, we walk away.”

OurCrowd has created a new “investment platform” category. It’s not a traditional VC because of three special characteristics: it is open to any accredited investor in the world willing to invest at least $10,000; each investment is structured as a single-company venture fund; and OurCrowd builds up its position in a startup over time, across multiple investment rounds. “We don’t have a limited fund, we have a platform with unlimited potential. How often do you start investing and then lead the later rounds?” says Medved.

At the same time, OurCrowd cannot be classified as an equity crowdfunding platform: like traditional VCs, it performs due diligence (investing in only 2% of the startups it considers), and like more recent VCs (e.g., Andreessen Horowitz), it adds value by hiring domain experts and providing services to the startups in its portfolio.

This added-value dimension of OurCrowd’s relationships with the entrepreneurs it supports gets magnified by the community it has built over the last six years: 30,000 investors from 180 countries (including about 1,000 corporations, family offices, and institutional investors), entrepreneurs, VCs, government agencies, and other OurCrowd friends and fans. Medved calls this impressive ecosystem “a force multiplier,” and it was on full display at OurCrowd’s most recent annual conference, which had 18,000 registrants from 183 countries (up from 800 at its first gathering in 2014).

Medved singles out one component of this community—large, multinational corporations—as yet another OurCrowd differentiator. By design, he says, OurCrowd has a very broad portfolio, unlike most venture funds with their limited sector focus. That attracts the attention of large corporations, increasingly looking to invest in, experiment with, and sometimes acquire a broad range of technologies (funding from corporate venture capital firms worldwide increased 47% in 2018, according to CBI). In addition to funding, these multinational corporations provide the startups supported by OurCrowd with important connections and business expertise. They also represent a new revenue stream: Some ten multinationals now pay OurCrowd scouting fees, says Medved.

In six years, OurCrowd has raised $1 billion for 18 funds, invested in 170 startups, and seen 29 exits (mostly acquisitions by large corporations) and 13 write-offs. What’s next? “We think that this asset class deserves scale,” says Medved. “We would like to have 300,000 or even 3 million individual investors.” In terms of dollars invested, however, Medved predicts that already this year or next, the ratio between individual and institutional investors will flip, as institutional investors invest far more than individuals in each deal. That gives OurCrowd the ability to invest in and even lead late-stage rounds, but it will require communicating even more loudly the democratic nature of the platform and how it uniquely caters to individual investors. “These guys are happy when they know they are investing with institutions on the same terms and that we protect their allocation,” says Medved.

Other communications and management challenges for OurCrowd are finding new channels for customer acquisition, segmenting the intended audience, and improving the targeting of specific potential investors with specific deals. New revenue-making opportunities are also part of OurCrowd’s future, especially taking advantage of the “enormous data sets” they are collecting in the course of their “data-driven business.” Says Medved: “We want to play Moneyball at some point.”

OurCrowd is a typical offspring of the flourishing, dynamic, contrarian Israeli innovation ecosystem. With more than 6,600 active startups, Israel has the most startups per capita in the world. In 2018, VC funding per capita in Israel was $674, more than double the US figure ($303). This is according to Start-Up Nation Central’s recent report (PDF), which attributes Israel’s success in developing its “dense innovation ecosystem” to several characteristics of Israeli society, including people who are “often less receptive to authority or to common practice” and who “doubt conventional wisdom and challenge it quite frequently.”

This urge to see things differently is what drives Israelis like Medved and Israeli startups like OurCrowd. Constantly playing devil’s advocate is of great help when you are creating new ventures in a society that operates under many constraints.

Take, for example, Beresheet, the smallest spacecraft ever sent to the Moon, which almost made Israel the fourth nation to land there and the first to do so with a privately funded effort. It turns out Israel became the world’s leader in miniature spacecraft because it could not launch communications satellites to the east, for fear that parts of the launcher would fall in the territory of countries that might respond aggressively.

Launching satellites to the west, against the direction of the Earth’s rotation, decreases their carrying capacity by 30%. “So we were forced to either abandon the option of launching satellites ourselves or to build smaller satellites,” Raz Itzhaki, cofounder and CEO of nanosatellite startup NSLComm (OurCrowd is an investor), told ISRAEL21c.

Seeing things differently also helps in flushing out Israel’s unique advantages and inventing products and services to capitalize on them. For example, the decision twenty years ago by Israel’s healthcare system to standardize the ongoing collection and archiving of patients’ data is now turning out to be (as anonymized data) “digital health gold” for researchers and startups alike. Or consider how groundbreaking research in Israel 50 years ago into the science of cannabinoids and their medicinal uses has led to the creation of a supportive network of academic, public, and private sector entities and to the existence today of over 70 cannabis-related startups.

The contrarian mindset goes beyond high-tech, driving Israelis to take risks, to experiment, to do what seems to be impossible, in all types of endeavors. Medved’s LinkedIn profile says that he collects rare single malts. I guess he does it on his many trips around the world, as there have never been whisky distilleries in Israel. Until now. The Milk & Honey distillery has recently produced Israel’s first single malt whisky, “a winner.” Given the hot climate, the product matures about three times faster than the competition’s, traditionally based in cooler climates. And Israel’s three or four different climate zones allow for experimentation with different maturation rates and other variations. Now “the whisky industry [in Israel] is booming,” with additional distilleries up and running or in development.

This is what Medved calls “the unbelievable nature of the Israeli ecosystem” and the “sheer explosion of its entrepreneurial talent” which he believes is “only in its early days.”

Originally published on Forbes.com

See also How To Let Ordinary Investors Invest In Startups? OurCrowd Has The Answer

 

Posted in startups, Venture capital

Best of 2019: The Web at 30


[March 12, 2019] Tim Berners-Lee liberated data so it can eat the world. In his book Weaving the Web, he wrote:

I was excited about escaping from the straightjacket of hierarchical documentation systems…. By being able to reference everything with equal ease, the web could also represent associations between things that might seem unrelated but for some reason did actually share a relationship. This is something the brain can do easily, spontaneously. … The research community has used links between paper documents for ages: Tables of content, indexes, bibliographies and reference sections… On the Web… scientists could escape from the sequential organization of each paper and bibliography, to pick and choose a path of references that served their own interest.

With this one imaginative leap, Berners-Lee moved beyond a major stumbling block for all previous information retrieval systems: the pre-defined classification system at their core. This insight was so counter-intuitive that even during the early years of the Web, attempts were made to revive that old approach: to classify (and organize in pre-defined taxonomies) all the information on the Web.

Thirty years ago, Tim Berners-Lee circulated a proposal for “Mesh” to his management at CERN. While the Internet started as a network for linking research centers, the World Wide Web started as a way to share information among researchers at CERN. Both have expanded and today touch more than half of the world’s population because they have been based on open standards.

Creating a closed and proprietary system has been the business model of choice for many great inventors and some of the greatest inventions of the computer age. That’s where we were headed in the early 1990s: the establishment of global proprietary networks owned by a few computer and telecommunications companies, whether old or new. Tim Berners-Lee’s invention and CERN’s decision to offer it to the world for free in 1993 changed the course of this proprietary march, giving a new—and much expanded—life to the Internet (itself a response to proprietary systems that did not inter-communicate) and establishing a new, open platform for a seemingly infinite number of applications and services.

As Bob Metcalfe told me in 2009: “Tim Berners-Lee invented the URL, HTTP, and HTML standards… three adequate standards that, when used together, ignited the explosive growth of the Web… What this has demonstrated is the efficacy of the layered architecture of the Internet. The Web demonstrates how powerful that is, both by being layered on top of things that were invented 17 years before, and by giving rise to amazing new functions in the following decades.”

Metcalfe also touched on the power and potential of an open platform: “Tim Berners-Lee tells this joke, which I hasten to retell because it’s so good. He was introduced at a conference as the inventor of the World Wide Web. As often happens when someone is introduced that way, there are at least three people in the audience who want to fight about that, because they invented it or a friend of theirs invented it. Someone said, ‘You didn’t. You can’t have invented it. There’s just not enough time in the day for you to have typed in all that information.’ That poor schlemiel completely missed the point that Tim didn’t create the World Wide Web. He created the mechanism by which many, many people could create the World Wide Web.”

Metcalfe’s comments were first published in ON magazine which I created and published for my employer at the time, EMC Corporation. For a special issue (PDF) commemorating the 20th anniversary of the invention of the Web, we asked some 20 digital influencers (as we would call them today) how the Web has changed their and our lives and what it will look like in the future. Here’s a sample:

Howard Rheingold: “The Web allows people to do things together that they weren’t allowed to do before. But… I think we are in danger of drowning in a sea of misinformation, disinformation, spam, porn, urban legends, and hoaxes.”

Chris Brogan: “We look at the Web as this set of tools that allow people to try any idea without a whole lot of expense… Anyone can start anything with very little money, and then it’s just a meritocracy in terms of winning the attention wars.”

Dany Levy (founder of DailyCandy): “With the Web, everything comes so easily. I wonder about the future and the human ability to research and to seek and to find, which is really an important skill. I wonder, will human beings lose their ability to navigate?”

We also interviewed Berners-Lee in 2009. He said that the Web has “changed in the last few years faster than it changed before, and it is crazy for us to imagine this acceleration will suddenly stop.” He pointed out the ongoing tendency to lock what we do with computers in a proprietary jail: “…there are aspects of the online world that are still fairly ‘pre-Web.’ Social networking sites, for example, are still siloed; you can’t share your information from one site with a contact on another site.”

But he remained both realistic and optimistic, the hallmarks of an entrepreneur: “The Web, after all, is just a tool…. What you see on it reflects humanity—or at least the 20% of humanity that currently has access to the Web… No one owns the World Wide Web, no one has a copyright for it, and no one collects royalties from it. It belongs to humanity, and when it comes to humanity, I’m tremendously optimistic.”

Originally published on Forbes.com

See also A Very Short History Of The Internet And The Web

Posted in World Wide Web

Best of 2019: How AI Killed Google’s Social Network

[February 4, 2019] Facebook turns 15 today, after announcing last week a record profit and 30% revenue growth. Also today, “you will no longer be able to create new Google+ profiles, pages, communities or events,” in anticipation of the complete shutdown in April of Google’s social network, its bet-the-company challenge to Facebook.

Both Google and Facebook have proved many business mantras wrong, not the least of which is the one about “first-mover advantages.” In business, timing is everything. There is no first-mover advantage just as there is no late-mover advantage (and there are no “business laws,” regardless of what countless books, articles, and lectures tell you).

When Google was launched on September 4, 1998, it had to compete with a handful of other search engines. Google vanquished all of them because instead of “organizing the world’s information” (in the words of its stated mission), it opted for automated self-organization. Google built its “search” business (what used to be called “information retrieval”) by closely tracking cross-references (i.e., links between web pages) as they were happening and correlating relevance with quantity of cross-references (i.e., popularity of pages as judged by how many other pages linked to them). In contrast, the dominant player at the time, Yahoo, followed the traditional library model by attempting to build a card-catalog (ontologies) of all the information on the web. Automated classification (i.e., Google) won.
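To make the contrast concrete, here is a minimal sketch, in Python, of ranking pages by link popularity. It is my own illustration, not Google’s code; the toy pages and links are invented, and it only shows the core idea of treating inbound links as votes for relevance.

```python
# Toy illustration (not Google's algorithm): rank pages by inbound links.
from collections import Counter

# Hypothetical pages and the pages they link to.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
    "d.com": ["c.com", "a.com"],
}

# Count how many other pages point to each page.
inbound = Counter(target for targets in links.values() for target in targets)

# Popularity as a proxy for relevance.
for page, score in inbound.most_common():
    print(page, score)   # c.com 3, a.com 2, b.com 1

# Google's actual PageRank goes further, weighting each link by the linking
# page's own score, but the intuition -- links as votes -- is the same.
```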

Similarly, Facebook wasn’t the first social network. The early days of the web saw SixDegrees.com and LiveJournal, and in 2002 Friendster reached 3 million users in just a few months. MySpace launched in 2003 and two years later reached 25 million users. These early movers conditioned consumers to the idea (and possible benefits) of social networking and helped encourage increased investment in broadband connections. They also provided Facebook with a long list of technical and business mistakes to avoid.

There was also a shining example of a successful web-born company–Google–for Facebook to emulate. Like Google, it attracted clever engineers to build a smart and scalable infrastructure and, like Google, it established a successful and sustainable business model by re-inventing advertising. Facebook, however, went much further than its role model in responding to rising competition by either buying competitors or successfully copying them.

Facebook’s success also led Google to launch Google+, its most spectacular failure to date. The major culprit was the misleading concept of a “social signal.” Driven by the rise of Facebook (and Twitter), the conventional wisdom around 2010 was that the data Google was collecting, the data that was behind the success of its search engine, was missing the “social” dimension of finding and discovering information. People on the web (and on Facebook and Twitter) were increasingly relying on getting relevant information from the members of their social networks, reducing their use of Google Search.

When Larry Page took over as Google CEO in 2011, adding a “social signal” to its search engine—and trying to beat Facebook at its own game—became his primary mission. In his first week as CEO in April 2011, Page sent a company-wide memo tying 25% of every employee’s bonus to Google’s success in social. Google introduced its answer to the Facebook “like” button, the Google “+1” recommendations, which, according to Danny Sullivan, the most astute Google watcher at the time, could “become an important new signal for Google to use as part of its overall ranking algorithm, during a time when it desperately needs new signals.”

The complete competitive answer to Facebook, Google+, was launched in June 2011, as “one of the most ambitious bets in the company’s history,” and a “response to the disruption of Web 2.0 and the emergence of the social web,” per Eric Schmidt and Jonathan Rosenberg in How Google Works (2014). But in January 2012, ComScore estimated that users averaged 3.3 minutes on the site compared to 7.5 hours on Facebook. And it was all downhill from there. Why?

Part of the problem was that Google tried very hard to show the world it was not just copying Facebook but improving on it. Facebook’s approach to creating a social network was perceived to be too simple: it designated (and still does) everybody in your network as a “friend,” from your grandmother to someone you have never met in person but worked with on a time-limited project. Google’s clever answer was “circles,” allowing you to classify “friends” into specific and meaningful sub-networks. This, of course, went against Google’s early great hunch that user (or librarian) classification does not work on the web because it does not “scale.” So what looked like a much-needed correction to Facebook ultimately failed. Trained well by Google to expect and enjoy automated classification, users did not want to play librarians.

More important, I guess that even the relatively small number of active participants in Google+ (90 million by the end of 2011) was enough for Google to discover pretty quickly that the belief that “Making use of social signals gives Google a valuable new signal closely tied with individuals and known accounts that it could use” was simply a mirage. “Social signals” did not improve search results. In addition, 2012 brought about the Deep Learning (what we now call “AI”) revolution that changed everything at Google, especially how it engineered its search algorithm.

Sophisticated statistical classification—finding hidden correlations in huge amounts of data and using them to put seemingly unrelated entities into common buckets—was the foundation of Google’s initial success. In 2012, a specific approach to this type of statistical analysis of vast quantities of data, variously called “machine learning,” “deep learning,” and “artificial intelligence (AI),” burst out of obscure academic papers and precincts and became the buzzword of the day.

Two major milestones marked the emergence of what I prefer to call “statistics on steroids”: In June 2012, Google’s Jeff Dean and Stanford’s Andrew Ng reported an experiment in which they showed a deep learning neural network 10 million unlabeled images randomly taken from YouTube videos, and “to our amusement, one of our artificial neurons learned to respond strongly to pictures of… cats.” And in October of the same year, a deep learning neural network achieved an error rate of only 16% in the ImageNet Large Scale Visual Recognition Challenge, a significant improvement over the 25% error rate achieved by the best entry the year before. “AI” was off to the races.

The impact of the statistics on steroids revolution was such that even Google’s most sacred cow, its search algorithm, had to—after some resistance—incorporate the new, automated, scalable, not-user-dependent, “AI” signal, an improved way to statistically analyze the much bigger pile of data Google now collects. “RankBrain has moved in, a machine-learning artificial intelligence that Google’s been using to process a ‘very large fraction’ of search results per day,” observed Danny Sullivan in October 2015.

AI killed Google+.

Good for Google. Analysts expect Google’s parent Alphabet to report, after the market close today, earnings of $11.08 per share and adjusted revenue of $31.3 billion. These results would represent year-over-year growth rates of 14% and 21%, respectively.

Update: Alphabet’s Q4 2018 revenues were up 22% at $39.3 billion, and earnings per share were $12.77, up 31.6%.

Originally published on Forbes.com

Posted in AI, Facebook, Google, social networks, Social search

Best of 2019: 60 Years of Progress in AI

New Zealand flatworm

[January 8, 2019] Today is the first day of CES 2019 and artificial intelligence (AI) “will pervade the show,” says Gary Shapiro, chief executive of the Consumer Technology Association. One hundred and thirty years ago today (January 8, 1889), Herman Hollerith was granted a patent titled “Art of Compiling Statistics.” The patent described a punched card tabulating machine which heralded the fruitful marriage of statistics and computer engineering—called “machine learning” since the late 1950s, and reincarnated today as “deep learning,” or more popularly as “artificial intelligence.”

Commemorating IBM’s 100th anniversary in 2011, The Economist wrote:

In 1886, Herman Hollerith, a statistician, started a business to rent out the tabulating machines he had originally invented for America’s census. Taking a page from train conductors, who then punched holes in tickets to denote passengers’ observable traits (e.g., that they were tall, or female) to prevent fraud, he developed a punch card that held a person’s data and an electric contraption to read it. The technology became the core of IBM’s business when it was incorporated as Computing Tabulating Recording Company (CTR) in 1911 after Hollerith’s firm merged with three others.

In his patent application, Hollerith explained the usefulness of his machine in the context of a population survey and the statistical analysis of what we now call “big data”:

The returns of a census contain the names of individuals and various data relating to such persons, as age, sex, race, nativity, nativity of father, nativity of mother, occupation, civil condition, etc. These facts or data I will for convenience call statistical items, from which items the various statistical tables are compiled. In such compilation the person is the unit, and the statistics are compiled according to single items or combinations of items… it may be required to know the numbers of persons engaged in certain occupations, classified according to sex, groups of ages, and certain nativities. In such cases persons are counted according to combinations of items. A method for compiling such statistics must be capable of counting or adding units according to single statistical items or combinations of such items. The labor and expense of such tallies, especially when counting combinations of items made by the usual methods, are very great.
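A rough modern analogue of the tally Hollerith describes is a cross-tabulation over combinations of items. The sketch below is my own illustration in Python; the census-style records are invented.

```python
# Counting persons by combinations of "statistical items" (toy data).
from collections import Counter

records = [
    {"occupation": "farmer", "sex": "M", "age_group": "30-39"},
    {"occupation": "farmer", "sex": "F", "age_group": "30-39"},
    {"occupation": "clerk",  "sex": "M", "age_group": "20-29"},
    {"occupation": "farmer", "sex": "M", "age_group": "30-39"},
]

# Tally units according to a chosen combination of items.
combo = Counter((r["occupation"], r["sex"], r["age_group"]) for r in records)

for items, count in combo.most_common():
    print(items, count)   # ('farmer', 'M', '30-39') 2, ...
```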

In Before the Computer, James Cortada describes the results of the first large-scale machine learning project:

The U.S. Census of 1890… was a milestone in the history of modern data processing…. No other occurrence so clearly symbolized the start of the age of mechanized data handling…. Before the end of that year, [Hollerith’s] machines had tabulated all 62,622,250 souls in the United States. Use of his machines saved the bureau $5 million over manual methods while cutting sharply the time to do the job. Additional analysis of other variables with his machines meant that the Census of 1890 could be completed within two years, as opposed to nearly ten years taken for fewer data variables and a smaller population in the previous census.

But the efficient output of the machine was regarded by some as “fake news.” In 1891, the Electrical Engineer reported (quoted in Patricia Cline Cohen’s A Calculating People):

The statement by Mr. Porter [the head of the Census Bureau, announcing the initial count of the 1890 census] that the population of this great republic was only 62,622,250 sent into spasms of indignation a great many people who had made up their minds that the dignity of the republic could only be supported on a total of 75,000,000. Hence there was a howl, not of “deep-mouthed welcome,” but of frantic disappointment.  And then the publication of the figures for New York! Rachel weeping for her lost children and refusing to be comforted was a mere puppet-show compared with some of our New York politicians over the strayed and stolen Manhattan Island citizens.

A century later, no matter how much more efficiently machines learned, they were still accused of creating and disseminating fake news. On March 24, 2011, the U.S. Census Bureau delivered “New York’s 2010 Census population totals, including first look at race and Hispanic origin data for legislative redistricting.” In response to the census data showing that New York has about 200,000 fewer people than originally thought, Senator Chuck Schumer said, “The Census Bureau has never known how to count urban populations and needs to go back to the drawing board. It strains credulity to believe that New York City has grown by only 167,000 people over the last decade.” Mayor Bloomberg called the numbers “totally incongruous” and Brooklyn borough president Marty Markowitz said “I know they made a big big mistake.” [The results of the 1990 census were also disappointing and were unsuccessfully challenged in court, according to the New York Times].

Complaints by politicians and other people have not slowed down the continuing advances in using computers in ingenious ways for increasingly sophisticated statistical analysis. In 1959, Arthur Samuel experimented with teaching computers how to beat humans at checkers, calling his approach “machine learning.”

Later applied successfully to modern challenges such as spam filtering and fraud detection, the machine-learning approach relied on statistical procedures that found patterns in the data or classified the data into different buckets, allowing the computer to “learn” (e.g., optimize the performance—accuracy—of a certain task) and “predict” (e.g., classify or put in different buckets) the type of new data that is fed to it. Entrepreneurs such as Norman Nie (SPSS) and Jim Goodnight (SAS) accelerated the practical application of computational statistics by developing software programs that enabled the widespread use of machine learning and other sophisticated statistical analysis techniques.
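As a concrete illustration of this learn-and-predict loop, here is a minimal spam-filter sketch. It assumes scikit-learn is installed; the toy messages and labels are invented, and it is only meant to show the pattern described above, not any particular production system.

```python
# Toy spam filter: "learn" word statistics per bucket, then "predict" buckets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "win a free prize now", "cheap meds limited offer",     # spam examples
    "meeting moved to tuesday", "please review the draft",  # ham examples
]
train_labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(train_texts), train_labels)

# Classify new, unseen messages into the most probable bucket.
new_texts = ["free prize offer", "review the meeting notes"]
print(list(model.predict(vectorizer.transform(new_texts))))  # e.g. ['spam', 'ham']
```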

In his 1959 paper, Samuel described machine learning as particularly suited for very specific tasks, in distinction to the “Neural-net approach,” which he thought could lead to the development of general-purpose learning machines. The neural networks approach was inspired by a 1943 paper by Warren S. McCulloch and Walter Pitts in which they described networks of idealized and simplified artificial “neurons” and how they might perform simple logical functions, leading to the popular description of today’s neural networks as “mimicking the brain.”

Over the years, the popularity of “neural networks” has gone up and down through a number of hype cycles, starting with the Perceptron, a 2-layer neural network that was considered by the US Navy to be “the embryo of an electronic computer that… will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” In addition to failing to meet these lofty expectations—similar in tone to today’s perceived threat of “super-intelligence”—neural networks suffered from fierce competition from the academics who coined the term “artificial intelligence” in 1955 and preferred the manipulation of symbols rather than computational statistics as a sure path to creating a human-like machine.

It didn’t work and “AI Winter” set in. With the invention and successful application of “backpropagation” as a way to overcome the limitations of simple neural networks, statistical analysis was again in the ascendant, now cleverly labeled “deep learning.” In Neural Networks and Statistical Models (1994), Warren Sarle explained to his worried and confused fellow statisticians that the ominous-sounding artificial neural networks

are nothing more than nonlinear regression and discriminant models that can be implemented with standard statistical software… like many statistical methods, [artificial neural networks] are capable of processing vast amounts of data and making predictions that are sometimes surprisingly accurate; this does not make them “intelligent” in the usual sense of the word. Artificial neural networks “learn” in much the same way that many statistical algorithms do estimation, but usually much more slowly than statistical algorithms. If artificial neural networks are intelligent, then many statistical methods must also be considered intelligent.
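Sarle’s point is easy to demonstrate in code: a one-hidden-layer network fit by gradient descent on a squared-error loss is just a nonlinear regression. The sketch below is my own illustration with invented data (NumPy assumed), not code from Sarle’s paper.

```python
# A one-hidden-layer "neural network" as nonlinear least-squares regression.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)           # predictor
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)   # noisy response

H = 10                                    # hidden units
W1 = 0.5 * rng.standard_normal((1, H))    # input-to-hidden weights
b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1))    # hidden-to-output weights
b2 = np.zeros(1)
lr = 0.1

for step in range(5000):
    # Forward pass: a parametric nonlinear regression model.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y                       # residuals, as in ordinary regression

    # Backpropagation = gradients of the mean squared error.
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)        # chain rule through tanh
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("final mean squared error:", float((err ** 2).mean()))
```

Rename “hidden units” to “basis functions” and “backpropagation” to “the gradient of the loss,” and the statistician’s translation starts to write itself.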

Sarle provided his colleagues with a handy dictionary translating the terms used by “neural engineers” to the language of statisticians (e.g., “features” are “variables”). In anticipation of today’s “data science” and predictions of algorithms replacing statisticians (and even scientists), Sarle reassured them that no “black box” can substitute for human intelligence:

Neural engineers want their networks to be black boxes requiring no human intervention—data in, predictions out. The marketing hype claims that neural networks can be used with no experience and automatically learn whatever is required; this, of course, is nonsense. Doing a simple linear regression requires a nontrivial amount of statistical expertise.

In his April 2018 congressional testimony, Mark Zuckerberg agreed that relying blindly on black boxes is not a good idea: “I don’t think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems that people don’t understand how they’re making decisions.” Still, Zuckerberg used the aura, the enigma, the mystery that masks inconvenient truths, everything that has been associated with the hyped marriage of computers and statistical analysis, to assure the public that the future will be great: “Over the long term, building AI tools is going to be the scalable way to identify and root out most of this harmful content.”

Facebook’s top AI researcher Yann LeCun is “less optimistic, and a lot less certain about how long it would take to improve AI tools.” In his assessment, “Our best AI systems have less common sense than a house cat.” An accurate description of today’s not very intelligent machines, and reminiscent of what Samuel said in his 1959 machine learning paper:

Warren S. McCulloch has compared the digital computer to the nervous system of a flatworm. To extend this comparison to the situation under discussion would be unfair to the worm since its nervous system is actually quite highly organized as compared to [the most advanced artificial neural networks of the day].

Over the past sixty years, artificial intelligence has advanced from being not as smart as a flatworm to having less common sense than a house cat.

Originally published on Forbes.com

Posted in AI, Computer History, deep learning, Machine Learning, Statistics

Data Will Continue Eating the World in the 2020s

It’s difficult to make predictions, especially about the future. But one fairly safe prediction is that data will continue eating the world in 2020 and the coming decade. The most important tech trend since the 1990s will no doubt accentuate its presence in our lives, for better or for worse.

Read more here

Posted in AI, Big Data Analytics, Data growth, Data is eating the world, Predictions

Best of 2019: Data is Eating the World

[June 30, 2019] Data is eating the world. It has been the most important tech trend since the 1990s. But to paraphrase Robert Solow, “You can see data everywhere but in the Internet statistics.” The most astute and influential observers of the tech landscape have been counting and reporting on the number of devices, the number of users, the volume of eCommerce, the number of online ads, the number of apps, the number of images and videos, and so on. What’s largely been missing is data on data, on what drives the growth of all of these digital entities and what drives new businesses and business models and innovation and change.

Identifying “global trends that drive innovation and change” has been the calling card of one of the most influential observers of the tech landscape and a successful investor in and shaper of this landscape—Mary Meeker. Since 1995, Meeker has delivered a 30-minute presentation of more than 300 slides summarizing all the key stats that’s fit to present in a given year. This (almost) annual event never failed to become, for at least a few days, the talk of the global tech town, and it has served as a reliable and reliably updated online source of data on the state of the tech economy. The only data missing has been data on data.

According to various reports, while delivering her 2019 presentation on June 11, Meeker told her audience: “If it feels like we’re all drinking from a data firehose, it’s because we are.” Still, it’s only in the 5th section of the report, starting on slide #121, that she gets to “data growth.” Most of this section is devoted to an interesting discussion of how, before 1995, businesses used human data and insights to improve customer experiences and, after 1995, shifted to using digital data and insights to do the same. Meeker provides many examples of the startups (and their revenue growth) providing the tools that allow established businesses to improve customer experience and satisfaction. Only on slides #151-157 does she provide data—from an IDC study—and observations on data growth, volume, and stewardship, before moving on (there are 333 slides in the 2019 edition) to discussing Internet usage numbers, the open Internet, cybersecurity, and other topics and trends.

The IDC study is also quoted in last year’s report, on slide #189, showing data’s “torrid growth” since 2006. It’s possible that this was the first time Meeker had used data on data from this study, although the study has been published annually since 2007 (I checked some but not all of the annual editions of Meeker’s Internet Trends report).

If it sounds like I’m criticizing Meeker, let me clarify: I sincerely believe that for almost a quarter of a century she has performed an important public service by sharing with us the data she has collected with the support of her well-heeled employers. Now that she has recently established her own $1+ billion growth fund, Bond, she is providing an archive of all her presentations since 1995—a treasure trove of historical data on the tech economy that will serve entrepreneurs, executives, and researchers for years to come.

And there has been data on data. In 2011, Martin Hilbert and Priscila Lopez published their study which estimated that in 1986, 99.2% of all storage capacity was analog, but in 2007, 94% of storage capacity was digital, a complete reversal of roles (in 2002, digital data storage surpassed non-digital for the first time).

It is this remarkably rapid shift from analog to digital that is encapsulated in “(digital) data is eating the world.” I thought I coined the phrase two years ago, when I wrote:

In eating the world, data has not only transformed the management of IT and the IT industry, it has also blurred previously rigid industry boundaries and destroyed the sharp distinction between what is considered “consumer” and what is considered “enterprise.” When everything looks like ones and zeros and you focus on collecting and mining as many ones and zeros as possible, old categories just fade away.

Alas, Google tells me the first (?) mention of “data is eating the world” was by Denny Britz in 2013. This was, of course, a brilliant take on Marc Andreessen’s 2011 observation that “software is eating the world.” More on this later, but first, a quick overview of other key attempts to come up with stats on data volume and growth (for a full account, see my A Very Short History of Big Data).

In October 2000, Peter Lyman and Hal Varian at UC Berkeley published “How Much Information?”—the first comprehensive study to quantify, in computer storage terms, the total amount of new and original information (not counting copies) created in the world annually (in 1999, the world produced 1.5 exabytes of original data).  In March 2007, John Gantz, David Reinsel and other researchers at IDC published the first study to estimate and forecast the amount of digital data created and replicated each year (161 exabytes in 2006, estimated to increase more than six-fold to 988 exabytes in 2010, or doubling every 18 months).
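The 18-month doubling time follows directly from those two IDC data points; here is a quick back-of-the-envelope check (my own arithmetic, using only the figures quoted above):

```python
# Implied doubling time for 161 EB (2006) growing to 988 EB (2010).
import math

start_eb, end_eb, years = 161, 988, 4
doublings = math.log2(end_eb / start_eb)         # ~2.6 doublings in 4 years
months_per_doubling = years * 12 / doublings     # ~18 months
print(f"{doublings:.2f} doublings, one every {months_per_doubling:.1f} months")
```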

Full disclosure: I commissioned both studies and they were sponsored by my employer at the time, EMC (In recent years, the IDC study has been sponsored by Seagate). EMC was focused on just one segment of the industry—computer storage—and was a leading example of the re-structuring of the industry in the 1990s from a bunch of vertically integrated companies to a bunch of companies focused on just one “horizontal” IT layer (e.g., semiconductors, storage, operating systems). Cisco, focused on (and dominating) computer networking, was also interested in terabytes and exabytes and in June 2008 started releasing an annual “Visual Networking Index”—tracking and forecasting IP traffic, predicting that it will nearly double every two years through 2012, reaching half a zettabyte.

The interest in lots of bytes and the use of forecasting language reminiscent of Moore’s Law were no accident. Both EMC and Cisco represented not only the restructuring of the industry but also a move away from the dominant computer industry paradigm so well encapsulated by Moore’s Law. Faster and faster processors have been perceived by entrepreneurs, established IT companies, IT buyers, and investors/observers such as Mary Meeker as the single most important driver of growth and innovation ever since the term “data processing” was coined in 1954.

Data processing. It’s not that all industry participants ignored data. But data was perceived as the effect and not the cause, the result of having faster and faster devices processing more data and larger and larger containers (also enabled by Moore’s Law) to store it.

By 1995, when Mary Meeker first delivered her Internet Trends presentation, that paradigm was disrupted by data—it became the most important driver of growth and innovation. With his “software is eating the world” Andreessen tried to capture the move away from the processor-centric paradigm, from hardware at the center of everything. You don’t need increasingly powerful processors (hardware is now a commodity, Moore’s Law is slowing down, etc.) when you have powerful software.

“We are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy,” wrote Andreessen. “The single most dramatic example of this phenomenon of software eating a traditional business,” was Amazon, according to Andreessen. Similarly, “today’s largest direct marketing platform is a software company — Google.”

I would argue that a more accurate description of Amazon and Google is that they are both data-driven companies. Of course, both companies have built their impressive market caps on the skills and creativity of their software engineers. Just like hardware, software is a very important foundation for the success of top tech companies and for a time, can serve as a competitive differentiator. But Amazon is a prime example of how the “amazing software engine for selling virtually everything online” (per Andreessen in 2011) can be replicated by competitors’ software engineers—see Walmart and Target, for example.

Software innovation has been very important in Amazon’s success, but more important have been its data mining skills and creativity. Having been born digital, living the online life, meant not only excelling in software development, but also innovating in the collection and analysis of the mountains of data produced by online transactions. Data has taken over from hardware and software as the center of everything, the lifeblood of tech companies. And increasingly, the lifeblood of any type of business.

Which is why for Jeff Bezos there are no “industries,” only ones and zeros to collect, to mine, and to move around, which explains why Amazon today is so much more than “the world’s largest bookseller” (as Andreessen described it in 2011), so much more than just an eCommerce company. Google has also used its data smarts to diversify beyond its origins as a “direct marketing company,” and the title of the most innovative in this category belongs today to another data-obsessed company—Facebook.

One of the early investors in Facebook (and other data-driven companies), Andreessen wrote “software is eating the world” almost 20 years after he made, with the Mosaic/Netscape browser, the second most important contribution to the big data big bang; the most important was Tim Berners-Lee’s invention of the World Wide Web.

The Web is a digital platform that makes the consuming, creating, and moving of data far easier than it has ever been, making any additional member in the Internet community (50% of the world’s population today, up from 20% ten years ago), a contributor to the exponential growth of data. Moreover, each additional link between people, devices, and other online entities (“things”), accelerates the rate of data growth.

A parallel development in the early 1990s—which Meeker described in the Data Growth section of her 2019 presentation, and a key demand-side contributor to the re-structuring of the IT industry—was a radical change in business executives’ attitude towards data. They stopped throwing it away and started storing it for longer periods of time, sharing it among internal departments and even with suppliers and customers, and most important, analyzing it to improve various business activities, customer relations, and decision-making.

These two trends are merging today in the cloud, turning all businesses into data-driven businesses (a.k.a “digital transformation”). The most important recent tech development, what is generally known as “AI” and what is more accurately labeled “deep learning,” is sophisticated statistical analysis of lots and lots of data, the merging of software development and data mining skills (a.k.a “data science”).

Data is eating the world.

Originally published on Forbes.com

Posted in AI, Big Data Analytics, Data growth, Data is eating the world, Data Mining, Data Science