82 Startups Disrupting the Healthcare Industry


CB Insights:

Care Planning: Companies creating tools to aid in the development of, and compliance with, treatment plans. An example is Dbaza Health, which built a digital solution to complement chronic disease management.

Supply Management: Companies developing digital tools to aid in handling the delivery and logistics of medical supplies within the hospital. Lab Sensor Solutions tracks the temperature and location of materials such as vaccines, blood, and pharmaceuticals.

Diagnostics: Companies developing diagnostic solutions that have a digital component. Examples include Lumiata, which has built a predictive analytics platform to aid the diagnosis and management of disease states, and Genalyte, developers of a rapid, point-of-care, blood diagnostics platform.

Communication: Companies developing tools to facilitate intra-hospital communication between healthcare workers or healthcare workers and patients. An example is Voalte, which developed a secure messaging service for nurses and physicians.

EMR/Practice Management: Companies such as Modernizing Medicine focused on either replacing or complementing conventional health record systems.

Surgery: Companies developing digital tools designed to be used by surgeons or in the operating room. Gauss Surgical has developed a surgical blood loss monitoring system that runs on an iPad.

Referrals: Companies focused on platforms intended to aid physicians when choosing a specialist for the transfer of care. AristaMD, for example, has developed a software intelligence platform to help primary care physicians.

Care Coordination: Companies working to ensure all parties in the care process remain informed and engaged. HealthLoop developed a patient engagement platform intended to facilitate patient-physician communication throughout the care continuum.

Patient Experience: Companies such as NarrativeDx working to either measure or improve the patient experience within the hospital.

Infection Control: Companies developing tools to help maintain proper hygiene. One such company is Xenex, which develops robots that use UV light to disinfect hospital rooms and consequently reduce hospital-acquired infection rates.

Readmissions/Emergency Department: Companies such as AnalyticsMD working to optimize patient intake or experience in the emergency department.

Hospital Navigation: Companies such as Gozio Health that develop digital tools to help patients and staff find their way around the hospital.

Medication Management: Companies working in inventory management, medication delivery, and/or prescription verification. An example is Talyst, which, among other solutions, develops medication inventory software.

Patient Monitoring: Companies working on continuous bedside or remote monitoring of patient status. One such company is MediBeacon which offers real-time tracking of kidney function.

Radiology: Companies such as Trice Medical or Imagia developing advanced visualization and image analysis tools to aid physicians when making a diagnosis.


DeepBench from Baidu: Benchmarking Hardware for Deep Learning


Source: Greg Diamos and Sharan Narang, “The need for speed: Benchmarking deep learning workloads,” O’Reilly AI Conference

At the O’Reilly Artificial Intelligence conference, Baidu Research announced DeepBench, an open source benchmarking tool for evaluating the performance of deep learning operations on different hardware platforms. Greg Diamos and Sharan Narang of Baidu Research’s Silicon Valley AI Lab talked at the conference about the motivation for developing the benchmark and why faster computers are crucial to the continued success of deep learning.

The harbinger of the current AI Spring, deep learning is a machine learning method that uses "artificial neural networks," moving vast amounts of data through many layers of hardware and software, each layer coming up with its own representation of the data and passing what it "learned" to the next layer. As a widely publicized deep learning project demonstrated four years ago, feeding such an artificial neural network images extracted from 10 million videos can result in the computer (in this case, an array of 16,000 processors) learning to identify and correctly label an image of a cat. One of the leaders of that "Google Brain" project was Andrew Ng, today Chief Scientist at Baidu and head of Baidu Research.
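The layer-by-layer "representation" idea can be sketched in a few lines of NumPy. Everything below (the layer sizes, the ReLU activation, the random weights) is illustrative; it is not the architecture of the Google Brain experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A tiny 3-layer network: each layer transforms the previous layer's
# representation and passes the result onward to the next layer.
layer_sizes = [64, 32, 16, 2]   # input -> hidden -> hidden -> output
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)          # each layer's learned representation
    return x @ weights[-1]       # final layer produces the prediction

x = rng.normal(size=(1, 64))     # stand-in for a flattened image
print(forward(x).shape)          # (1, 2)
```

In a real system, training would adjust each weight matrix so that successive representations become progressively more useful for the task.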

Research areas of interest to Baidu Research include image recognition, speech recognition, natural language processing, robotics, and big data. Its Silicon Valley AI Lab has deep learning and systems research teams that work together "to explore the latest in deep learning algorithms as well as find innovative ways to accelerate AI research with new hardware and software technologies."

DeepBench is an attempt to accelerate the development of the hardware foundation for deep learning, by helping hardware developers optimize their processors for deep learning applications, and specifically, for the “training” phase in which the system learns through trial and error. “There are many different types of applications in deep learning—if you are a hardware manufacturer, you may not understand how to build for them. We are providing a tool for people to help them see if a change to a processor [design] improves performance and how it affects the application,” says Diamos.  One of the exciting things about deep learning for him (and no doubt for many other researchers) is that “as the computer gets faster, the application gets better and the algorithms get smarter.”
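DeepBench itself is implemented against low-level vendor libraries; as a rough sketch of what such a microbenchmark measures, here is a Python timing of a dense matrix multiply (GEMM), the core operation in neural network training. The shapes and repeat count are illustrative assumptions, not entries from the DeepBench suite:

```python
import time
import numpy as np

def time_gemm(m, n, k, repeats=10):
    """Time an (m, k) x (k, n) matrix multiply and report achieved GFLOP/s."""
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    a @ b                                   # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = (time.perf_counter() - start) / repeats
    gflops = 2.0 * m * n * k / elapsed / 1e9  # 2*m*n*k flops per GEMM
    return elapsed, gflops

# A layer shape of the kind a training benchmark might exercise
# (sizes are hypothetical, chosen only for illustration).
elapsed, gflops = time_gemm(1024, 1024, 1024)
print(f"{elapsed * 1e3:.2f} ms, {gflops:.1f} GFLOP/s")
```

Comparing such numbers across processors, as DeepBench does for a curated set of operation shapes, is what lets a hardware designer see whether a change actually helps deep learning workloads.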

Case in point is speech recognition. Or more specifically, DeepSpeech, Baidu Research's "state-of-the-art speech recognition system developed using end-to-end deep learning." The most important aspect of this system is its simplicity, says Diamos: audio on one end, text on the other, and a single learning algorithm (a recurrent convolutional neural network) sitting in the middle. "We can take exactly the same architecture and apply it to both English and Mandarin with greater accuracy than systems we were building in the past," says Diamos.

In Mandarin, the system is more accurate at transcribing audio to text than native speakers, who may have difficulty understanding what is said because of noise or accent. The data set used by DeepSpeech is very large in part because it was created by mixing hours of synthetic noise into the raw audio, explains Narang. The largest publicly available data set contains about 2,000 hours of audio recordings, while the one used by DeepSpeech clocks in at 100,000 hours, or 10 terabytes of data.
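The noise-mixing augmentation Narang describes can be sketched as follows. The `mix_noise` helper, the 440 Hz test tone, and the target signal-to-noise ratio are all illustrative assumptions, not DeepSpeech's actual pipeline:

```python
import numpy as np

def mix_noise(clean, noise, snr_db):
    """Mix a noise clip into clean audio at a target signal-to-noise
    ratio in dB, scaling the noise to hit the requested SNR."""
    noise = np.resize(noise, clean.shape)   # tile/trim noise to match length
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(1)
# One second of a 440 Hz tone at 16 kHz stands in for a clean utterance.
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.normal(size=8000)               # half-length synthetic noise clip
noisy = mix_noise(clean, noise, snr_db=10.0)
```

Running every clean recording through many such noise mixes is how a few thousand hours of raw audio can be multiplied into a far larger training set.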

The approach taken by the developers of DeepSpeech is superior to other approaches, argue Narang and Diamos. Traditional speech recognition systems, built on a "hand-designed algorithm," get more accurate with more data but eventually saturate, requiring a domain expert to develop a new algorithm. The hybrid approach adds a deep convolutional neural network; the result is better scaling, but performance again eventually saturates. DeepSpeech uses deep learning as the entire algorithm and achieves continuous improvement in accuracy with larger data sets and larger models (more and bigger layers).

Bigger is better. But to capitalize on this feature (pun intended) of deep learning, you need faster computers. “The biggest bottleneck,” says Narang, “is training the model.” He concludes: “Large data sets, a complex model with many layers, and the need to train the model many times is slowing down deep learning research. To make rapid progress, we need to reduce model training time. That’s why we need tools to benchmark the performance of deep learning training. DeepBench allows us to measure the time it takes to perform the underlying deep learning operation. It establishes a line in the sand that will encourage hardware developers to do better by focusing on the right issues.”

Originally published on Forbes.com


Robots Take Over the World




Source: ABI/Massachusetts Technology Collaborative




The New Servant Class: 50 Startups Developing Home Robots


CB Insights:

Consumer robotics startups have captured 25% of global robotics deal share over the last five years. Around 40% of those consumer deals went to social robots such as Anki, UBTECH, and Rokid. Educational robots that teach children how to code saw 7 deals each in 2014 and 2015, and deals this year are projected to surpass that number at the current run rate. Service robots drew over 10 deals last year, up from fewer than 5 in 2014.

  • Social: Social robots, including companion and entertainment robots for homes, have received the greatest share of consumer robotics deals. Humanoid robotics startup UBTECH joined the unicorn club this year after raising a $100M Series B round from CDH Investments, Goldstone Investment, and CITIC Securities International. China-based Turing Robot, which develops robotics operating systems, announced the launch of a companion robot for kids at the price of a smartphone. London-based Olly raised $10M last quarter from Alliance Capital Ventures and Lightning Capital. Another startup, Anki, raised $52.5M in Q2’16 from Index Ventures, Two Sigma Ventures, JPMorgan Chase & Co., and Andreessen Horowitz.
  • Personal drones: Intel Capital invested $60M in China-based Yuneec. You can read more about Intel, Google, Foxconn, and other corporations investing in private robotics startups here. Other personal drone startups, most of them in early stages of funding, include xCraft, UVify, OpenROV, and EHANG. Another company, 3D Robotics, which recently announced that it has exited hardware, has been excluded from the map. DJI Innovations, which also develops industrial robots, is included in this market map because it also manufactures the consumer drones Mavic and Phantom.
  • Educational: Robots that teach children to code have piqued the interest of investors in recent years. The most well-funded startup in this category is Wonder Workshop, which has raised $36M in equity funding from investors including CRV, Madrona Venture Group, and Google Ventures. Another startup, Modular Robotics, is backed by Foundry Group. Sequoia Capital China backed China-based Makeblock in a $6M Series A round in 2014.
  • Service Robots: This category includes robotic arms, personal transportation robots, and robots that perform household chores like cooking, vacuuming, bartending, and even cleaning your fish tank. So far this year, 5 companies have raised equity funding, including personal transportation robot maker Ninebot and desktop robotic arm maker Dobot.

Internet of Things: To Fog or to Cloud?


BI Intelligence:

Cloud computing, usually just called “the cloud,” involves delivering data, applications, photos, videos, and more over the Internet to data centers. The Internet of Things, meanwhile, is the term for the connection of devices (other than the standard ones such as computers and smartphones) to the Internet. Automobiles, kitchen appliances, and even heart monitors could all be connected through the IoT. And as the Internet of Things explodes in the next few years, more types of devices will join that list.

Cloud computing and the IoT both serve to increase efficiency in our everyday tasks, and the two have a complementary relationship: the IoT generates massive amounts of data, and cloud computing provides a pathway for that data to travel to its destination.

Some of the more popular IoT cloud platforms on the market include Amazon Web Services, GE Predix, Google Cloud IoT, Microsoft Azure IoT Suite, IBM Watson, and Salesforce IoT Cloud.

Fog computing is more than just a clever name. Also known as edge computing, it provides a way to gather and process data at local computing devices instead of in the cloud or at a remote data center. Under this model, sensors and other connected devices send data to a nearby edge computing device. This could be a gateway device, such as a switch or router, that processes and analyzes this data.
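A minimal sketch of that pattern, assuming a hypothetical gateway that aggregates a window of sensor readings locally and forwards only a compact summary upstream (the class and field names are invented for illustration):

```python
from statistics import mean

class EdgeGateway:
    """Aggregates raw sensor readings locally; sends only summaries upstream."""

    def __init__(self, window=5):
        self.window = window
        self.buffer = []          # raw readings held at the edge
        self.uplink = []          # stand-in for messages sent to the cloud

    def ingest(self, reading):
        self.buffer.append(reading)
        if len(self.buffer) >= self.window:
            summary = {"mean": mean(self.buffer),
                       "max": max(self.buffer),
                       "n": len(self.buffer)}
            self.uplink.append(summary)   # one message instead of `window` raw samples
            self.buffer.clear()

gateway = EdgeGateway(window=5)
for temp in [21.0, 21.2, 20.9, 35.5, 21.1, 21.0, 21.3, 21.2, 21.1, 21.0]:
    gateway.ingest(temp)
print(len(gateway.uplink))   # 2 summaries for 10 raw readings
```

The bandwidth saving is the point: ten temperature samples cross the network as two small summary messages, while anomalies (the 35.5 spike) still surface in the `max` field.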

IoT Big Data & Analytics

Big data is exactly what it sounds like: it’s a lot of data. The Internet of Things is allowing us to generate more data than ever before, and the eye-popping numbers are still climbing. The “Internet of Everything,” which consists of all people and things connected to the Internet, will generate 507.5 zettabytes of data by 2019, according to Cisco. For context, one zettabyte = one trillion gigabytes.
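A quick sanity check of that unit conversion; the figures are those cited above, and the variable names are just for illustration:

```python
# SI prefixes: giga = 10^9, zetta = 10^21,
# so one zettabyte is 10^12 (one trillion) gigabytes.
GIGA, ZETTA = 10 ** 9, 10 ** 21

gb_per_zb = ZETTA // GIGA          # 1 ZB = 10^12 GB
total_zb = 507.5                   # Cisco's "Internet of Everything" projection
total_gb = total_zb * gb_per_zb

print(f"{gb_per_zb:,} GB per ZB -> {total_gb:.3e} GB total")
```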

BI Intelligence believes that fog computing will be instrumental in analyzing all of this data, as it offers several advantages that a cloud computing model does not: quicker data analysis; reduced costs for data transmission, storage, and management; and enhanced network and application reliability.



Asimo The Humanoid Robot in Action


In 2000…

In 2016…

And on Saturday Night Live…


Visually Linking AI, Machine Learning, Deep Learning, Big Data and Data Science



Source: Battle of the Data Science Venn Diagrams

HT: KDnuggets


What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?

Over the past few years AI has exploded, especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement) – images, text, transactions, mapping data, you name it.

