1 – Is DeepMind’s Health-Care App a Solution, or a Problem?

A new app from Google’s DeepMind called Streams will give clinicians at UK hospitals access to patients’ histories and test results. After signing a five-year contract with the UK’s National Health Service, DeepMind now has access to the healthcare information of more than 1.6 million patients registered with one of the Royal Free NHS Trust’s three London hospitals. The contract and app are a trial run at streamlining healthcare administration and delivery, and DeepMind claims the system could save healthcare administrators over half a million hours per year. Critics have been quick to scrutinize the large amount of data that DeepMind would otherwise not have access to, but an analysis of this initial foray may prove imperative as machine and deep learning enter the prized but sheltered healthcare sector.

(Read the full article on MIT Technology Review)

2 – Deep Learning, Cloud Power Nvidia

Last week, Nvidia reported record quarterly sales of $2 billion, an increase of 54 percent from one year ago. Chief Executive Jen-Hsun Huang said:

“We had a breakout quarter – record revenue, record margins and record earnings were driven by strength across all product lines. Our new Pascal GPUs are fully ramped and enjoying great success in gaming, VR, self-driving cars and datacenter AI computing.”

Nvidia has recently provided its chipsets to big-name consumer product makers, including Nintendo (for the Switch game console’s graphics) and Microsoft (for the Surface Studio desktop computer), as well as GPU-enabled servers for Amazon Web Services, Microsoft, IBM, and Alibaba. In addition to Tesla agreeing to install the Nvidia Drive PX 2 computer in its newer autonomous cars, Nvidia has also formed a collaborative research partnership in advanced self-driving technology with New York University’s pioneering deep learning team. Nvidia’s (and others’) success in this domain speaks to the emergence of cloud-based infrastructure integrated with AI technologies.

(Read the full article on Forbes and Nvidia News)

3 – Key Facebook Engineer Departs To Start Deep Learning Hardware Company

Facebook engineer Serkan Piantino, who helped found the company’s AI research lab with computer scientist Yann LeCun, is leaving Facebook to launch Top 1 Networks. The startup will provide access to Nvidia’s GPUs as a cloud-based service, similar to Amazon and other related offerings. Regarding what makes his company’s services better than existing providers’, Piantino said:

“They don’t have the latest and greatest Pascal cards from Nvidia. Once you begin to get one generation behind, the performance decreases drastically.”

Top 1 Networks is still at a very early stage, with Piantino as its sole employee, funding the venture out of his own pocket. To date, he has built his first hardware prototype and is in talks with a potential first group of paying customers.

(Read the full article on Forbes)

4 – Minority Report-style AI Learns to Predict if People are Criminals from Their Facial Features

Two researchers from Jiao Tong University published a research paper last week claiming they have created an algorithm that can identify a convicted criminal based on facial features alone. The system was trained on a database of more than 1,600 images, roughly half of which were of convicted criminals, and performed with 90 percent accuracy on a set of 186 photos. The implications of this research are obviously controversial, with some worried that China could add the technology to its AI-powered security initiatives (which already include predictive policing). Dr Richard Tynan, a technologist at Privacy International, commented:

“It demonstrates the arbitrary and absurd correlations that algorithms, AI, and machine learning can find in tiny datasets. This is not the fault of these technologies but rather the danger of applying complex systems in inappropriate contexts.”

While researchers Wu and Xiang claim that the algorithm discerns a higher degree of dissimilarity in the faces of criminals, other research initiatives in this domain will likely be needed to validate these findings.
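For readers curious about the general shape of such an experiment, the sketch below is purely illustrative: it trains a generic binary classifier on synthetic stand-in “face” vectors and measures accuracy on a held-out set the size mentioned above. The data, features, and choice of classifier are all assumptions for illustration; the paper’s actual pipeline is not reproduced here.

```python
# Illustrative sketch only -- synthetic data and a generic classifier,
# not the methodology from the Wu et al. paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for ~1,600 labeled face images (here: random 64x64 vectors),
# half labeled 1 ("convicted") and half labeled 0.
n_samples, img_size = 1600, 64 * 64
X = rng.normal(size=(n_samples, img_size))
y = np.concatenate([np.ones(n_samples // 2, dtype=int),
                    np.zeros(n_samples // 2, dtype=int)])

# Hold out a set comparable in size to the 186 photos mentioned above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=186, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On random data like this, accuracy hovers near chance, which is part of Tynan’s point: strong-looking results on small datasets demand careful validation before being applied in high-stakes contexts.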

(Read the full article on The Telegraph and research paper at arXiv.org)

5 – Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System

This week, Google announced that its Neural Machine Translation (GNMT) system has been extended to translate between multiple languages, a step toward scaling up to all 103 languages that Google Translate supports. Instead of changing the underlying system architecture, GNMT uses a “token” at the beginning of the input sentence to specify the required target language. This method improves translation quality and also makes possible “zero-shot translation”, essentially allowing the system to translate between language pairs to which it has not been exposed during training. Google looked into the system as it performed zero-shot translations to discern the underlying approach, and determined that GNMT “must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations”. The multilingual system is currently being used in production for 10 of the 16 recently released language pairs.
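As a rough illustration of the token mechanism described above, a single multilingual model can be steered simply by prepending a target-language marker to the source text. The token format (“<2es>”) and the helper functions here are assumptions for illustration, not Google’s actual implementation:

```python
# Minimal sketch of the "target-language token" idea, assuming a hypothetical
# shared multilingual seq2seq model; only the prepended token tells it which
# language to produce.
def add_target_token(source_sentence: str, target_lang: str) -> str:
    """Prepend an artificial token naming the desired output language,
    e.g. '<2es> How are you?' for English-to-Spanish."""
    return f"<2{target_lang}> {source_sentence}"

def translate(model, source_sentence: str, target_lang: str) -> str:
    # One shared model handles every language pair; zero-shot pairs reuse
    # the same mechanism even if the model never saw parallel data for
    # that specific pair during training.
    tagged = add_target_token(source_sentence, target_lang)
    return model.decode(tagged)  # placeholder for the actual inference call

# Example: the same English sentence routed to Spanish and to Korean.
print(add_target_token("How are you?", "es"))   # -> "<2es> How are you?"
print(add_target_token("How are you?", "ko"))   # -> "<2ko> How are you?"
```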

(Read the full article on Google’s Research Blog)

 

Image credit: TechCrunch

MARKET RESEARCH x INDUSTRY TRENDS

TechEmergence conducts direct interviews and consensus analysis with leading experts in machine learning and artificial intelligence. Stay ahead of the industry with charts, figures, and insights from our unparalleled network, including executives from Facebook, Google, Baidu, Yahoo!, MIT, Stanford and beyond: