1 – How Tech Giants Are Devising Real Ethics for Artificial Intelligence

A new report issued this week by Stanford University gives a 2016 update on the 100-year study of AI, pioneered by Stanford alumnus and Microsoft researcher Eric Horvitz in 2009 and co-led by bioengineering and computer science professor Russ Altman. The report underscores the significance of new efforts by top tech companies to set standards for AI ethics. According to those involved in creating the industry partnership – which at present includes researchers from Alphabet, Amazon, Facebook, IBM, and Microsoft – the effort aims to ensure that AI is developed to help, not harm, human beings. A hush-hush atmosphere currently surrounds the industry group, whose name has not yet been announced, though inside sources describe it as modeled on the Global Network Initiative, a human-rights organization advocating for freedom of expression and privacy rights.

(Read the full article on The New York Times and referenced AI report at Stanford)

2 – How a Japanese Cucumber Farmer is Using Deep Learning and TensorFlow

Makoto Koike has found a novel use for Google’s TensorFlow – helping sort and categorize cucumbers grown on his parents’ cucumber farm in Japan. The farm sorts cucumbers into nine different classes (based on variables that include color, texture, shape, and scratches); before TensorFlow, Makoto and his mother did all of the sorting by hand, a skill that is not easily transferred to part-time or seasonal employees. Though working with a relatively limited dataset, Makoto was able to train the system to identify cucumber categories with a real-world accuracy rate of 70 percent, and he sees even more potential in applying Google’s lower-cost Cloud Machine Learning platform for developers. As computing power continues to increase and training costs continue to come down, it’s likely Google will hear of many more non-ML engineers applying this technology to new projects.
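At its heart, the sorting task is a nine-way classification problem. As a minimal sketch of that idea – using a plain NumPy softmax classifier and synthetic feature data rather than Makoto’s actual TensorFlow image pipeline (the feature names here are illustrative stand-ins) – one might write:

```python
import numpy as np

# Hypothetical stand-in for cucumber sorting: nine classes, four numeric
# features (e.g. length, thickness, curvature, color score).
rng = np.random.default_rng(0)
n_classes, n_features = 9, 4

# Synthetic training data: each class clusters around its own feature centroid.
centroids = rng.normal(size=(n_classes, n_features)) * 3.0
labels = np.repeat(np.arange(n_classes), 50)
X = centroids[labels] + rng.normal(scale=0.3, size=(len(labels), n_features))

# Softmax regression trained by batch gradient descent.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[labels]
for _ in range(500):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = probs - onehot                          # cross-entropy gradient
    W -= 0.5 * X.T @ grad / len(X)
    b -= 0.5 * grad.mean(axis=0)

accuracy = ((X @ W + b).argmax(axis=1) == labels).mean()
```

A real deployment like Makoto’s replaces the hand-picked features with a convolutional network over cucumber photos, which is where TensorFlow earns its keep.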

(Read the full article on Google Cloud Platform)

3 – UC Berkeley Launches Center for Human-Compatible Artificial Intelligence

UC Berkeley is putting professor and AI expert Stuart Russell at the helm of its new Center for Human-Compatible Artificial Intelligence, launched this week, in tandem with co-principal investigators from within and outside the university, including Berkeley computer scientist Pieter Abbeel and Cornell University’s Joseph Halpern. The center will focus on developing AI systems that are beneficial to humans. Russell doesn’t worry about evil “sci-fi” robots, but he does express concern about how robots will learn human values, a complex arena (to say the least).

“AI systems must remain under human control, with suitable constraints on behavior, despite capabilities that may eventually exceed our own. This means we need cast-iron formal proofs, not just good intentions,” says Russell.

He has expressed interest in inverse reinforcement learning, in which robots learn values by observing human actions, a process that could also hold up a mirror to our own idealized behaviors and to cultural similarities and differences.
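The core idea of inverse reinforcement learning is to work backwards from observed choices to the reward that explains them. A toy sketch of this, assuming a hypothetical five-state line world and a perceptron-style preference update (not Russell’s actual formulation), might look like:

```python
import numpy as np

# Toy inverse RL: an "expert" in a 5-state line world always steps right.
# We recover reward weights w that explain those choices by nudging w
# whenever an avoided state scores at least as high as a chosen one.
n_states = 5
features = np.eye(n_states)            # one-hot feature vector per state

# Each demonstration records the state the expert moved to and the one it avoided.
demos = [(s + 1, max(s - 1, 0)) for s in range(n_states - 1)]

w = np.zeros(n_states)                 # learned reward weights
for _ in range(50):
    for chosen, avoided in demos:
        if w @ features[chosen] <= w @ features[avoided]:
            w += features[chosen] - features[avoided]

# After convergence, every demonstrated choice outranks its alternative.
consistent = all(w @ features[c] > w @ features[a] for c, a in demos)
```

Even this toy version shows why Russell calls the arena complex: the demonstrations only pin down local preferences, so many reward functions remain consistent with the same observed behavior.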

(Read the full article on Berkeley News)

4 – Drive.ai Wants to Give Self-Driving Cars More Brainpower, Personality

On Tuesday, Mountain View-based startup Drive.ai unveiled its self-driving car technology, which centers on vehicle “personality” and communication with humans – both inside and outside of the car. Drive.ai’s co-founder and president Carol Reiley stated,

“Vehicles of the future will communicate transparently with us, they’ll have personality, and they’ll make us feel welcome and safe, even without a human driver.”

The concept centers on a communication device, sold as part of a self-driving kit and mounted on the roof of a car, that displays text (and emoji) messages and sentiments like “safe to cross”. Drive.ai, which was launched out of Stanford’s AI Lab in 2015, also announced the addition of new board member Steve Girsky, formerly an executive and board member at GM. Girsky describes Drive.ai as having “the vision and expertise to lead this new era” of transformational self-driving technology.

(Read the full article on SiliconBeat)

5 – Artificial Intelligence Is Helping Doctors Find Breast Cancer Risk 30 Times Faster

Researchers from Houston Methodist conducted a recent study that found AI software can interpret mammogram results 30 times faster than human doctors, and with 99 percent accuracy. The study, published this week in the journal Cancer, used a limited set of mammogram and pathology reports from 500 patients, with the software also drawing on diagnostic features and other mammogram results. A task the AI completed in a few hours would have taken physicians hundreds (or even thousands) of hours. This emerging technology could not only save doctors’ time, but also help better assess breast cancer risk from suspicious mammograms and prevent unnecessary biopsies, which account for about 20 percent of the 1.6 million U.S. breast biopsies performed annually.

(Read the full article on Forbes and research article in Cancer)

Image credit: auditfutures.org