1 – Artificial Intelligence for Kids Is the Hot New Toy Sensation

Advancing artificial intelligence and speech recognition are rocking the toy industry, and doing so in high fashion with Mattel’s Hello Barbie. In collaboration with San Francisco-based ToyTalk, Mattel is scheduled to release its latest talking Barbie doll in November, just in time for the $6 billion holiday toy market. Children are especially susceptible to anthropomorphizing ‘smart’ technology. The article quotes Doris Bergen, a professor of educational psychology at Miami University in Ohio, who notes that it is very difficult for children to distinguish between what is real and what is not. The era of AI-driven toys seems destined to blur that line in ways not seen before.

(Read the full article on Mother Jones)

2 – The Human Big Data Computer Vs. Machines With ‘Contextual’ Artificial Intelligence

Human beings are (or so we perceive) at the top of the chain when it comes to assessing situations and forming contextual scenarios from ‘big data’. For example, if we find ourselves in a dark alley late at night, our minds immediately begin to formulate situational outcomes based on our perceived surroundings. Computers may be better at analyzing big data, but until now they have lacked this kind of contextual awareness. That is starting to change with further progress in contextual artificial intelligence, which combines software, hardware, networks, and service-based functions into something that almost functions like a set of computer senses. Businesses are starting to jump on board; Flybits, for example, recently introduced its cloud-based Context-as-a-Service solution.
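
To make the idea of ‘computer senses’ a bit more concrete, here is a minimal, hypothetical Python sketch of contextual fusion: it combines a few raw signals (time, location type, light level, nearby devices) into a single situational judgment. The signal names and thresholds are illustrative assumptions only, not Flybits’ actual Context-as-a-Service API.

```python
from dataclasses import dataclass

@dataclass
class ContextSnapshot:
    """A bundle of raw signals a device might collect (all fields hypothetical)."""
    hour: int            # local time, 0-23
    location_type: str   # e.g. "alley", "office", "home"
    ambient_lux: float   # light-sensor reading
    nearby_devices: int  # rough proxy for how busy the area is

def assess_situation(ctx: ContextSnapshot) -> str:
    """Fuse individual signals into one contextual judgment,
    loosely mirroring the 'dark alley late at night' example."""
    is_night = ctx.hour >= 22 or ctx.hour <= 5
    is_dark = ctx.ambient_lux < 10.0
    is_isolated = ctx.nearby_devices < 2

    if ctx.location_type == "alley" and is_night and is_dark and is_isolated:
        return "elevated-caution"
    if is_night and is_dark:
        return "low-light"
    return "routine"

print(assess_situation(ContextSnapshot(hour=23, location_type="alley",
                                       ambient_lux=2.5, nearby_devices=0)))
# -> elevated-caution
```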

(Read the full article on Forbes)

3 – Google’s Demis Hassabis – Misuse of Artificial Intelligence ‘Could Do Harm’

DeepMind co-founder Demis Hassabis gave an interview to BBC News and spoke about the potential ramifications and benefits of developing smarter machines. Like many experts in the field, Hassabis acknowledges that we are at the very beginning of artificial intelligence technology. AI unlocks huge and powerful opportunities, and it naturally carries an equally immense level of responsibility.

“I think artificial intelligence is like any powerful new technology…It has to be used responsibly. If it’s used irresponsibly it could do harm.”

Hassabis expressed the need for discussion of these responsibilities and of an array of ethical concerns. He described DeepMind’s active relationships with leading universities in the field, including MIT and Oxford, and said that Google is in the process of creating an ethics committee to examine the company’s project work. Nonetheless, Hassabis believes that because the technology, particularly deep learning, is so new, it may be too soon to discuss specific regulations. He advocates more empirical work so that we can better understand the anticipated and unanticipated effects and, in turn, make better-informed decisions about goal-based systems.

(Read the full article on BBC News)

4 – Now There’s an App for That – Recognizing Blindness as a Complication of Diabetes

The California HealthCare Foundation (CHCF) saw a need for better and earlier diagnosis of diabetic retinopathy, a long-term complication of diabetes, but lacked the technical skills to develop a solution. Fortunately, the foundation found an outlet for innovation through Kaggle, a site founded by Anthony Goldbloom that hosts competitions for data scientists. Five months after CHCF posted its project idea with a $100,000 prize attached, statistician Benjamin Graham had engineered a deep-learning algorithm that agrees with a doctor’s opinion 85 percent of the time. Using algorithms as part of the diagnostic process is an attractive option for early diagnosis, particularly in low-income communities where work hours and limited finances interfere with access to healthcare. As with any AI solution, there are accountability and regulatory issues to address. For now, the algorithm will be limited to determining whether a retinal photograph has been properly captured.
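
For readers curious what a deep-learning classifier for retinal photographs might look like structurally, below is a minimal sketch in Python using Keras. It is not Graham’s winning model; the architecture, the 256×256 input size, and the five severity grades are assumptions made purely for illustration.

```python
# A minimal sketch of a deep-learning image classifier for retinal photographs.
# NOT Benjamin Graham's winning Kaggle model; architecture, input size, and the
# five-grade output below are illustrative assumptions only.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_retinopathy_classifier(input_shape=(256, 256, 3), num_grades=5):
    """Small convolutional network mapping a retinal photo to a severity grade."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_grades, activation="softmax"),  # one probability per grade
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_retinopathy_classifier()
model.summary()
# Training would call model.fit(images, grades) on labeled retinal photographs.
```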

(Read the full article on The Economist)

5 – Artificial Intelligence Is Taking Computer Chess Beyond Brute Force

Historically, chess-playing computers have beaten their human opponents with ‘brute force’: running through every possible line of play until the program homes in on the best one. Matthew Lai, a master’s candidate at Imperial College London, has worked to turn this approach on its head by training neural networks to play more like human masters. Lai used the level of a FIDE international master as a training reference point for his new software, which he dubbed ‘Giraffe’. Unlike previous attempts to apply machine learning to games like chess, Lai does not hand-code “pattern recognizers” into his software; instead, the algorithm observes moves and assesses their strength automatically.
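
The contrast the article draws, exhaustive search over a hand-written evaluation versus a learned sense of position strength, can be sketched in a few lines of Python. The example below uses the third-party python-chess library; the piece values, search depth, and function names are illustrative assumptions and are not Lai’s actual Giraffe implementation.

```python
# Contrasting the two approaches: classic brute-force search with a hand-written
# material evaluation, versus swapping in a learned evaluation function (roughly
# the role Giraffe's trained network plays). Requires: pip install python-chess
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def handcrafted_eval(board: chess.Board) -> float:
    """Brute-force era evaluation: a simple material count from White's view."""
    score = 0.0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def negamax(board: chess.Board, depth: int, evaluate) -> float:
    """Search every legal move to a fixed depth; only the leaf evaluation
    differs between the 'brute force' and 'learned' versions."""
    if depth == 0 or board.is_game_over():
        sign = 1 if board.turn == chess.WHITE else -1
        return sign * evaluate(board)  # score from the side to move's perspective
    best = float("-inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1, evaluate))
        board.pop()
    return best

# A Giraffe-style engine would replace handcrafted_eval with a trained network's
# prediction of position strength, e.g. negamax(board, 2, learned_eval).
print(negamax(chess.Board(), 2, handcrafted_eval))
```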

(Read the full article on Popular Science)