1 – Structure-Mapping Engine Enables Computers to Reason and Learn Like Humans, Including Solving Moral Dilemmas
A Northwestern University research team led by Ken Forbus and backed by DARPA is building a structure-mapping engine (SME) that may allow computers to make more complex decisions based on fewer data sets. The model builds on the use of analogies in problem solving, a cognitive tactic that humans use constantly, from reasoning about visual representations to solving moral conundrums. An SME-based system differs from a deep-learning system in that it can learn more quickly from just a handful of rich stories and examples, analyzing new situations against lessons drawn from those logged stories. Analogy-based AI could be particularly useful in the social sciences, and the Educational Testing Service has already experimented with it in solving Advanced Placement physics problems.
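The relational matching at the heart of structure mapping can be illustrated with a toy sketch (this is not Forbus's SME algorithm; the tuple representation, the `align` function, and the solar-system/atom example are all illustrative assumptions): facts are encoded as relation tuples, and an analogy is found by aligning facts in two descriptions that share a relation.

```python
# Toy sketch of analogy by relational alignment. NOT the actual SME
# algorithm -- just an illustration of matching shared relational
# structure between a familiar "base" domain and a new "target" domain.

def align(base, target):
    """Return entity correspondences implied by shared relations."""
    mapping = {}
    for rel_b, a1, a2 in base:
        for rel_t, b1, b2 in target:
            if rel_b == rel_t:  # same relation -> align its arguments
                mapping.setdefault(a1, b1)
                mapping.setdefault(a2, b2)
    return mapping

# The classic solar-system / atom analogy, encoded as relational facts.
solar = [("attracts", "sun", "planet"),
         ("revolves_around", "planet", "sun")]
atom = [("attracts", "nucleus", "electron"),
        ("revolves_around", "electron", "nucleus")]
```

Here `align(solar, atom)` maps the sun to the nucleus and the planet to the electron, a small example of transferring lessons from one logged story to a structurally similar new situation.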
(Read the full article at Phys.org)
2 – Teaching Machines to Predict the Future
A research team from MIT’s CSAIL has developed an improved deep-learning system that predicts some human actions from videos. Making predictions is a particularly subtle and complex task even with visual cues, and humans do so in part by making inferences based on many thousands of experiences. The system was trained frame by frame on television shows to make one of four categorized predictions of human action – kiss, hug, handshake, or high five. While the algorithm predicts the correct action only about 43 percent of the time, that is an improvement over the previous best system’s rate of 36 percent; it’s also worth noting that, with an average score of 71 percent, even humans are not experts at this task. Continuing to train the system and raise its prediction rate could contribute to technology used in a range of applications, from robots that interact with people to security systems that alert an emergency contact when someone is about to fall or break into a building.
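The final step of such a system – scoring pooled video features against the four action categories – can be sketched as follows (a minimal toy, not MIT's model: the real system learns deep visual features from video, whereas here the per-frame feature vectors and the weight matrix are hand-set stand-ins):

```python
import math

# Toy four-way action "predictor" (kiss / hug / handshake / high five).
ACTIONS = ["kiss", "hug", "handshake", "high five"]

def softmax(scores):
    """Convert raw scores into probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(frame_features, weights):
    # Pool features by averaging across frames (a stand-in for the
    # learned visual representation), score each of the four actions,
    # then return the most probable one.
    n = len(frame_features)
    avg = [sum(f[i] for f in frame_features) / n
           for i in range(len(frame_features[0]))]
    scores = [sum(w * x for w, x in zip(row, avg)) for row in weights]
    probs = softmax(scores)
    return ACTIONS[probs.index(max(probs))]
```

Training would adjust `weights` so that, across thousands of clips, the highest-probability action matches what actually happens in the next frames.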
(Read the full article at MIT News)
3 – Increasing our (Twitter’s) Investment in Machine Learning
In a Monday blog post, Twitter CEO Jack Dorsey announced the company’s acquisition of Magic Pony Technology, a London-based startup specializing in machine learning for visual processing. The move follows several machine-learning acquisitions Twitter has made over the past few years, including the companies Madbits and Whetlab, as it works to develop its applications in this area. The Magic Pony team will join Twitter Cortex, a team of engineers, data scientists, and researchers building a product that incorporates a better understanding of imagery and “in which people can easily find new experiences to share and participate in,” said Dorsey.
(Read the full announcement on Twitter’s Blog)
4 – Musk-backed Nonprofit AI Research Company Developing Household Robot
The nonprofit OpenAI announced on Monday its goal of building a robot to perform basic household chores. In its blog post, OpenAI briefly described the current project as an “off-the-shelf” robot (not manufactured by the nonprofit) that makes use of its research team’s learning algorithms. One of the farther-reaching aims is to improve learning algorithms overall and eventually develop a more “general purpose” robot. No timeframe was given, and the goal was one of an announced set that includes building an agent with natural language understanding and solving a wide variety of games (inspired by DeepMind) with a single agent. OpenAI notes that all of its goals share core technology, and that progress on one is likely to drive progress on all projects currently under way.
5 – Facebook Open-Sources Torchnet to Accelerate A.I. Research
Facebook published a paper and a blog announcement on Thursday about Torchnet, its new open-source software designed to streamline deep-learning research. The company chose to build the new framework on top of its existing open-source library Torch. Laurens van der Maaten, a research scientist in Facebook’s Artificial Intelligence Research (FAIR) lab, said:
“It makes it really easy to, for instance, completely hide the costs for I/O [input/output], which is something that a lot of people need if you want to train a practical large-scale deep learning system.”
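The I/O-hiding idea van der Maaten describes can be sketched with a generic background-thread prefetcher (Torchnet itself is Lua/Torch, not Python; this is a simplified, framework-agnostic illustration of the pattern, and the `prefetch` helper is an assumption, not a Torchnet API):

```python
import threading
import queue

def prefetch(batches, buffer_size=2):
    """Wrap a batch iterator so loading runs in a background thread,
    overlapping I/O with model computation instead of stalling on it."""
    q = queue.Queue(maxsize=buffer_size)
    done = object()  # sentinel marking the end of the stream

    def worker():
        for b in batches:
            q.put(b)      # blocks when the buffer is full
        q.put(done)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is done:
            break
        yield item
```

While the training loop consumes one batch from `prefetch(...)`, the worker thread is already reading the next ones, which is the kind of cost a framework-level abstraction can hide from the user entirely.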
Facebook’s decision to build on top of existing technology differs from the approach of other major players in the industry, including Amazon, Google, and Microsoft, all three of which have recently introduced brand-new deep-learning frameworks. While Torchnet is currently limited to Torch, van der Maaten indicated that this may not always be the case, as it already has aspects that could be implemented in other frameworks such as Caffe and Google’s TensorFlow. The Torchnet paper was presented by van der Maaten and colleagues on Thursday at the 2016 International Conference on Machine Learning in New York.
(Read the full article on VentureBeat)