1 – System Predicts 85 Percent of Cyber-Attacks Using Input from Human Experts
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), alongside machine-learning startup PatternEx, have created a new cybersecurity defense system that uses both unsupervised and supervised learning methods. AI2 merges artificial intelligence with analyst intuition: it first clusters suspicious data through unsupervised learning, then presents that data to human analysts, who identify the actual attacks. Those labels are fed back into the machine, and the system refines its accuracy over time. CSAIL research scientist Kalyan Veeramachaneni, one of AI2’s co-creators, described it this way:
“The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions. That human-machine interaction creates a beautiful, cascading effect.”
AI2 is more accurate and efficient (85%) than today’s systems, which tend to fall primarily into either the human-driven or the machine-learning bucket. Previous machine-learning systems rely on anomaly detection and too often trigger false positives. The new system could drive significant improvements in defending against fraud, service abuse, and account-takeover attacks.
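The loop described above can be sketched in a few lines. This is a toy illustration, not the AI2 implementation: the anomaly score, the simulated data, and the threshold rule are all stand-ins for whatever a real platform would use, and the "analyst" here is just a lookup of ground-truth labels.

```python
import random

random.seed(0)

# Hypothetical events: (feature_value, is_attack). Rare attacks score high.
events = [(random.gauss(0, 1), False) for _ in range(200)]
events += [(random.gauss(4, 1), True) for _ in range(10)]

def unsupervised_score(x, xs):
    # Stage 1 (unsupervised): distance from the mean, in standard deviations.
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    return abs(x - mean) / var ** 0.5

xs = [x for x, _ in events]
scored = sorted(events, key=lambda e: -unsupervised_score(e[0], xs))

# Stage 2 (analyst): a human labels only the top-k most anomalous events.
labeled = scored[:20]

# Stage 3 (supervised): learn a simple decision threshold from those labels.
attacks = [x for x, is_attack in labeled if is_attack]
benign = [x for x, is_attack in labeled if not is_attack]
threshold = (min(attacks) + max(benign)) / 2 if attacks and benign else float("inf")

flagged = [e for e in events if e[0] >= threshold]
precision = sum(1 for _, a in flagged if a) / max(len(flagged), 1)
print(f"threshold={threshold:.2f} flagged={len(flagged)} precision={precision:.2f}")
```

The point of the design is that the analyst never reviews the raw event stream, only a small ranked slice of it, and each round of labels tightens the supervised model for the next round.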
(Read the full article on MIT News)
2 – Microsoft and Google Want to Let Artificial Intelligence Loose on Our Most Private Data
Microsoft, Google, and other research teams are exploring ways to apply deep learning in information-sensitive areas, including the financial and medical sectors. While deep learning could be of tremendous value in solving problems in these areas, including diagnostics, the private information held by organizations is difficult to access due to legal and regulatory restrictions. Vitaly Shmatikov, a professor at Cornell Tech who studies privacy, and his colleague Reza Shokri are testing “privacy-preserving deep learning”, which could offer a way to analyze this data without actually sharing it.
Their research was partially funded by Google, which is pursuing similar ideas and is in talks with Shmatikov about its success in training deep learning algorithms on data, such as images, from smartphones without transferring any of it to the cloud. Microsoft’s research team is working on another related solution that it calls “CryptoNets”, deep learning software that can analyze encrypted data and output encrypted responses. This solution requires more computing power than is currently available on most mainstream computers, but Kristin Lauter, who leads Microsoft’s cryptography research group, believes the gap is small enough to be closed in the near future for initial use by organizations like hospitals, pharmacies, and financial firms.
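The common thread in these approaches is that raw records never leave the device that holds them. A minimal sketch of that idea, assuming nothing about the actual Google or Microsoft systems (the data, the "devices", and the mean-aggregation step are purely illustrative):

```python
# Each "device" computes a summary on its private data locally; only the
# summary, never a raw record, is sent to the server for aggregation.

def local_summary(private_records):
    # Computed on-device; only (mean, count) leaves the device.
    return sum(private_records) / len(private_records), len(private_records)

# Three hypothetical devices holding private data that is never uploaded.
devices = [[1.0, 2.0, 3.0], [10.0, 20.0], [4.0]]

summaries = [local_summary(d) for d in devices]

# The server combines summaries into a global estimate without ever seeing
# a single raw record.
total = sum(mean * n for mean, n in summaries)
count = sum(n for _, n in summaries)
global_mean = total / count
print(global_mean)  # 40 / 6 ≈ 6.67
```

Real systems share model parameters or gradients rather than a simple mean, and CryptoNets goes further by computing directly on encrypted inputs, but the trust boundary is the same: computation moves to the data instead of data moving to the computation.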
(Read the full article on MIT Technology Review)
3 – Outwitting Poachers with Artificial Intelligence
Researchers from the University of Southern California (USC) have developed an AI and game theory approach to help prevent poaching, illegal logging, and other related issues worldwide. The team, which was partly funded by the National Science Foundation (NSF) and the Army Research Office, worked in collaboration with researchers and conservationists from the U.S., Singapore, the Netherlands, and Malaysia. Milind Tambe, a USC professor and director of the Teamcore Research Group on Agents and Multiagent Systems, said about the system:
“This research is a step in demonstrating that AI can have a really significant positive impact on society and allow us to assist humanity in solving some of the major challenges we face.”
The team recently combined an older system, PAWS (created in 2013), with a new tool called CAPTURE (Comprehensive Anti-Poaching Tool with Temporal and Observation Uncertainty Reasoning), which predicts ‘attacking probability’ with increased accuracy. Researchers are in talks with wildlife authorities to implement the system in Uganda later this year. Tambe’s team will also present their most recent findings at the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016) in May.
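The game-theoretic core of this kind of system is simple to illustrate: patrol resources are randomized in proportion to predicted risk, so high-risk zones get covered most while no single night’s patrol is predictable to an observer. The sketch below is a toy, not PAWS or CAPTURE; the zones and probabilities are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical predicted attacking probabilities per patrol zone.
attack_prob = {"river": 0.5, "ridge": 0.3, "plain": 0.2}

def pick_patrol(probs):
    # Sample tonight's patrol zone in proportion to its predicted risk.
    zones, weights = zip(*probs.items())
    return random.choices(zones, weights=weights, k=1)[0]

# Over many nights, coverage tracks predicted risk, yet any single night
# remains unpredictable to a poacher watching the rangers.
counts = {zone: 0 for zone in attack_prob}
for _ in range(10_000):
    counts[pick_patrol(attack_prob)] += 1

print({zone: round(c / 10_000, 2) for zone, c in counts.items()})
```

A fixed patrol schedule would be learned and exploited; randomizing against the predicted attack distribution is what makes the game-theoretic approach robust.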
(Read the full article on National Science Foundation)
4 – Alphabet Q1 Misses, Google Shows Hiccups
Google released its first quarter 2016 financial results on Thursday, showing numbers that just missed predicted marks. The tech giant reported a net income of $5.248 billion ($6.02 per share). Revenue was up 17 percent from the previous year at $20.257 billion ($7.50 per share), but still fell short of the forecasted $20.38 billion ($7.97 per share). Alphabet showed $16.469 billion in revenue, just missing its target of $16.55 billion. Alphabet and Google’s CFO Ruth Porat showed little worry over the near misses, stating “we’re thoughtfully pursuing big bets and building exciting new technologies, in Google and our Other Bets, that position us well for long term growth.” The majority of Google’s Q1 sales came from enterprise cloud, software, and data management products. Most of Alphabet’s losses came from the ‘Other Bets’ category, which includes healthcare-focused initiatives and other ‘moonshot’ projects, with operating losses close to $802 million. While significant, the losses are less than the $3.5 billion lost in the same space last quarter. The company is banking on machine learning to help it continue to improve its services and products and make them profitable.
(Read the full article on ZDNet)
5 – Baidu Announces New Self-Driving Car Team in Silicon Valley; Plans to Grow to 100+ in 2016
Baidu’s Sunnyvale-based U.S. arm announced on Friday the assembly of a self-driving car research and development team, part of the Chinese company’s newly formed Autonomous Driving Unit (ADU). The ADU-US team, which is actively hiring, will include machine learning researchers and a wide range of hardware and software engineers, from robotics to onboard computers and sensors. Initial focus will be on developing planning, perception, control, and systems. Baidu’s autonomous vehicle strategy includes starting with small “autonomy-enabled” regions and designing autos that are clearly recognizable as autonomous.
(Read the full article on MarketWired)
Image credit: MIT News