If it looks like a duck and quacks like a duck, it’s probably a duck. The complexity of human learning is contained in that adage. But as the development of artificial intelligence has shown, the acts of recognition and learning are easier said than done.
In fact, one of the field’s biggest hurdles has been the challenge of replicating the process of human learning such that an AI system can recognize an object (e.g. a duck) from its features (e.g. its appearance and sound) or, conversely, recognize features (e.g. yellow bills, white feathers, and quacks) as associated with a particular object. It’s an even bigger task to program that machine to recognize traits and features in one object and apply them to other objects that are similar in style.
So, for example, it’s intuitive for a human to see a duck’s feathers and see it fly, see its webbed feet and see it swim, and from these observations learn that other animals with feathers and webbed feet are similarly adept at flying and swimming. However, this intuitive task requires in-depth programming. In order for a machine to recognize, understand, and learn like a human, each feature of the object must be detailed as a mathematical formula.
Using networks of mathematical equations to recognize and analyze patterns within huge amounts of data, deep learning programs are currently capable of a degree of human learning. But these programs must observe tens to millions of examples in order to recognize and learn – a feat that takes time, energy, and troves of information.
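To get a feel for this appetite for data, here is a toy sketch – not any particular paper’s model, just a minimal perceptron on made-up, hypothetical two-number features – showing how a simple learner only settles on a reliable rule after seeing hundreds of labeled examples:

```python
# Toy sketch (not any specific system described above): a tiny perceptron
# that separates two hypothetical feature clusters only after many
# labeled examples, mirroring deep learning's appetite for data.
import random

random.seed(0)

def make_example(label):
    # Hypothetical 2-feature "objects": class 1 clusters near (1, 1),
    # class 0 near (0, 0), with a little noise.
    center = (1.0, 1.0) if label == 1 else (0.0, 0.0)
    return [c + random.uniform(-0.3, 0.3) for c in center], label

# Hundreds of examples before the weights settle.
train = [make_example(random.choice([0, 1])) for _ in range(500)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for features, label in train:
    pred = 1 if w[0] * features[0] + w[1] * features[1] + b > 0 else 0
    err = label - pred  # update weights only on mistakes
    w = [wi + lr * err * xi for wi, xi in zip(w, features)]
    b += lr * err

test = [make_example(random.choice([0, 1])) for _ in range(100)]
correct = sum(
    1 for x, y in test
    if (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
)
print(correct, "of 100 test examples classified correctly")
```

The point of the sketch is the 500-example training loop: with only a handful of examples instead, the learned boundary would be unreliable.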
This commitment may someday be minimized thanks to the work of scientists Brenden Lake, Ruslan Salakhutdinov, and Joshua Tenenbaum, whose research, published in Science, purports to have programmed a human-level of learning into an algorithm. Through what they call Bayesian Program Learning, the researchers claim to have reduced the tens to millions of examples needed by deep learning analysis to just one example. In other words, whereas a deep learning system would require a handful or more images of ducks in order to recognize a duck – or recognize the avian traits of ducks in other birds – a program equipped with Bayesian Program Learning would need just a single image of a duck to do the same.
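To make the one-shot setting concrete, here is a deliberately simplified illustration. This is emphatically not the authors’ Bayesian Program Learning, which induces generative programs; it only shows what “learning from one example” means: a single labeled exemplar per class, with invented, hypothetical feature numbers, is enough for a similarity-based guess.

```python
# Toy illustration of the one-shot setting only -- NOT the authors'
# Bayesian Program Learning. One labeled example per class, then
# classify a new object by whichever exemplar it most resembles.
import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def one_shot_classify(query, exemplars):
    # `exemplars` maps each class name to its single labeled example.
    return min(exemplars, key=lambda label: distance(query, exemplars[label]))

# Hypothetical hand-picked features, e.g. (bill length, wing span, quack pitch).
exemplars = {
    "duck": [1.0, 0.9, 0.8],
    "swan": [1.4, 1.6, 0.1],
}
print(one_shot_classify([1.1, 1.0, 0.7], exemplars))  # prints "duck"
```

The gap between this sketch and the real system is exactly where the research lies: BPL builds a rich model of how an example is generated, rather than comparing raw feature numbers.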
The results of this research are far from that level of complexity, though, and don’t spell the end of deep learning, nor necessarily the beginning of a new norm. Lake et al. taught their algorithm to recognize and recreate relatively simple letters from ancient alphabets, as opposed to complicated objects like the duck discussed above. Still, the algorithm performed remarkably well.
When Amazon’s Mechanical Turk users were asked to distinguish letters recreated by the machine from those recreated by human test subjects, the workers chose the correct answer just 52 percent of the time – barely better than the 50 percent expected from guessing at random, meaning the machine’s output was nearly indistinguishable from a human’s. The success of this type of Turing test suggests that there may soon be a paradigm shift in the way scientists approach machine learning.