As humans, we can often understand the mood, intention, and future actions of another person just by looking at them. We see their posture, their facial expression, and where their eyes are focused, and we get a decent sense of what they might do next. Teaching computers to read body language is a much harder problem to solve, but we are indeed making progress.

Our guest this week is Paul Kruszewski, a computer science PhD who has spent nearly 20 years focused on 3D modeling and artificial intelligence. Today, he’s CEO of Wrnch, a Montreal-based AI company focused on reading and understanding human body language.

Paul explains how advances in 3D modeling and computer vision have allowed researchers to get machines to “understand” the posture, movements, and intentions of human beings – and he also explores the future applications that this technology might have in security, retail, sports, and more.

Listen to the full interview below on Soundcloud:

itunes-podcast
stitcher-podcast
google-podcast
_______________

Guest: Paul Kruszewski, CEO at Wrnch

Expertise: Computer vision, 3D modeling, building technology companies, AI for video games

Brief recognition: Paul received both his Master’s and PhD in computer science from McGill University. In 2000 he founded AI.implant, an AI middleware used by video game companies, and sold the company to Engenuity. In 2007 he founded Grip Entertainment, an AI company for gaming applications, which he sold to Autodesk. Wrnch boasts Mark Cuban as one of its investors and advisors.

Big Idea

“Advanced computer vision technologies of the future will be driven by the need for more safety and more security”

Paul believes that many of the initial use cases of smart computer vision for body movement and body language will be driven by safety and security needs. He lists a number of potential early applications:

  • A camera attached to an Alexa-like home device that can detect if someone in the house has fallen, or is walking or acting strangely (this technology might allow adult children to better monitor their elderly parents)
  • Security cameras that not only detect movement or detect people, but detect the behavior of people (such as people stealing items from a store, or skulking quietly outside a house)
  • Cameras on autonomous cars that can read the body language or facial expressions of pedestrians or passengers (a car in the future should be able to detect if a pedestrian is looking at their phone while walking, because such people are more likely to cross the road at the wrong time)
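To make the fall-detection idea above concrete, here is a minimal sketch of the kind of downstream logic such a system might run. It assumes a pose estimator (not shown – wrnch’s actual engine and API are not represented here) has already produced 2D keypoints per video frame; the keypoint names, coordinates, and threshold below are purely illustrative.

```python
import math

# Illustrative sketch only: assumes some upstream pose estimator (not shown)
# outputs 2D keypoints per frame as {"name": (x, y)}, with y increasing
# downward as in image coordinates. Names and thresholds are hypothetical.

def torso_angle_from_vertical(keypoints):
    """Angle in degrees between the neck -> mid-hip segment and vertical."""
    nx, ny = keypoints["neck"]
    hx, hy = keypoints["mid_hip"]
    dx, dy = hx - nx, hy - ny
    return abs(math.degrees(math.atan2(dx, dy)))

def looks_fallen(keypoints, angle_threshold_deg=60.0):
    """Flag a possible fall when the torso is closer to horizontal than vertical."""
    return torso_angle_from_vertical(keypoints) > angle_threshold_deg

# Toy frames: a roughly vertical torso vs. a roughly horizontal one.
standing = {"neck": (100, 50), "mid_hip": (102, 150)}
fallen = {"neck": (50, 200), "mid_hip": (150, 210)}

print(looks_fallen(standing))  # False
print(looks_fallen(fallen))    # True
```

A real system would of course smooth over many frames and combine several cues (velocity of the fall, time spent on the ground) before alerting anyone, but the core idea – geometric rules or learned classifiers on top of estimated keypoints – is the same.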

(For readers with a more overt interest in physical security and infosecurity tech, check out our full article on artificial intelligence for security.)

Interview Highlights with Paul Kruszewski of Wrnch

Listed below are some of the main questions that were posed to Paul throughout the interview. Listeners can use the embedded podcast player (at the top of this post) to jump ahead to sections that they might be interested in:

  • (3:15) In simple language, how can computers be trained to understand body parts and posture?
  • (9:25) What are some of the computer vision applications where machines will need to understand human posture and intention? What are the business use cases?
  • (11:00) How could technology like this be used for in-home safety – like an extension of Alexa?
  • (15:30) What are some of the applications of this technology in the autonomous vehicle or transportation field?
  • (20:00) In the next 5-10 years, what will be the most important implications of this kind of “smart” computer vision for businesses and governments?

MARKET RESEARCH x INDUSTRY TRENDS

TechEmergence conducts direct interviews and consensus analysis with leading experts in machine learning and artificial intelligence. Stay ahead of the industry with charts, figures, and insights from our unparalleled network, including executives from Facebook, Google, Baidu, Yahoo!, MIT, Stanford and beyond: