Video games and artificial intelligence have long been closely associated. Some games, like Halo and Mass Effect, have featured fictional AI more advanced than any program in reality. And while the actual AI used in most games is rigid and falls short of anything we'd classify as genuinely intelligent learning, the video game format has been used for years to test AI systems through simple software. Rather than training in games like League of Legends or Call of Duty, researchers applied these systems to games like checkers and poker to study how the AI would behave.
As IBM’s Deep Blue demonstrated, chess was a conquerable game for an AI – the computer defeated reigning world chess champion Garry Kasparov in 1997. Likewise, anyone who’s played against the “level: hard” AI in simulated poker or blackjack knows that these systems are vastly superior to casual human players.
But AI has left the arcade and casino behind for more complex, real-time strategy (RTS) games like StarCraft: Brood War, which now offer an arena for AI to do battle. According to the Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), strategy games like StarCraft are still dominated by human players. Within the game, “hidden information, vast state and action spaces, and the requirement to act quickly” mean that an AI must be multidimensional in order to compete. “The best human players still have the upper hand in RTS games,” AIIDE says on its website, “but in the years to come this will likely change, thanks to competitions like this one.”
To find a winner among the AI entrants, organizers pitted 22 programs against each other, continuously, for two weeks. Nearly 40,000 games were played before the top three programs stood out. Those three then took on one of the world’s top (human) StarCraft players – and although he beat the AI handily, the competition illustrated AI’s growing competence at RTS games.
The best AI of the bunch didn’t win a single game against the best humans, but researchers at GoodAI, based in the Czech Republic and the United Kingdom, are working to change that. By teaching programs to learn from basic stimuli, the company hopes they’ll in turn learn to perform more complex tasks that require more than a single step. Once these basic actions have been mastered, command of more complex behaviors should emerge. At least that’s the hope.
Brain Simulator is open-source software released by GoodAI that can be used to train artificial neural networks to interact with game stimuli. Based on the outcomes of these interactions, the program can learn to perform simple tasks – essentially “playing” simple games like Breakout. The trained networks can then be combined to perform even more complicated tasks.
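Brain Simulator’s internals aren’t shown here, but the core idea – learning a behavior purely from the outcomes of interactions, with no rules programmed in – can be sketched with a toy reinforcement-learning loop. The six-cell “game,” the reward values, and the hyperparameters below are all illustrative assumptions, not GoodAI’s actual code:

```python
import random

# Toy stand-in for a game: a 1-D track of 6 cells. The agent starts at
# cell 0 and earns a reward only upon reaching cell 5. Nothing about
# "how to win" is programmed in; the agent learns it from outcomes.
N_STATES = 6
ACTIONS = [1, -1]            # move right or left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Advance the toy game one tick; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True    # reached the goal
    return nxt, 0.0, False

random.seed(0)
for episode in range(300):
    state = 0
    for _ in range(100):         # cap episode length
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Learn purely from the outcome of this interaction.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

# After training, the greedy policy walks straight to the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1, 1] once training has converged
```

The same principle scales up: swap the six-cell track for Breakout’s screen pixels and the Q-table for a neural network, and you have the blank-slate setup both GoodAI and DeepMind pursue.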
But these games are just the beginning. Similar to Google’s DeepMind project (see video above), GoodAI’s goal is to apply the AI developed through Brain Simulator to tasks beyond gaming. Neither project programs its AI with prior understanding of the game – the systems must learn from scratch as they go. Though that blank slate makes it initially harder for the AI to complete its tasks, the challenge may one day facilitate the development of programs that can compete with top-tier gamers.
Image credit: Microsoft Studios