Introduction: In this episode of AI in Industry, we explore how artificial intelligence can be used to manipulate human behavior – in gaming and in business. We explore how game designers use psychology and machine learning to drive their desired outcomes while leaving users to “feel” in control.

Dr. Charles Isbell teaches machine learning at Georgia Tech. He explores the manipulative elements of game design and how some of the same AI approaches are likely being used at tech giants like Amazon and Facebook. In this episode, you’ll learn how businesses leverage the “illusion of choice” with subtly influential AI techniques. Charles also helps us understand which businesses will be best positioned to use AI to guide user behavior in the years ahead.


Brief Recognition: Dr. Charles Isbell is Senior Associate Dean at Georgia Institute of Technology, where he teaches machine learning. He received his PhD from MIT in 1998 and then spent four years as a Principal Research Scientist at AT&T Labs. His academic website describes his research focus: “My fundamental research goal is to understand how to build autonomous agents that must live and interact with large numbers of other intelligent agents, some of whom may be human.”

Current Affiliations: Senior Associate Dean at Georgia Institute of Technology

Expertise: Statistical machine learning, artificial intelligence

Big Idea

Dr. Isbell believes that the most competitive businesses will be the ones that create an ecosystem in which customers engage. Ecosystems (like the eCommerce ecosystem of Amazon, or the search and online tool ecosystem of Google) allow the companies of the future to continuously learn from user behavior, gradually delivering more and more on the user’s intent and learning to bend user behavior in ways that benefit the company.

Charles details three unique examples of AI “influence” in video games (we’ve covered AI applications in gaming in previous articles, for readers interested in that topic specifically). The examples below also work well as inspiration for business applications:

1 – Suggest the Path You Want the Customer to Take

Dr. Isbell gives an example of a player coming across three doors in a video game. The game designer wants the player to go through door number one, so Dr. Isbell suggests programming a scream to come from behind that door. The player will feel curious enough to enter door number one rather than the other doors, while also feeling as though it was their own curiosity – not the designer’s suggestion – that led them to enter it.

One way to implement a similar mechanism in digital marketing would be for an AI recommendation system to advertise a single product at a given price point, a bundle containing two of that product at a higher price point, and a bundle containing three of that product at an even higher price point. The marketer could add a label to the first bundle that says “most popular.” In this scenario, there are three doors from which a customer can select (an AI could be given a wide number of “incentives” of this kind to experiment with).

The “most popular” label would act as the scream behind door number one in Dr. Isbell’s gaming example. Although there’s a social proof element to our digital marketing example, the essence of Dr. Isbell’s point stands: The customer may be more likely to select the first bundle than the singular product or the second bundle, and they may feel as though they came to that decision on their own.
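As a rough sketch of how an AI system might “experiment” with incentives of this kind, the snippet below implements a simple epsilon-greedy multi-armed bandit over candidate bundle labels. The labels, conversion rates, and reward signal are all hypothetical assumptions for illustration – nothing here comes from the interview itself.

```python
import random

random.seed(42)  # for reproducibility of the simulation


class EpsilonGreedyBandit:
    """Epsilon-greedy bandit over candidate 'incentive' labels.

    Each arm is a label a marketer might attach to a bundle;
    the reward is 1 if the shopper converts, else 0.
    """

    def __init__(self, arms, epsilon=0.1):
        self.arms = arms
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}  # running mean reward per arm

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)                    # explore
        return max(self.arms, key=lambda a: self.values[a])    # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        # Incremental update of the mean reward estimate
        self.values[arm] += (reward - self.values[arm]) / n


labels = ["most popular", "best value", "limited stock", "no label"]
bandit = EpsilonGreedyBandit(labels)

# Simulated shoppers with made-up conversion rates per label.
true_rates = {"most popular": 0.12, "best value": 0.09,
              "limited stock": 0.10, "no label": 0.05}

for _ in range(5000):
    label = bandit.choose()
    reward = 1 if random.random() < true_rates[label] else 0
    bandit.update(label, reward)

best = max(labels, key=lambda a: bandit.values[a])
```

Over many simulated shoppers, the bandit concentrates traffic on whichever label converts best – the system “learns” which scream behind which door works, without anyone hand-picking the winner.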

2 – Provide the Customer with the Illusion of Choice

In Dr. Isbell’s second example involving the player’s encounter with a choice of three doors, he explains how sometimes when he’s designing games he’ll create a system in which no matter which door the player chooses, he or she will reach the same destination. If the player chooses to go through door number two instead of door number one, he or she will be redirected to the destination behind door number one.

With this, the player may feel that in choosing between the three doors they are in control of their destination, but in fact they are not – and the building map is simply reconfigured in a new way to ensure that the selected door leads to the desired first destination. This gives the illusion of choice – so that the player doesn’t feel like he or she is “on rails” (i.e. in a situation where they are being forced down a specific story path).

This effect can be leveraged in digital marketing. A customer might be able to click several advertisements or banners on the same page that all target different potential aspects of the customer, but those advertisements or banners may all link to the same long form sales page. The customer may feel as though they clicked on the ad or banner most relevant to them, but in reality they were going to reach that sales page if they clicked on any of them.

Companies like Persado are already using AI to adjust marketing copy on the fly – and in the future we can imagine advertisements programmatically generated per user. As in the example above, these ads might convey different or unique benefits or features, but ultimately lead the user to the same next conversion point (such as a product purchase, an upsell, an email opt-in, etc.).
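To make the “many ads, one destination” pattern concrete, here is a minimal sketch. The user segments, headlines, and URL are hypothetical placeholders for illustration, not a description of Persado’s actual system.

```python
# Single shared destination for every ad variant (hypothetical URL).
LANDING_PAGE = "https://example.com/product"

# Per-segment headlines: different appeals, same door behind each one.
AD_VARIANTS = {
    "price_sensitive": "Save 20% on the gear the pros use",
    "status_seeker": "Join the players at the top of the leaderboard",
    "convenience": "One click, delivered tomorrow",
}


def build_ad(user_segment):
    """Return (headline, url) for a user; every segment shares the URL."""
    headline = AD_VARIANTS.get(
        user_segment, "See what everyone is talking about")
    return headline, LANDING_PAGE


# Three different users see three different headlines,
# but all of them land on the same sales page.
for segment in ["price_sensitive", "status_seeker", "unknown"]:
    headline, url = build_ad(segment)
    print(f"{segment}: {headline!r} -> {url}")
```

The customer feels they clicked the ad most relevant to them; in reality, every door leads to the same room.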

3 – Use Scarcity

Dr. Isbell gives an example of one technique that is likely familiar to many business owners: scarcity. He suggests that if something is scarce, the customer will decide for themselves that it is valuable. This, he says, is one of the best ways to ensure that the customer feels in control of their engagement with you. It’s possible that AI will be able to use scarcity as one of the many tools in its repertoire when creating advertisements and copy.

Turning Insight Into Action

Business owners may want to try implementing the three influence techniques listed above. They could offer bundles of varying prices on their next sales page and subtly indicate which one the customer should purchase without using the word “suggest” or “recommend.” To provide the illusion of choice, they could run several advertisements or banners with varying content on the same page that all link to the same landing page. Or it may be time to use scarcity to drive higher sales.

(Readers with a deeper interest in the ethical concerns around AI and behavior manipulation may be interested in our recent article in partnership with the IEEE, called “The Ethics of Artificial Intelligence for Business Leaders – Should Anyone Care?“)

Interview Highlights with Dr. Charles Isbell of Georgia Tech

Dan (2:18): Talk about how machine learning and reinforcement learning play a role in getting people to do what you want without making them aware of it.

CI: What we get out of machine learning and AI is data. What one wants to figure out is the minimum amount of data that one needs to know about someone in order to be able to predict how they’re going to behave, both in the short term and long term. Where influence plays into that is that one wants to use techniques that are well understood about people in order to get them to not only take a particular action, but to adopt the goals one has for them; once they do that, they will do whatever one needs them to do to accomplish those goals.

Dan (3:57): What does this look like in gaming?

CI: There are lots of good examples of positive goal adoption. For example, quitting smoking. Once people believe they need to do this, they will often do what they need to do. In games…the goal would be to get them to have a great experience. Let’s take a murder mystery. The player is going to play through this world and figure out who committed a murder. If I want to give the player a sense of control, I have to give them the ability to explore. But if I do that, we might not have a murder mystery.

In a game like that, where I’m trying to get the player to have a long term investment in the experience and get them to go down certain paths, the goal is to get them to…try to get to this quest in the game or build a certain amount of points that will get them to the end [when they solve the murder mystery]. If I can just get the player to adopt the idea that it makes sense for them to learn this skill or read this, then that’s the kind of goal adoption we’re talking about. It’s a means to the end. The means to the end is spending more time learning something.

Dan (6:20): What are the subtle nuances used in gaming ecosystems to get players to take the next step?

CI: There’s three ways you can do it. I could lock every door except the door I want the player to go down or not let the player jump off a cliff. That’s called being “on rails,” and it becomes obvious after a while. The second thing I could do is to change the environment a little to get the player to do what I want them to do.

For example, if I want the player to go into door number one, I’ll have someone scream behind the door. The player may ignore that, but they’ll probably be curious and want to go through that door. If the player does go through door number two, sometimes I’ll rearrange the game so that door number two is door number one. The third thing is to get the player to believe that a particular thing I want them to do is important for their very own reasons.

We know from psychology and marketing about a thing called scarcity. There’s something I want the player to have, and they don’t want it until it’s going away. If I make something scarce, the player will decide it’s valuable, and they’ll decide they came up with the idea of it being valuable on their own. This approach has the best likelihood of getting people to do the thing you want them to do in a way that makes them feel like it’s in their control.

Dan (10:12): How does reinforcement learning and machine learning play into psychology marketing tools like scarcity?

CI: The first trick is to remember that machine learning is about data. It’s not about the next thing we want someone to do. What we care about is the thing three weeks or a month from now. The tools we normally use for machine learning are about what’s the next thing, but what we care about is what’s the thing that will get someone to be part of our ecosystem. Reinforcement learning is about delayed reward. Games have this structure where players are not just doing one thing and being done; they’re doing many things that will eventually lead them somewhere.

The people who are going to win in the immediate and long term are the ones who figure out how to get customers to move just a little in a certain direction. The more data one has, the more they’re going to be able to do with it.

Dan (16:32): So the better and newer ideas may be in the data.

CI: Actually, I’m not sure if I believe that. The problem with data is that it gets one to build a model and then one to believe that model and keep believing that model. In other words, the idea of taking that model and figuring out what’s the next new thing is going to be difficult because all of the data is going to tell us we should do this old thing. It’s difficult to find the difference between a new trend before someone else figures it out and something that will fail.

One of the reasons it’s difficult to do is that we pretend data is objective, but data is not objective. The data we choose to look at is subjective. We decide to filter something out. Maybe [what we decided to filter out] is the wrong way we should be looking at people. We could potentially miss the grouping of the world that would get us a hit on what the next thing is. Maybe in the future we’ll see that the cleverness lies in the people who figure out what group of data they should be looking at.

Here’s an example. I’m going to keep track of every single atom in a person’s body: What it’s doing, what direction it’s going. That clearly doesn’t tell me anything interesting, even if it’s all the data in the world. The more data I’m willing to entertain, the harder it’s going to be to figure out what’s important.

Dan (20:11): Any closing thoughts on where you think the domain of influence will have the greatest impact?

CI: Once we get to the point of how to influence people, we’re no longer just talking about numbers. We’re talking about culture and social organizations. Now we’re making social decisions that in the long term may or may not be the right ones to do for us to win.

If we take the examples of games seriously, we’ll realize games are a metaphor for ecosystems people want to spend time in. The biggest opportunities in the influence domain are for businesses that create an ecosystem where people feel they are a part of it and know why they want to be a part of it. Brand, for example. It seems to me that we’re moving toward a place where everyone is in that business whether they want to be or not.


Header image credit: The Register (UK)
