Episode Summary: Market research and trends are important when discussing AI and business, but it’s also worthwhile to contemplate the ethical and social implications further down the line. How will countries deal with potential unemployment problems? How might countries collaborate to hedge against the risks that AI poses to the future of work and other economic facets? A relatively small group is helping people do just that: getting organizations and countries to think through how they could hedge against the grander risks inherent in a world powered by AI.
In this episode, we speak with Jerome Glenn, head of The Millennium Project, a global participatory think tank with 60 Nodes around the world that focuses on researching the organizational means, operational priorities, and financing structures necessary to address 15 Global Challenges. Glenn talks about how he gets governments around the world to bring their big industrial players and the public together to talk through possible scenarios 30, 40, even 50 years in the future, and about ways we might hedge against risks and make the most of the upsides of AI in a global economy. If you enjoyed listening to our recent podcast with OpenAI’s Ilya Sutskever on preparing for the future of AI, then you may find Glenn’s ideas and the mission of The Millennium Project to be an interesting and useful perspective on this issue as well.
Expertise: Global and business policy and research, artificial intelligence and emerging technologies
Brief Recognition: Jerome C. Glenn is the co-founder (1996) and CEO of The Millennium Project (on global futures research) and lead author, with Elizabeth Florescu and The Millennium Project Team, of its State of the Future reports for the past twenty years. He was the Washington, DC representative for the United Nations University as executive director of the American Council for the UNU from 1988 to 2007.
Glenn has over 40 years of futures research experience working for governments, international organizations, and private industry in a range of domains, including (but not limited to) science and technology policy, environmental security, defense, and decision support systems with the Committee for the Future, Hudson Institute, Future Options Room, and The Millennium Project. He has addressed or keynoted conferences for over 300 government departments, universities, NGOs, UN organizations, and corporations around the world on a variety of future-oriented topics.
Glenn has a BA in philosophy from American University, an MA in Teaching Social Science – Futuristics from Antioch Graduate School of Education (now Antioch University New England), and was a doctoral candidate in general futures research at the University of Massachusetts.
Current Affiliations: Co-founder and CEO of The Millennium Project
The following is a condensed version of the full audio interview, which is available in the above links on TechEmergence’s SoundCloud and iTunes stations.
(2:15) What are some of these scenarios that you’re exposing to nations to have them think through what AI could mean and be for these societies?
Jerome Glenn: We wrote three scenarios based on a global set of surveys and feedback systems around the world – we have 60 nodes, these are groups of individuals and institutions that are future-oriented: governments, universities, corporations, NGOs, etc. – we pulled all this information together and produced these three scenarios. The first one is “it’s complicated, a mixed bag” – you imagine everything that can change and make it more complex…we assume in all three scenarios, even the negative one, that artificial narrow intelligence is what we have today – we figure that by 2030 it’s a good bet that we get artificial general intelligence, and that means all bets are off on how fast learning occurs, how fast it’s implemented around the world with the connection to the IoT, and replicating intelligence simultaneously, which means if few countries are prepared for economic transition, we’ll hit a brick wall and an acceleration of unemployment – and fast, and that’s a bad picture…
…The first scenario – meaning that, like today, you have good decisions and you have bad decisions…you have some countries that prepared themselves and did okay, probably with some kind of guaranteed income to help the transition…some economies that don’t…you have a lot of refugees, including environmental refugees because they (countries) don’t pay attention to global warming properly, and so things are going to get serious by 2030, 2035, and by the time you get out to 2050 you have an extremely complex world, you’ve got enclaves of different kinds of people…but it sort of works out…there are recessions, wars happen, just as in the last 30, 40, 50 years we had a variety of wars, and depressions, and recessions…
The second scenario is where we don’t have good decisions and to some degree we don’t take AI seriously, and as a result the brick wall is hit in 2030; you have massive unemployment, governments can’t handle it, you have increasing fiefdoms of corporations taking over a lot of things, as well as organized crime…it fills the gap, by 2050 you have psychological despair in the world…
By 2030, if things work out nicely (scenario 3)…they (countries and organizations) consider how you can change the education system, so you can start to look for economic activities worldwide rather than just in your neighborhood…more and more people are self-employed in this world; this one we call “if humans were free.”
…We figure money is not enough to do it today, so people will be talking, and are talking now – Finland is going to do something on guaranteed income, you have some stuff going on in Brazil, in India – one of the things your listeners should know is that the research is pretty clear – you give people some basic income, they tend not to be lazy, because you’re not giving them enough to be rich, you’re giving them enough so that they’re not out in the street starving to death, but that gives people a basic security upon which you can start to explore who you are…
Each scenario is about 33 pages, there are lots of real details…but it implies an economic change in number 3, a self-actualizing economy, where people make a living out of self-actualizing…just as people made a living out of food, shelter, and clothing – basic security needs – but as these get increasingly run by synthetic biology, AI, analytics, 3D printing, etc., then these basic things are taken care of, so then we are freer to invent the future, and that is a potentially extraordinary future, because it’s reasonable to assume that most people by 2050 can be an augmented genius – you have access to all this stuff worldwide, your basic needs are guaranteed…then how much creativity could you create?
…What happens when many people in the world, millions of people, have that kind of freedom, and they’re technologically augmented geniuses – that’s an interesting world.
…Your senior executives around the world, they clearly get this, because they understand their corporations are going to be automating and using AI…but if they all riot and go out to the streets and have revolutions, then they can’t sell their product, so serious large-scale businesses, they usually get it, and they need help, because this is a social change, a cultural change, a new kind of economics, and businesses can’t do it by themselves…we need to have a real conversation…
(10:13) What is the darker scenario that you make people think through?
JG: On the AI side of the house, individuals acting alone will be able to make and deploy weapons of mass destruction, you can imagine this all the way from synthetic biology…to nanotech small micro armies that get deployed that people can’t even see, to new forms of information warfare to manipulate the Internet system so that people don’t trust the communication systems, which then more or less make people paranoid behaviorally, people don’t trust each other, all the bad values come out, there’s fiefdoms that get created – it’s really quite a negative scenario…
…and the purpose of doing scenarios is that you show cause-and-effect links going into the future so it’s a possible story…you can see how, if we don’t do certain things, it can spin out of control, because a lot of the support systems break down as well – so a lot of your plumbing, water, electricity…with the addition of the inability to trust information and communication systems that can be manipulated by people through both cyber warfare and information warfare
…we just had a workshop for NATO last month, and the focus was on this stuff; it’s interesting, one of the conclusions out of these talks – about the AI and work stuff and the AI and related terrorism stuff – turns out to be similar in the sense that we’re going to have to create a new social contract between governed and government, this isn’t something that gets fixed at one time…it’s a rolling discussion, changing your strategy and correcting…one of the things that was important in the NATO workshop is that the first line of defense against a single individual massively destructive (SIMAD) is the neighborhood, they’re the early warning system; now, how do we create a relationship between a neighborhood and the military and policing systems without turning the world into a Gestapo?
…That’s why we have to do this as a conversation…it’s complex, it’s going to take us years of conversation…
(17:24) How does this eventually make its way into the big-game conversation…in what way do you think this conversation becomes proliferated?
JG: The scenarios are sent, translated into different languages, to the 60 nodes around the world; they create a steering committee, then they do a workshop, they send me the results, we put it all together, and we send it back to them; that’s one part of the strategy. Another part is the International Labour Organization (ILO)…and we will work informally with them…mostly they’ve been looking at rights, i.e. if everyone’s going to be in the gig economy, how are the labor rights going to be handled…so now we’re saying, they might not even have a job, let alone rights…we’re starting with systems around the world, some will engage the UN system as well, so it’s beginning to move…
(22:06) At what point in your own vision for your own project…do you begin pooling the public’s perspective?
JG: That’s the purpose of these nodes that we have – the nodes hold conferences, write articles in newspapers, have workshops, they do government summer retreat training sessions…think of the nodes like an input-output device, we call it global-local, they’re the ones that engage more of the public…we’re also in the process of creating a collective intelligence system on the future of the world that people will be able to subscribe to, and they can actually put in their two cents – every single piece of information has a little icon for comment, so we can see their views in an organized, structured way.
(23:17) When you speak about these nodes…is this kind of like a meetup.com, or a very private email newsletter that goes out among them, how does this trickle out to people?
JG: We write a memorandum of understanding with some institution; it can be a ministry of science in a country, or a university, it can even be a consulting company, as long as they have a mix of players in the game; in some places they meet once a month…and they interact among themselves…
…We call it sort of a trans-institution; it doesn’t exist yet in legal precedent…but the idea is that it spans all of these institutional categories, with no majority from any one, so that the people who do the work from those categories are not a majority of any one, the money comes from all categories, etc. – so as a result you can act through government, through NGOs, through universities, and take the best of each institutional structure without dealing with the worst parts of it; it’s a different way of doing implementation.