Episode Summary: Over the last decade, many first-world militaries have developed, and in some cases deployed, autonomous “killer” robots. Some proponents believe that such robots will save human lives, but others believe that sparking an arms race of this kind would yield long-term harms that outweigh any good. University of Sheffield’s Dr. Noel Sharkey stands by the latter argument.
As co-founder of the International Committee for Robot Arms Control, he has spent a good part of the last decade working to secure an international ban on such robots. In this episode, he speaks about developments in the domain of autonomous killer robots, as well as how groups of global leaders might come together to convince nations and other global policy platforms to adhere to such an agreement for the benefit of all humankind.
Guest: Dr. Noel Sharkey
Expertise: Robotics and Emerging Technologies
Recognition in Brief: Noel has moved freely across academic disciplines, lecturing in departments of engineering, philosophy, psychology, cognitive science, linguistics, artificial intelligence, computer science, robotics, ethics, and law. He holds a Doctorate in Experimental Psychology (Exeter), an honorary Doctorate of Science (UU), and an honorary Doctorate in Information Science (Skövde). He is a Chartered Electrical Engineer and a Chartered IT Professional.
Current Affiliations: University of Sheffield; Co-director of The Foundation for Responsible Robotics; Chair and Co-founder of International Committee for Robot Arms Control; Director for Centre for Policy on Emerging Technologies (Europe); Founding Editor of the journal Connection Science; Board member of the journals Robotics and Autonomous Systems, Artificial Intelligence Review, and Journal of Behavioral Robotics; Fellow of the Institute of Engineering and Technology, the British Computer Society, the Royal Institute of Navigation, the Royal Society of Arts; member of both the Experimental Psychology Society and Equity (the actors union)
The State of Autonomous Robot Weapons
We might say that “progress” in the field of autonomous killer robots is an arguable term (perhaps “advancement” is more contextually neutral). According to Dr. Noel Sharkey, automated weapon systems have advanced dramatically over the last 15 years, and there is a plethora of examples from countries all over the world. In the U.S., the Phalanx system mounted on most U.S. Navy ships can be switched to autonomous mode by a commander in the event of a swarm attack; in that mode, the weapon is supervised, but humans give no input unless they hit the off switch. In Germany, the Mantis Air Defense Protection System is in place to detect and shoot down projectiles near any base it protects.
Noel notes that there has been a leap in recent years from fixed defenses to mobile weapons, with the U.S. leading the pack (for the moment). One such development is the X-47B, a small fighter-jet-sized aircraft with no room for a pilot. The X-47B is currently in advanced testing, including taking off from and landing on aircraft carriers, and it has about 10 times the reach of an F-35, says Sharkey. “As a roboticist, I look at this in awe; it’s an incredible piece of technology, but its purpose is not so good,” he says. The idea would be to pull fleets in the direct line of fire back and send such weapons into battle zones in their place.
An on-the-ground example of a mobile unmanned vehicle is the Crusher. Developed by Carnegie Mellon in response to the DARPA Grand Challenge, this 7.5-ton truck with a machine gun earned its nickname from its ability to crush Cadillacs. The United Kingdom has the Taranis, which was built to fly over “white area” and look for targets (though at present it is not weaponized). China has built its own version of an unmanned combat aircraft and has a handful of other such aircraft in development. Russia’s developments are less transparent, but Sharkey says there have been reports in Russian papers suggesting the government is using autonomous robot soldiers to patrol its ballistic missile bases. South Korea’s SGR-1 robots have been set up along the border with North Korea. These robots, essentially machine guns with sensors and a two-mile target range, can be switched to fully autonomous mode in the event of a swarm attack by North Korean soldiers. These are just a few of many examples developed worldwide.
Buoying an International Treaty
Which of these technologies is most troubling to Sharkey? “All of them,” he says, “they’re all a deadly threat to global security.” Noel illustrates just one aspect of the complex political challenges countries face by discussing U.S. Department of Defense Directive 3000.09. The directive’s language states that the military will always ensure “appropriate levels of human judgment” are exercised over autonomy in weapon systems. This sounds okay, says Sharkey, but when you read the rhetoric in military documents, you notice that if the military can’t complete a mission because someone or some country has jammed communications, then the weapons can complete the mission on their own (presumably using whatever means they have available). What do they mean by appropriate judgment? Might this imply no judgment at all, in some cases? These are the types of insidious ethical details wound up in the issue.
As Noel sees it, the issue of no boundaries is very real, which is why we need an international agreement and treaty. “But the UK has said it won’t support a ban nor a moratorium because they’re not planning on using them without someone in the loop…if you’re not going to use them, why would you not want to ban people from using them?” questions Sharkey. Though much progress has been made in international discussions at the United Nations, not to mention the international attention gained through the media, it seems relatively clear that most first-world countries are still leaning toward leaving their options open. The underlying difficulty remains that no country wants to be the “weak” nation (and, as an Entrepreneur article relates, tech leaders have already dubbed this issue the next potential global arms race).
If we were to create a ban, how would it roll out and be enforced? The first thing that happens, says Sharkey, is that a group of governmental experts gets together. “You can take the horse to water but it has to drink itself,” he quips, noting that there are a lot of side events and discussions happening in which experts (like Sharkey) can offer advice and guidance. “You push these ambassadors, but ultimately they have to sit down and construct the treaty,” Noel says.
Of course, the activists pushing for the formation of a treaty may not agree with the final draft – it’s up to the government representatives of the nations themselves to define the treaty’s terms. “Once done, it’s going to be difficult to see if everyone’s complied, but that’s the case with a lot of weapons,” he says, “…but you can create incredible stigma.”