Nobody's Ready for the Killer Robot
A Q&A with General Robert Latiff on the ethics of
warfare in the autonomous future.
By Tobin Harshaw December 30, 2017, 5:00 AM PST
It was another busy year for everybody's favorite
automotive-industry disruptor, space-travel visionary and potential James Bond
villain Elon Musk. Tesla surpassed Ford and General Motors in market
capitalization; the Gigafactory began churning out lithium-ion batteries; his
neighborhood roofing company began installing solar panels that aren't crimes
against architecture; he's sending two rockets to Mars; he started digging a
giant tunnel under Los Angeles; and he dissed President Donald Trump over the
Paris Climate Accord. (OK, he had a few misses too; just ask anybody on the
Model 3 waiting list.)
Given all this, you may have overlooked another of Musk's
2017 initiatives: saving humanity. Last summer, he and a bunch of other
tech-industry A-listers -- including Google's artificial-intelligence guru
Mustafa Suleyman -- wrote a letter to the United Nations urging a ban on killer
robots. The future dystopia they anticipate would make you nostalgic for
Skynet:
Lethal autonomous weapons threaten to become the third
revolution in warfare. Once developed, they will permit armed conflict to be
fought at a scale greater than ever, and at timescales faster than humans can
comprehend. These can be weapons of terror, weapons that despots and terrorists
use against innocent populations, and weapons hacked to behave in undesirable
ways. We do not have long to act. Once this Pandora’s Box is opened, it will be
hard to close.
Meanwhile, the Pentagon was also writing a letter -- its
2018 budget request asking Congress for $13 billion in science and technology
money and another $10 billion for space-based systems. With all respect to Mr.
Hyperloop, he's going to find the military-industrial complex a much tougher
foe than the dinosaurs of Detroit.
Nonetheless, as somebody who's a tad nervous to be alone
in the house with Alexa, I'd like to think there are folks involved with
autonomous weaponry who share Musk's concerns, if not his tired mythological
metaphors. So this week I found one such person: Robert H. Latiff, the author
of "Future War: Preparing for the New Global Battlefield."
Latiff retired from the Air Force as a major general in
2006, last serving as the director of advanced systems and technology at the
National Reconnaissance Office. Since then, he has worked in the defense
industry and as a lecturer at Notre Dame and George Mason universities. Here is
a lightly edited transcript of our discussion:
Tobin Harshaw: General, you left the military more than a
decade ago, so why come out with this book now? Did your experiences in the
private sector and academia inform your view of the "new global
battlefield"?
Robert Latiff: Well, I'd actually been thinking of this
since back around the time of the invasion of Iraq. When I saw some of the
things that were going on, not the least of which was Abu Ghraib, they bothered
me a lot. Then I went to work, as you might expect, as a defense contractor,
and it was not a bad job. But neither there nor while I was in the military did
I actually hear anyone ask whether we should be doing some of the research we
were doing. You know, some of it was a little scary -- I don't know that it was
necessarily unethical -- but nobody ever asked the question.
TH: Can you give an example or two?
RL: Generally, some of the things that had to do with
biology were a little frightening, things like synthetic biology where you
don't really know the ultimate implications. And some of the work with
electromagnetics was a little scary, particularly as it had to do with humans
and lethality.
TH: Got it. So you spent some time in the private sector,
and …
RL: When I finally left and began teaching and doing
consulting I had some time and these things were still bothering me, and I
contacted the people at Notre Dame. They jumped at the chance of having me put
together a course, which became hugely popular. Ultimately I was asked to write
a book nominally but not totally based on the course I was teaching.
TH: And how did the students at Notre Dame react to the
material?
RL: First of all it frightened them a little bit. I think
they were probably more surprised than anything else. Many of them had never
had any exposure to the military at all.
When I started talking to them about some of the newer
weapons, it kind of blew their minds. For many of them the first response was,
“Yeah we've got to do this.” But when I started asking them to think about the
ethical implications, they sort of stepped back from that, and I think they
really got a lot out of it.
TH: As long as there's been warfare, there have been
arguments about the ethics of warfare. Thomas Aquinas's “just war” theory is
probably the most famous. As we look at today's technologies, which ones raise
the biggest questions for you viewed in that long tradition?
RL: I think that artificial intelligence and autonomy
raise probably the most questions, and that is largely because humans are not
involved. So if you go back to Aquinas and to St. Augustine, they talk about
things like "right intention." Does the person who is doing the
killing have right intention? Is he even authorized to do it? Are we doing
things to protect the innocent? Are we doing things to prevent unnecessary
suffering? And with autonomy and artificial intelligence, I don't believe there's
anybody even in the business who can actually demonstrate that we can trust
that those systems are doing what they should be doing.
TH: Well, we know that a lot of people are worried about
this. Last year a bunch of tech-industry bigshots wrote a letter to the U.N.
urging a ban on killer robots. There was also a group that held a meeting in
November, under the authority of the Convention on Certain Conventional Weapons,
that tried to get the ball rolling on a global ban. To me, this honestly sounds
like a lot of do-gooder nonsense. But do you think an international effort can
be made to limit the sort of apocalyptic level of things that people worry
about?
RL: I do think there can be, and there should be.
Incidentally, the people who are supporting this weapons ban actually contacted
me at one point. And my response to them was that I don't think it's do-gooder
nonsense. I think it's the right intention but it's the wrong approach. And let
me explain. First, I don't think bans ever work. And second, I don't believe
developed nations are going to participate in a ban. And so whenever you ban
something, pretty much it just goes underground and you can't police it because
in international law there’s no policing.
I am a big fan of arms-control agreements and nonproliferation
agreements. Countries like the U.S. and Russia and China, they're going to do
this regardless. And if we could create some kind of arms-control agreement
where we could maybe limit the things that we do collectively and have some
kind of verification regime -- and more importantly, agree to try to at least
contain the proliferation -- that would be a step forward. Yes, places like
North Korea and other rogue nations and ISIS, they're ultimately going to get
some of this stuff. But it would be a lot easier to police and verify an
arms-control agreement than a ban.
TH: I'm also a big fan of nuclear nonproliferation
agreements. But with that, we're talking about major hardware and a vast
industrial base that you need to build the stuff, and then actual nuclear
material that is hard to produce and hard to hide. Whereas with AI and cyber
weapons, production is pretty easy to hide. How do you do verification?
RL: Well, that is the question of the day, and I don't
have any pithy answers, other than to the extent that we can create some sort
of verification agreements. The major powers are pretty good at that. With
today's agreements, we have overflights of Russia and they'll overfly us, and
we'll take satellites over there and they'll bring satellites to look at us.
So with new technologies, we'll need to get some
opportunity to just go into these laboratories and see what they're doing, and
we're pretty good at projecting what their real capabilities are. I don't know
exactly what a regime would look like, but it would clearly be better than just
having a ban and trying to find the needle in the haystack.
TH: A few months ago I interviewed former Deputy Defense
Secretary Bob Work, who oversaw the Barack Obama administration's cutting-edge
Pentagon modernization. He calls this
initiative the "third offset," and says it is necessary because our
great-power competitors, Russia and China, are achieving parity with our
"second-offset" guided munitions and integrated battle networks. Do
you view the challenge of the future in that same way?
RL: I do and I agree with the secretary that they really
caught up with us. Just look at the recent demonstration of Russian cruise
missiles in Syria -- those were pretty good cruise missiles. With things like
autonomy and artificial intelligence and hypersonic weapons and electromagnetic
weapons, if they have not achieved parity, they're coming up really quite fast.
So we won't have the third offset for nearly as long as we had the second
offset.
TH: Right -- the advantages of the first two offsets,
first in nuclear technology and then in precision weapons, lasted decades. This
one is probably more short- to medium-term. So, given that, which of these new
technologies should the Pentagon be focused on?
RL: I think we still have an advantage in autonomy and
cyber. I think the one that worries me more than any of the others, and it
isn't clear to me that we actually have an advantage, is electronic warfare.
I'm not necessarily talking just about huge nuclear electromagnetic pulses, I'm
talking about everything from very small electronic warfare to great big
electronic warfare.
TH: Such as jamming systems?
RL: Well, jamming systems are one. Or electronic pulses
that could either destroy or interfere with some of our electronic systems. It's
a little bit like cyberwarfare, but it's using electromagnetic pulses. And then
battlefield weapons that incorporate microwaves and things like directed energy
beams.
TH: So, not just killer robots, but also "Star
Trek" phasers. Speaking of which, we know that outer space is, under U.N.
treaty, supposed to remain unweaponized. But I think it's on the verge of
getting weaponized pretty quickly. China, Russia and even countries like India
are putting a lot of money into space. Do you think we need a space
nonproliferation effort?
RL: I would personally welcome a space nonproliferation
effort. Again, I think the major powers are probably not anywhere near wanting
to do that. When you say "Star Wars" or "Star Trek," I
don't worry so much about that -- battles going on in space. I don't even worry
about the stationing of weapons in space that might have an effect on the
Earth. If you know anything about physics
and orbital mechanics, that's just way too expensive.
What I do worry about is that the U.S. -- and
increasingly China and Russia -- is extraordinarily dependent on space systems.
Everybody knows that. And so a ground-based anti-satellite system, or lasers or
electromagnetics that might interfere with the functioning of our very critical
space systems, or even on-orbit systems that might interfere with or ultimately
perhaps destroy one of our satellites -- these are all extremely worrisome.
TH: What else keeps you up at night?
RL: The whole approach that the DoD is taking to autonomy
worries me a lot. I’ll explain: They came out with a policy in 2012 that a real
human always has to be in the loop. Which was good. I am very much against
lethal autonomy. But unlike most of these policies, there was never any
implementing guidance. There was never any follow-up. A Defense Science Board
report came out recently that didn't make any recommendations on lethal
autonomy. In all, they are unusually quiet about this. And frankly, I think
that's because any thinking person recognizes that autonomy is going to sneak
up on us, and whether we agree that it's happening or not, it will be
happening. I kind of view it as a head-in-the-sand approach to the policies
surrounding lethal autonomous weapons, and it cries out for some clarification.