
Are Artificially Intelligent Military Systems Worth the Risk?


An MQ-9 Reaper during its first air show demonstration May 29, 2016, at Cannon Air Force Base, N.M. (U.S. Air Force/Master Sgt. Dennis J. Henry Jr.)

This post is part of KQED’s Do Now U project. Do Now U is a biweekly activity for students and the public to engage and respond to current issues using social media. Do Now U aims to build civic engagement and digital literacy for learners of all ages. This post was written by Thaddeus Ng, who completed his MS in Computer Science at Southern Connecticut State University.


Featured Media Resource
AUDIO: NPR

Weighing The Good And The Bad Of Autonomous Killer Robots In Battle
Hear about ethical concerns regarding artificial intelligence in military platforms and steps that have been taken to help maintain meaningful control over these systems.


Do Now U

Are artificially intelligent military systems worth the risk? #DoNowUAI


How to Do Now

To respond to the Do Now U, you can comment below or post your response on Twitter. Just be sure to include #DoNowUAI and @KQEDedspace in your posts.


Learn More About the Use of Artificial Intelligence in Military Systems

In the constant race to maintain the military advantage, defense research is often at the cutting edge of a wide variety of fields, and its impact can extend well beyond the armed forces. From radar systems to microwave ovens, many everyday inventions have their roots in military applications. One field of particular interest to the military is artificial intelligence (AI), especially with regard to fully autonomous systems.

The MQ-1 Predator unmanned aircraft (U.S. Air Force/Lt Col Leslie Pratt)

Military technology has included autonomous weapons since the guided munitions of World War II; however, recent developments have brought autonomous systems into the public eye. Drones and remote-controlled vehicles are regularly assigned tasks considered too mundane or dangerous for humans, such as operating in the aftermath of Hurricane Katrina and assessing and repairing the damage caused by the Deepwater Horizon oil spill in the Gulf of Mexico. Military drone systems, such as the MQ-1 Predator, have been in broad operation by the CIA for intelligence gathering since 2000, and the same drones were armed for military operations following the September 11 attacks. An armed, unmanned Predator drone performed a targeted killing for the first time on February 4, 2002. Research on and usage of unmanned vehicles surged in the early part of this century, with the United States military deploying 7,000 unmanned aerial vehicles and 8,000 unmanned ground vehicles by 2012.

While unmanned and capable of autonomous flight, current U.S. drone systems are, crucially, not fully autonomous. Drones can independently patrol routes and gather general information; however, platforms like the MQ-9 Reaper still require human authorization for certain actions, such as tracking a suspect beyond the prescribed path or applying lethal force.
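To make this "human in the loop" distinction concrete, here is a minimal sketch of how such an authorization gate might look in software. It is a hypothetical Python illustration: the Action categories, the request_authorization stub, and the default-deny rule are all assumptions made for this example, not details of any real drone control system.

```python
from enum import Enum, auto

class Action(Enum):
    PATROL_ROUTE = auto()     # routine, pre-approved behavior
    TRACK_OFF_ROUTE = auto()  # deviating from the prescribed path
    ENGAGE_TARGET = auto()    # lethal force

# Hypothetical: the only actions the platform may take on its own.
AUTONOMOUS_ACTIONS = {Action.PATROL_ROUTE}

def request_authorization(action: Action) -> bool:
    """Stand-in for a human operator's decision; denies by default here."""
    print(f"Requesting human authorization for {action.name}...")
    return False

def execute(action: Action) -> None:
    if action in AUTONOMOUS_ACTIONS:
        print(f"Executing {action.name} autonomously.")
    elif request_authorization(action):
        print(f"Executing {action.name} with human approval.")
    else:
        print(f"{action.name} withheld: no human authorization.")

execute(Action.PATROL_ROUTE)   # proceeds without a human
execute(Action.ENGAGE_TARGET)  # blocked pending human approval
```

The design point is that restricted actions fail closed: unless a human explicitly approves, the platform defaults to doing nothing.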


Recent trends, however, are moving toward fully autonomous systems that eliminate human involvement. Many nations, in addition to the U.S., have already fielded automated weapons capable of tracking and, if authorized, firing on a target. These include South Korea’s Samsung SGR-A1 “Intelligent Surveillance and Security Guard Robot,” deployed for perimeter defense at military installations, and Israel’s See-Shoot border defense system, capable of establishing a mile-deep kill zone along the Gaza border.

Integrating AI weapon systems into military platforms has a broad range of applications, but it also raises numerous concerns, both practical and ethical. Autonomous weapon platforms could significantly reduce the manpower required to perform a myriad of tasks. This matters because some of these tasks, like patrolling and mine clearing, are exceptionally dirty, dull, or dangerous. Autonomous weapon systems can perform the same task reliably for longer durations because they lack human limitations such as fatigue, boredom, and injury. Furthermore, in a combat situation, an autonomous system can rapidly analyze data and react without succumbing to panic or being impaired by injury.

The Terminator, a popular symbol of unchecked artificial intelligence (Flickr/Dick Thomas Johnson)

Paradoxically, many of the same benefits can be read as causes for concern. War is financially expensive and imposes a significant human cost on all parties involved. One worry is that autonomous weapon systems, by reducing the human cost of war, make war an increasingly viable option instead of a choice of last resort. There are also concerns about responsibility in the case of an accident. Traditionally, there is a chain of command in which all actions, especially the use of lethal force, follow strict guidelines or are explicitly approved by the person in charge. Failure to adhere to these military laws is punishable under the military justice system. AI, however, operates outside this command structure by design. An artificially controlled weapon analyzes a situation and, based on the data available to it, responds as it sees fit. If an AI-controlled platform acts disproportionately, there is no built-in oversight, because the platform does not seek external validation for its behavior.

The issue of responsibility is further exacerbated when a fully autonomous weapon platform has authority over the application of lethal force. In Iraq, there have been cases where suspected insurgents crouched in an alleyway were assessed as a threat until closer investigation showed they were tending to the wounded or the dead. The cues an AI can use to classify a target as dangerous are not always clear-cut. While a person might sense that something was off and hold fire, it is far from certain that a computer would be as discriminating, especially when it comes to lethal force.
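The discrimination problem can be framed as a classification decision made under uncertainty. The sketch below is a hypothetical Python illustration (the scores, thresholds, and labels are invented for this example) of why a single confidence threshold is a thin safeguard: an ambiguous scene like the crouching figures described above falls on one side of whatever cutoff is chosen, while a human-in-the-loop rule can route the ambiguous middle to a person.

```python
def assess_fully_autonomous(confidence: float, threshold: float = 0.8) -> str:
    """Hypothetical fully autonomous rule: one number decides everything."""
    return "ENGAGE" if confidence >= threshold else "HOLD"

def assess_with_human_review(confidence: float,
                             engage_above: float = 0.95,
                             hold_below: float = 0.5) -> str:
    """Hypothetical human-in-the-loop rule: ambiguity defers to a person."""
    if confidence >= engage_above:
        return "ENGAGE"          # still a human-set policy choice
    if confidence <= hold_below:
        return "HOLD"
    return "REFER_TO_HUMAN"      # ambiguous cases get a second look

# Suppose some model scores figures crouching in an alley at 0.82:
print(assess_fully_autonomous(0.82))   # -> ENGAGE
print(assess_with_human_review(0.82))  # -> REFER_TO_HUMAN
```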

Artificial intelligence can empower automated platforms, simultaneously reducing the human cost of operation and expanding the range of tasks these systems are qualified to perform. However, given the numerous legal and ethical concerns about the impact of AI on warfare, is the military benefit gained from AI worth replacing the human behind the trigger?


More Resources

Video: PBS NewsHour
How Smart Is Today’s Artificial Intelligence?
Artificial intelligence is already in our everyday lives and continues to push boundaries. How big of a threat is AI currently? And how much of a threat will it be in the future, especially in autonomous weapons?

Video: Lockheed Martin
The Future of Artificial Intelligence
Hear about future possibilities for research and exploration using robots and artificial intelligence.

Article: MIT Technology Review
Military Robots: Armed, but How Dangerous?
An open letter that calls for a ban on “offensive autonomous weapons beyond meaningful human control” has been signed by thousands of scientists and technologists, but experts are divided on the issue of using AI to control lethal weapons.

Article: NPR
Researchers Warn Against ‘Autonomous Weapons’ Arms Race
A broad look at the dangers posed by fully autonomous systems and the positive changes artificial intelligence can bring to our daily lives.


Find best practices for using Do Now, using Twitter for teaching, and using other digital tools.



KQED Do Now U is a biweekly activity in collaboration with SENCER. SENCER is a community of transformation that consists of educators and administrators in the higher and informal education sectors. SENCER aims to create an intelligent, educated, and empowered citizenry through advancing knowledge in the STEM fields and beyond. SENCER courses show students the direct connections between subject content and the real-world issues they care about, and invite students to use these connections to solve today’s most pressing problems.
