Are Artificially Intelligent Military Systems Worth the Risk?

An MQ-9 Reaper during its first air show demonstration May 29, 2016, at Cannon Air Force Base, N.M. (U.S. Air Force/Master Sgt. Dennis J. Henry Jr.)

This post is part of KQED’s Do Now U project. Do Now U is a biweekly activity for students and the public to engage and respond to current issues using social media. Do Now U aims to build civic engagement and digital literacy for learners of all ages. This post was written by Thaddeus Ng, who completed his MS in Computer Science at Southern Connecticut State University.


Featured Media Resource
AUDIO: NPR

Weighing The Good And The Bad Of Autonomous Killer Robots In Battle
Hear about ethical concerns regarding artificial intelligence in military platforms and steps that have been taken to help maintain meaningful control over these systems.


Do Now U

Are artificially intelligent military systems worth the risk? #DoNowUAI


How to Do Now

To respond to the Do Now U, you can comment below or post your response on Twitter. Just be sure to include #DoNowUAI and @KQEDedspace in your posts.


Learn More About the Use of Artificial Intelligence in Military Systems

In the constant race to maintain a military advantage, defense research often sits at the cutting edge of a wide variety of fields, and its impact can be long-lasting even outside the armed forces. From radar systems to microwaves, many inventions have their roots in military applications. One field of particular interest to the military is artificial intelligence (AI), especially in regard to fully autonomous systems.

The MQ-1 Predator unmanned aircraft (U.S. Air Force/Lt. Col. Leslie Pratt)

Military technology has included autonomous weapons since the guided munitions of World War II; however, recent developments have brought autonomous systems into the public eye. Drones and remote-controlled vehicles are regularly assigned tasks considered too mundane or dangerous for humans, such as operating in the aftermath of Hurricane Katrina and assessing and repairing the damage caused by the Deepwater Horizon oil spill in the Gulf of Mexico. Military drone systems, such as the MQ-1 Predator, have been in broad operation by the CIA for intelligence gathering since 2000, and the same drones were armed for military operations following the September 11 attacks. An armed unmanned Predator drone performed a targeted killing for the first time on February 4, 2002. Research on and use of unmanned vehicles surged in the early part of this century, with the United States military deploying 7,000 unmanned aerial vehicles and 8,000 unmanned ground vehicles by 2012. Yet there is a key distinction: while unmanned and capable of autonomous flight, current drone systems operated by the U.S. are not fully autonomous. Drones can independently patrol routes and gather general information; however, platforms like the MQ-9 Reaper still require human authorization for certain actions, such as tracking a suspect beyond the prescribed path or applying lethal force.
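
To make that distinction concrete, here is a minimal conceptual sketch in Python of the human-in-the-loop model described above. All names are hypothetical and reflect no real drone software: routine tasks run autonomously, while sensitive actions block until a human operator explicitly authorizes them.

```python
# Hypothetical sketch of a human-in-the-loop authorization gate.
# None of these names correspond to a real military system or API.

from enum import Enum, auto

class Action(Enum):
    PATROL_ROUTE = auto()      # routine: permitted autonomously
    TRACK_OFF_ROUTE = auto()   # sensitive: needs human approval
    LETHAL_FORCE = auto()      # sensitive: needs human approval

# Policy: the only action class the platform may take on its own.
AUTONOMOUS_OK = {Action.PATROL_ROUTE}

def request_action(action, operator_approves):
    """Return True if the action may proceed.

    `operator_approves` stands in for the human operator's decision
    channel; any action outside AUTONOMOUS_OK is blocked until that
    human explicitly approves it.
    """
    if action in AUTONOMOUS_OK:
        return True
    return operator_approves(action)

# With no operator response, patrol proceeds but lethal force cannot.
deny_all = lambda action: False
assert request_action(Action.PATROL_ROUTE, deny_all) is True
assert request_action(Action.LETHAL_FORCE, deny_all) is False
```

A fully autonomous system, by contrast, would collapse this gate: the approval callback disappears, and the platform's own assessment becomes final.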

Recent trends, however, are moving toward fully autonomous systems that eliminate human involvement. Many nations, in addition to the U.S., have already implemented automated weapons capable of tracking and, if authorized, firing on a target. These include South Korea's Samsung SGR-A1 "Intelligent Surveillance and Security Guard Robot," deployed for perimeter defense at military installations, and Israel's See-Shoot border defense system, capable of establishing a mile-deep kill zone along the border with Gaza.

Integrating AI weapon systems into military platforms has a broad range of applications but raises numerous concerns, both practical and ethical. Autonomous weapon platforms have the potential to significantly reduce the manpower required to perform a myriad of tasks. This matters because some of these tasks, such as patrolling and mine clearing, are exceptionally dirty, dull, or dangerous. Autonomous weapon systems can also perform the same task for longer durations and more reliably, free of human limitations such as fatigue, boredom, and injury. Furthermore, when placed in a combat situation, an autonomous system can rapidly analyze data and react without succumbing to panic.

The Terminator, a popular symbol of unchecked artificial intelligence (Flickr/Dick Thomas Johnson)

Paradoxically, many of the same benefits can be read as causes for concern. War is financially expensive and imposes a significant human cost on all parties involved. There are concerns that by reducing the human cost of war, autonomous weapon systems make war an increasingly viable option rather than a choice of last resort. There are also concerns about responsibility in the case of an accident. Traditionally, there is a chain of command in which all actions, especially the use of lethal force, follow strict guidelines or are explicitly approved by the person in charge, and failure to adhere to these military laws is punishable by the military justice system. AI, however, operates outside this command structure by design. An artificially controlled weapon analyzes a situation and, based on the data available to it, responds as it best sees fit. If an AI-controlled platform acts disproportionately, there is a lack of oversight, because the platform does not seek external validation for its behavior. The issue of responsibility is further exacerbated when a fully autonomous weapon platform has jurisdiction over the application of lethal force. In Iraq, there have been cases where suspected insurgents crouched in an alleyway were assessed as threats until closer investigation showed they were tending to the wounded or the dead. The cues available to an AI for classifying a target as dangerous are not always clear. While a person might sense that something was off and hold fire, there are concerns about whether a computer would be as discriminating, especially when it comes to lethal force.
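
One commonly proposed safeguard for exactly this scenario is a confidence threshold: the system may act on its own only when its classification is unambiguous, and must hold fire and defer to a human otherwise. The sketch below uses hypothetical names and numbers, not any fielded system; it shows the idea, and also hints at its weakness, since everything hinges on a single score that may be wrong about someone tending to the wounded.

```python
# Hypothetical sketch of confidence-based deferral to a human.
# Thresholds are illustrative; a real system would need far more
# than a single scalar "threat score" to make this judgment.

HOLD_FIRE = "hold fire"
DEFER_TO_HUMAN = "defer to human operator"
RECOMMEND_ENGAGE = "recommend engagement"

CLEAR_NON_THREAT = 0.20   # below this: clearly not a threat
CONFIDENT_THREAT = 0.98   # above this: classifier is near-certain

def assess(threat_score):
    """Map a classifier's threat score in [0, 1] to a decision."""
    if threat_score < CLEAR_NON_THREAT:
        return HOLD_FIRE
    if threat_score >= CONFIDENT_THREAT:
        return RECOMMEND_ENGAGE
    # The ambiguous middle band - e.g., figures crouched in an
    # alleyway who may be fighters or may be tending the wounded -
    # is exactly where human judgment is required.
    return DEFER_TO_HUMAN

assert assess(0.10) == HOLD_FIRE
assert assess(0.60) == DEFER_TO_HUMAN   # ambiguous: a human decides
assert assess(0.99) == RECOMMEND_ENGAGE
```

The debate in this article is precisely about the middle band, and about whether even the high-confidence branch should ever fire without a person in the loop.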

Artificial intelligence can empower our automated platforms, simultaneously reducing the human costs of operation and expanding the capabilities and tasks these systems are qualified to perform. But given the numerous legal and ethical concerns about the impact of AI on warfare, is the military benefit gained from AI worth replacing the human behind the trigger?


More Resources

Video: PBS NewsHour
How Smart Is Today’s Artificial Intelligence?
Artificial intelligence is already in our everyday lives and continues to push boundaries. How big of a threat is AI currently? And how much of a threat will it be in the future, especially in autonomous weapons?

Video: Lockheed Martin
The Future of Artificial Intelligence
Hear about future possibilities for research and exploration using robots and artificial intelligence.

Article: MIT Technology Review
Military Robots: Armed, but How Dangerous?
An open letter that calls for a ban on “offensive autonomous weapons beyond meaningful human control” has been signed by thousands of scientists and technologists, but experts are divided on the issue of using AI to control lethal weapons.

Article: NPR
Researchers Warn Against ‘Autonomous Weapons’ Arms Race
A broad look at the dangers posed by fully autonomous systems and the positive changes artificial intelligence can bring to our daily lives.


Find best practices for using Do Now, using Twitter for teaching, and using other digital tools.


KQED Do Now U is a biweekly activity in collaboration with SENCER. SENCER is a community of transformation that consists of educators and administrators in the higher and informal education sectors. SENCER aims to create an intelligent, educated, and empowered citizenry through advancing knowledge in the STEM fields and beyond. SENCER courses show students the direct connections between subject content and the real world issues they care about, and invite students to use these connections to solve today’s most pressing problems.

8 March 2017, SENCER

  • hayleyhibbens

    I think that this topic is very controversial because I do see the reasoning behind both sides. I agree that using this advanced technology would be beneficial because it would take the place of actual human soldiers and keep them out of harm's way. However, with this new technology we might start to resort to war faster, knowing that our soldiers would be safe regardless. Here is an article about motion sensor bullets that I feel connects to this topic. http://www.cs.unc.edu/~tracker/media/pdf/cga02_welch_tracking.pdf #MyCMSTArgs #DoNowUAI @KQEDedspace

    • Lauren Rhude

      I totally agree! My thoughts on the matter are quite similar to yours!

  • Lauren Rhude

    I think this topic has the potential to be controversial for one main reason: each side has different but logical reasoning. There is reason behind both sides, and that's why I think it will be controversial. I agree that using artificial intelligence could potentially eliminate the risk of putting actual human soldiers in harm's way, which would be beneficial to our society, but there could also be some negative aspects to consider. One main problem that could arise is relying on the advanced technology, otherwise known as artificial intelligence, more than our soldiers. And in knowing our soldiers were safe, we as a country would be more willing to take risks, or, let's say, go to war. There are really good aspects to either side of this topic, but maybe this will make you gravitate to one side more than the other. Here is an article about some more negatives and positives of artificial intelligence.
    http://futureoflife.org/background/benefits-risks-of-artificial-intelligence/ #MyCMSTArgs #DoNowUAI @KQEDedspace

    • Alisha

      I agree with what you said about how it would get rid of some risks for human soldiers. It’s interesting, though, that this is the goal of these devices, yet they also are still hurting people. I feel like they need to be even more advanced to get rid of the risks they hold. #MyCMSTArgs

  • Diana Avila

    Reading the article, I was able to take on both sides and at points agreed with both, but mainly with the pro-artificial-intelligence side of the story (despite the economic problems it could hold). Toward the end of the article, when it starts talking about the ethical side, the fact that the computer could get confused or biased about someone who is simply healing a wound or "tending to the deceased" was more captivating to me. I'm not for wars, and if the computer can't distinguish and will kill anything in its way, then I, per se, am not for it.

    • Alisha

      When I got toward the end of the article, the same thing happened to me! I was startled by the fact that the machine is able to create bias. I don't think it's a good thing that the machine is able to kill when it's unsure or not assessing the situation credibly. #MyCMSTArgs

  • Alisha

    Using artificial intelligence can be potentially more dangerous, as of right now (https://www.technologyreview.com/s/539876/military-robots-armed-but-how-dangerous/). I think that, while this is something scientists have been trying to do for decades, artificially intelligent military defenses need better technology so that we can be sure they are safe. Moreover, if these devices become too big to handle, there is a possibility of greater, worse wars because of them, which is another aspect that needs to be considered. #MyCMSTArgs

    • TJ

      I agree the potential for escalation is very great. When we remove the human component of war it just creates greater incentive to “send in the robots” without having to bear the weight of the consequences.

  • Loren Drake

    I think technology will advance so much that robots will be the next major thing. If they can carry human-like senses and characteristics, and can find things and do things we can't without a high risk of dying, why not preserve a life and let a robot take the sacrifice?

  • Josh Mackey

    After reading this article, I understand the potential on both sides, but I can see more of why it is dangerous than good, so I would have to go against the claim after reading the article.

  • Darrick Webster

    Darrick- I am all for the military doing this, but I believe we need more time to do some research; for instance, we should be completely in control. We need to be 100% sure that the robots are under our command. If this saves lives, why shouldn't we be for it? I understand that the robots could turn on the human race, but that's why I said more research should be done. We should make some robots and constantly test them, and if they don't do anything unplanned for, let's say, 10 years, I think it's safe to say they are under our control.

  • Kaitlyn

    I believe that artificially intelligent robots are not what we need for today. I agree it sounds like it would be very helpful since they could help do work or build houses, etc. Robots are not the same as humans. If they wanted to kill, they could with no problem. Humans have hearts and feelings. I believe that robots would not help because they would not have the same viewpoint as a human would. It could lead to a huge mistake and could kill the innocent. Plus, all technology has mistakes or breaks at some point. If something is powerful enough to kill when it is working correctly, there is no telling what it would be capable of doing when it messes up.

    • Maddie Barraza

      I agree with you completely. There is a certain type of humanity when a soldier sees and kills his enemy; if we were to use AIs to eliminate our enemies, it would make killing far too simple, and we would lose that aspect of war that I believe is really important. We should be focusing our energy on using robots for only good. Like what you and the video said, e.g., building houses. #MyCMSTArgs

  • Diamond R

    #envi110/5; I believe that using advanced technology would be nice because we could monitor our soldiers, see everything that is going on, and, if they are in danger, know exactly where they are because of the monitors. -Diamond R.

    • Zac Adler

      We already have ways of recon and monitoring our soldiers to ensure their safety. This article has to do with the use of artificial intelligence to neutralize threats in action. What do you think about a robot having the choice, based on data to take a human life?

  • Benjamin Buckhalter

    Humans are becoming extremely lazy in everything we do, sadly including war! Technology should not go so far that humans aren't fighting humans anymore, or "robots" and humans for that matter. Plus, this technology is very expensive. So it is a lose-lose situation for everyone.

    • Keaton Hill

      I tend to agree with this. Laziness seems to be a contributing factor in a lot of technological developments in recent times, and this is no exception. Technology should not go this far, and should instead be used to further the good of mankind. Who knows what medical breakthroughs we could have if we were putting our technological resources into that field instead of into war. #DoNowUAI #MyCMSTArgs

  • Ashley B.

    On the surface, this seems like a good idea. We could potentially save thousands of human lives. However, I believe that in the long run it will be a negative thing. Artificial intelligence is too risky to be a good idea right now. #envi110-5 #donowuai

  • danialynae

    I think I understand where both sides are coming from, but I would have to go against this claim as well. I can see the pros and cons of both, but for this I do not agree.

  • brian

    I feel that it could go both ways in terms of artificial intelligence on the battlefield. The positive side is that, as technology improves, missions can be carried out more safely for allies and more humanely for enemies; on the negative side, the robot might make a mistake or have an error in the system and turn on friendly soldiers or innocent bystanders.

    • Zac Adler

      I agree with you. I don't see a problem with using AI to perform extremely tedious tasks, or even ones that prove highly dangerous for humans. But I just don't think we are technologically advanced enough yet to put the decision of whether or not to target and neutralize a "threat" (whatever that may entail) in the hands of a robot.

    • Keaton Hill

      I agree that there are really two strong arguments on both sides of this debate. Technology could either make us safer or malfunction and cause more problems when it was designed to fix them. I also agree that it's hard to put trust in a machine, considering there's a possibility it could turn on you at any second. #DoNowUAI #MyCMSTArgs

  • Jenny Chien

    I don't really agree with using robots to open war. Although robots are made by humans, they are lives, too. So why can we use them to wage wars, or for our own purposes, without thinking about their situation and feelings?
    Moreover, using robots in battle might increase how often we go to war. Right now we are against war because we are afraid of losing our families. However, if those roles are filled by robots, will we still care about the harm of battle?
    Yu-Chen
    ENVI110-005

    • TJ

      Robots don’t have feelings, nor does their existence count as a “life”. However I fully agree that wars with robots would likely increase our taste for war. When we don’t have to worry about loss of our soldiers we can justify our engagement in combat without having to explain away the casualties.

  • Zac Adler

    Since "autonomous weapon platforms have the potential to significantly reduce the manpower required to perform a myriad of tasks," such as screening terrain for mines and removing threats, I believe autonomous robots are highly beneficial to our military, but only for certain tasks. The article mentions that we have come to a point where AI could be in charge of taking lethal action against threats. I believe that while robots used in combat may conserve life on our end, it isn't exactly ethical in terms of war. The ever-changing conditions of combat need to be handled by a trained human professional.

    • Maddie Barraza

      I agree with you. I believe that AI used to protect lives by screening for mines and whatnot is the way to go. But putting AIs on the battlefield is dangerous; the risk of an accident or malfunction is one we shouldn't take.

  • Kevin Hackett

    I believe robots are definitely the future of technology. However, artificial intelligence can be a very dangerous direction to take in this situation. Exploring and testing many new things could lead to chaotic, computer-driven events in the future, which is something we don't need happening.

    • Diana Avila

      Technology should be used for good things, not for killing random innocent people because it can't determine whether they're good or bad. In addition, soldiers should be kept doing their job, not because that's tradition but because people would lose their jobs, and it's just not right in my opinion. #MyCMSTArgs

  • Maddie Barraza

    I believe that artificial intelligence should not be allowed in warfare. As technology advances, the act of war has changed. The way drones are changing how war is fought has been a popular topic for the past couple of years, and I believe the arguments people make about drone strikes are the same as, or very similar to, the one we're having right now. These drone strikes are making killing far too easy. For people to kill hundreds of people, both innocent and guilty, without having to look them in the eye defeats the purpose of war, in my opinion. There's a sort of art to war, as ugly and somber as it may be. The act of killing and dying for one's country is considered one of the bravest, most honorable things a man or woman could do. If we have robots fight our fights for us, soldiers will lose the honor and respect they had in the past. Having artificial intelligence do the other dirty work will desensitize war and give people the freedom to kill people without even having to see them, or even know about them. #DoNowUAI #MyCMSTArgs

    • Diana Avila

      I really like how you say that soldiers are honored for doing their job and are well respected. I agree with you, that drone killing is unfit for our society and killing innocent people is just not okay, and that is why I don’t agree with drones.

    • Yashar C.

      I admire your points. Drone strikes will definitely desensitize nations’ armies even more. Along with such, disregarding the lives of innocents becomes easier too. Like you expressed, I feel that machines are too cold and will have no remorse in killing anyone. #MyCMSTArgs #DoNowUAI

  • TJ

    I have a lot of mixed feelings about AI in warfare. While it does make sense in terms of reducing the human costs of warfare, it also dehumanizes warfare. War is the most painful and destructive thing we do as humans. To pass that burden onto artificial intelligence means we conduct destructive engagements without anybody having to bear the moral costs. We have already seen what happens on the internet when people are allowed to be disconnected from the consequences of their actions. While I support the freedom of the internet despite the disgusting consequences, the same cannot be said for war between countries, between peoples, and between peoples and a state. The potential for one country to bomb a people without anybody having to pull a trigger is as disconnected as it gets. It means nobody has to question themselves in war. Nobody has to dwell on the fact that people are dying. Those people are unknown and unseen. Artificial intelligence doesn’t question orders… doesn’t sympathize… doesn’t wonder about possible casualties.

    The NPR story accompanying this piece (http://www.npr.org/sections/thetwo-way/2015/07/28/427189235/researchers-warn-against-autonomous-weapons-arms-race) also points out that drones, and thus AI, are not expensive, rare, or difficult to obtain:

    "Autonomous weapons aren't like nuclear weapons, since they 'require no costly or hard-to-obtain raw materials … they will become ubiquitous and cheap for all significant military powers to mass-produce.'"

    This means even if we were to establish strict guidelines and uphold the highest moral standards, AI warfare is not limited to what we do. Imagine if China, Russia, North Korea, Iran, Saudi Arabia, or any of a number of hard line states were to engage in AI warfare. The results could be disastrous and culpability would be unenforceable. While I can appreciate the fact that AI relieves the burden on soldiers, I cannot peaceably accept AI warfare as just a new standard.

    • Bryan Shin

      Yes! I completely agree with you on how allocating the burden of war onto AI will depersonalize it. Your quote, “It means nobody has to question themselves in war,” makes me imagine that a war fought with AI would be incredibly more vicious and bloody as humans forget about the morality of taking lives.

  • Grant Neterer

    I'm not a big fan of AI warfare. Although these robots are made by humans and designed to help humans, something can always go wrong. It does make sense in terms of saving human lives by limiting the number of people needed to go into battle. But sending in AI units takes away the burden of all the travesties of war.

    • Yashar C.

      I completely agree with you. To be honest, I think it would be a different story if nations found a way to wage war using only AI machinery in a restricted battle zone with no humans involved. Not to belittle the issue, but that sounds like a movie. I'm only concerned with the casualties and lives of innocents, and I'd rather see no one die. #DoNowUAI #MyCMSTArgs

    • bgirl272

      I think that the fact that AI warfare would reduce the number of people who have to go into battle is the very reason we need it so much. Instead of thousands of people dying, war could become a battle of who truly develops the best technology. While it obviously isn't at that level yet, if it is worked on more it could be very valuable to humanity and actually keep a lot of civilians out of danger.

  • TN

    The controversy with AI is that we're dealing with something that by definition operates outside of our control, but can perform tasks for us that we don't or can't do ourselves. It cheapens the conflict because it takes the human cost out of warfare (quite literally, send in the drones), and it takes any oversight out of the equation when you start having an AI that makes judgment calls on its own. That being said, in wartime there are high-risk roles, like EOD (explosive ordnance disposal) and mine clearing, that need to be undertaken. So while it may cheapen the conflict, is there an obligation to take every possible step to preserve our troops' safety and well-being? We already have robots designed for EOD work to reduce the risk, so AI capable of performing these tasks, with the human controller taken out of the equation, would be a logical conclusion.
    Again, both sides of the coin can be argued from this point of view, as the nuclear weapons dropped on Japan (a deeply controversial topic for similar reasons, when you contrast the drastic casualties and long-lasting wounds inflicted on the Japanese with the steep losses our troops would have suffered in a traditional land invasion) were deployed for the same reason: to preserve the safety and well-being of our men and women in uniform.
    The middle ground would be a way to inherently fetter AI so as to persistently maintain a man in the loop with executive oversight over the AI's decision making. But at that point you open up an ethical issue about whether we are creating a "slave race" subservient to its human controllers. Furthermore, there is no guarantee that all AI would have these restrictions in place, at which point we are potentially looking at the next evolution of warfare, much like how gunpowder rendered the bow and arrow obsolete; the possibility of an arms race skyrockets as every nation-state aims to equip itself with the latest and greatest AI to augment or replace its armed forces.

    • Bryan Shin

      You make a great point with the “next evolution of warfare.” I fear a war fought with much more advanced weaponry in the form of AI, and the massive effects it would have on casualties and destruction. I thoroughly enjoyed your comment.

  • Yashar C.

    https://www.engadget.com/2016/08/18/robots-of-war-ai-and-the-future-of-combat/ Using AI warfare should be banned. Its application will never be worth the risk, because too much can go wrong, as explained in the article above. Considering that, constant use will make conflicting nations more reliant on these systems, and with that, accidental deaths of innocents will increase. Drones are programmed to eliminate targets in the field. What if the targets are around children? The sheer fact that nations are starting to apply AI warfare more is scary enough. Despite all those sci-fi films foretelling machines evolving and taking over, those in higher power still wish to employ these methods of battle. I know that referencing the sci-fi genre seems a little drastic, but we will not be too far from such a reality if nations grow too dependent on these systems. Imagine if U.S. law enforcement started using AI drones on its own citizens. #DoNowUAI #MyCMSTArgs

    • bgirl272

      I think that there have been many other things like this in the past that may have been considered completely dangerous as well, including the internet today. Of course there is a risk at first, but they can definitely fine-tune it enough so it won't be a risk.

  • bgirl272

    http://futureoflife.org/open-letter-autonomous-weapons/ Using AIs could seriously lower the number of civilians who are killed during warfare, as well as the servicemen themselves. Even though the technology isn't yet where it needs to be to be safe enough, it can get there. Technology takes time, but the end result would be worth it, and though the idea of these systems running by themselves seems scary, in the end it is we who created them.

  • Lorena Z

    I have a lot of mixed feelings about artificial intelligence because it could be both helpful and dangerous. Artificial intelligence could be helpful when it comes to war: it could fight in the battles instead of millions of people dying. While this may sound like a good thing, something can always go wrong. For example, it could kill or hurt the wrong person, or go out of control. If it goes out of control, then many more people could die than if we had not had it in the first place.

  • Bryan Shin

    This was a very thought-provoking read.

    I understand that AI can save money, resources, and the lives of soldiers as well, but to allow a machine to judge whether or not to take human lives seems immoral. If a machine incorrectly kills a perceived target, there is no one to take responsibility for it. Ultimately it will be trivialized as software error, reconfigured, and yet the same thing may recur. It seems that AI technology may depersonalize war. Killing is a moral issue, something machines are not capable of grasping, so I believe that AI military technology should be limited to surveillance and data collection, and unable to use lethal force solely on its own.

    Bryan Shin, #muhon2020

  • Megan Sanford

    I have mixed feelings concerning the use of artificial intelligence technology. On the one hand, I agree that its use could result in far fewer casualties among both soldiers and civilians. On the other hand, I feel that its use desensitizes the concept of war. A war waged through technology would almost pose an "easy way out." With AI technology, war would become something of a habit, and people would be able to escape the burden of its consequences. War could easily become technological warfare, which could become really dangerous really quickly. For war to be of any value, I feel it is important to keep human fighters involved. Without human involvement, war could not unite a country behind a common cause, people would not honor the idea of war, and war would become much more frequent.

    • Maddie Napier

      I agree. I think that AI will lead to technological warfare, which would be extremely dangerous and impersonal, causing people not to take to heart the outcomes of their actions. #MyCMSTArgs

  • Keaton Hill

    I’m not entirely sure how I feel about using artificial intelligence in war. I think that, in some ways, it could be a good thing and would reduce human involvement in war. It could also give us a winning edge in a given military conflict. However, in a lot of ways, using AI for such a purpose seems morally wrong. Letting a machine do the killing that was meant for humans almost seems to project a lack of respect for human life. In addition to this, if a drone were to make a mistake somehow and kill the wrong person, or even multiple wrong people, who would take responsibility for it? I also think that having machines fight wars would really alienate the public and create a lack of support at home for any war. #DoNowUAI #MyCMSTArgs

    • Maddie Napier

      I agree. War is meant to be personal, and using drones will reduce the personal connection to the fight; the power AI provides could go to people's heads. #MyCMSTArgs

  • Maddie Napier

    I think that using AI for military purposes is kind of scary; there is a lot of power behind the technology, and in the wrong hands it could be disastrous. Even though AI reduces potential deaths on the side of the person using it, it would increase the deaths on the other side. I think using AI is a way to make war impersonal, which could result in terrible tragedies. #MyCMSTArgs
