
Annotated Source List

AEGIS weapon system. (2016, January 5). Retrieved January 7, 2016, from America's
Navy website: http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=200&ct=2
Summary:
This Navy fact file describes the autonomous system known as the AEGIS system, which has been steadily evolving and expanding since it was first introduced on the test ship USS Norton Sound. The Navy uses this system on its ships for multi-mission operations. The system relies on many different pieces of technology to function, such as a multi-function phased-array radar that can search, track, and guide missiles all at the same time. The ships were designed and built over several years, with continual efforts to improve each successive set. There are currently 84 US ships equipped with the AEGIS weapon system.
Applications of Research:
This gives me good background information on one type of weapon system that autonomous weapons could take the form of. It also gives me a good base for examples and ideas.
Anderson, K., & Waxman, M. (n.d.). Law and ethics for robot soldiers. Columbia Public
Law & Legal Theory. Retrieved from Social Science Research Network database.
(Accession No. 2046375)
Summary:
This journal article covers the entire issue surrounding autonomous weapons and the ethics of using them. It starts by discussing how the development of these weapons would be incremental, with each new discovery integrated over time. It then considers the possibility of these machines triggering an arms race over components and artificial intelligence. It also discusses the requirements these weapons would have to meet for their actions to be legal and/or ethical, including, but not limited to, distinction and proportionality. Along the way, the journal examines how incremental development would shift public opinion as the technology is implemented more and more. Finally, it covers the four big objections: whether proportionality and distinction can be programmed at all, the removal of the moral agent (the human), the loss of accountability, and the risk of causing more war by decreasing its cost and making it a more attractive option.
Applications of Research:
This is a very good overview of my entire topic, examining the pros and cons of autonomous weapons and their possible applications and implications.
Artificial intelligence. (2007). In World of computer science. Retrieved from Gale
Science in Context database. (Accession No. GALE|CV2424500044)
Summary:

This document, from the database Gale Science in Context, gives a brief general history of the field of artificial intelligence and then outlines the field's foundations. Artificial intelligence was founded as a field during a conference at Dartmouth College in the summer of 1956 by Marvin Minsky, John McCarthy, Herbert Simon, and Allen Newell. The article goes on to describe how loosely intelligence is defined, even with the long-standing Turing Test (created by Alan Turing), which judges a system intelligent if a human reading its written answers to questions cannot tell that it is a machine. The document then describes the different methods of creating an AI and where each method is applicable. The methods it mentions are if-then rule systems that build toward a solution through deductive reasoning, distributed algorithms that form a kind of neural network, and techniques that draw on the field of genetic algorithms to mimic biological evolution.
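To make the first of these methods concrete, here is a minimal sketch of an if-then rule system that deduces new facts by forward chaining. This is my own illustration, not code from the article, and all of the rule and fact names are invented.

```python
# Minimal forward-chaining rule system: repeatedly apply if-then rules
# until no new facts can be deduced. All facts and rules are illustrative.

rules = [
    ({"has_wings", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_swim"}, "is_waterfowl"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # If every condition is a known fact and the conclusion is new,
            # deduce the conclusion and keep iterating.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_wings", "lays_eggs", "can_swim"}, rules))
# -> all five facts, including the deduced "is_bird" and "is_waterfowl"
```

Even this toy version shows the deductive character of the approach: every conclusion can be traced back through the chain of rules that produced it.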
Application of Research:
This research gives a very clear idea of where the field came from and how it has progressed since its creation. I can use this article for much of my background information and as groundwork for the rest of my research.
Ban 'Killer robots' before it's too late. (2012, November 19). Retrieved November 30, 2015, from Human Rights Watch website: https://www.hrw.org/news/2012/11/19/ban-killer-robots-its-too-late
Summary:
This article describes the major up-front flaws of autonomous weapons. For one, the weapons would not be controlled by humans, removing human qualities and emotions from lethal decisions. It also points to a 50-page report devoted exclusively to the problem of autonomous weapons. Experts predict that in as little as twenty years we may start seeing fully autonomous weapons, not just unmanned drones.
Applications of Research:
This article gives me another good resource to pull from for later reading reports. The 50-page report will be useful for many reading reports later in the year.
Bolton, M. (2013, April 24). US must impose moratorium and seek global ban on killer robots. Retrieved November 9, 2015, from http://thehill.com/blogs/congress-blog/technology/295807-us-must-impost-moratorium-and-seek-global-ban-on-killer-robots
Summary:
This article, from the news website The Hill, is built around one big analogy about why autonomous weapons should not be used in war: would you want the people in Congress replaced by robots that cast votes based on opinion polls and generalized data instead of humans with a conscience? It goes on to discuss a statement the Department of Defense released cautioning against the military use of autonomous weapons, and it warns against the ethics of letting AI make decisions for us.
Applications of Research:

This article gives a strong argument against the use of artificial intelligence and autonomous weapons in the military. It also creates a better understanding of the ethical issues behind the use of these weapons.

Carr, E. (Ed.). (2012, June 2). Morals and the machine. Retrieved September 28, 2015,
from http://www.economist.com/node/21556234
Summary:
This article, from The Economist, is about the morals and laws that should govern the creation and actions of artificial intelligence. The article discusses different legal situations, such as who would be responsible for an accident involving a car controlled by an artificial intelligence: is it the manufacturer's fault or the driver's? It briefly covers the laws set forth by science fiction writer Isaac Asimov, which command a robot first to protect humans, second to obey humans, and finally to preserve itself, with each law yielding to the one before it. The article concludes with what must be done to prevent legal problems: first, a law that establishes who is at fault when systems fail; second, a generalized moral code settled on by general consensus; and lastly, collaboration between lawyers, engineers, and policymakers.
Applications of Research:
This research was useful for finding positive points about artificial intelligence and how it can help humanity. It also gives me some foundation for researching the legal side of this issue.
Duffy, M. (2009, August 22). Weapons of war - poison gas. Retrieved October 23, 2015,
from http://www.firstworldwar.com/weaponry/gas.htm
Summary:
This article, from the website FirstWorldWar.com, is about the use of poison gas during World War I. The first use was by the French; however, the first major use was by the Germans, who also began the first major studies and experiments on different kinds of gas. The other nations quickly condemned the use of poison gases, but this did not stop its escalation: Germany and the Allied forces continued to study and employ gas. Its use posed many problems because it worked only in very specific situations. Poison gas caused 9% of the casualties on both sides from May 1915 onward, and its use in war was internationally banned in 1925.
Applications of Research:
I will use this information to compare the use of poison gas (remembered as a terrible weapon of pain and suffering) to the use of autonomous weapons. This will help to emphasize the negative points of autonomous weapons.

Garcia, D. (2015, April 25). Killer robots: Towards the loss of humanity. Retrieved November 9, 2015, from http://www.ethicsandinternationalaffairs.org/2015/killer-robots-toward-the-loss-of-humanity/
Summary:
This article, from the website Ethics and International Affairs, gives a general outline of the discussion and arguments over the use of autonomous weapons. It explains how the organization Campaign to Stop Killer Robots argues for human intervention in matters of killing, leaving humans the final decision on whether somebody lives or dies, and how this argument has gained considerable traction within the United Nations. Supporters of autonomous weapons argue that ethical autonomy can be programmed into the robots so they follow the ethical standards of war while remaining autonomous and unmonitored. There is a flaw in this argument, though: no known program can perform that kind of computation or reasoning. Supporters also argue that autonomous weapons would lower casualties on the field of battle while removing the unpredictable factor of human emotion (fear, excitement, anger, etc.). The article also raises the question of how malfunctions in such code could be prevented, since no program is infallible.
Applications of Research:
This article gives strong arguments for and against autonomous weapons. It also gives me other organizations and articles to look into that can support or refute different points.
Hagerott, M. (2015, October 19). RE: Independent research project [E-mail to the author].
Autonomous weapons: An open letter from AI & robotics researchers. (2015, July 28).
Retrieved October 23, 2015, from
http://futureoflife.org/AI/open_letter_autonomous_weapons#signatories
Summary:
In this email, Professor Hagerott suggested a few things that I could start looking up. First, he suggested I analyze the different weapons introduced in the World Wars and how they could relate to the topic of autonomous weapons. He also provided a website for a group against the development of autonomous weapons, where I found an open letter stating a few reasons why AI-controlled weapons are a negative: war becomes cheaper to wage because the parts are inexpensive for a powerful government to obtain, dictators gain an easier way to control people, and the field of AI suffers a backlash.
Applications of Research:
This provides good starting points for researching effects on casualties by looking into how other new weapons affected the casualties of previous wars. It also gives me some more negatives of the use of autonomous weapons in the military.
Hagerott, M. (2016, January 3). [Personal interview by the author].
Summary:
In this interview I got answers to a lot of my questions, along with the opinion of an expert in the field. First, he thought that autonomous weapons are an evolutionary step in war and will probably come about in some capacity before long. He also had experience on a ship carrying the AEGIS weapon system and had no problems with software glitches, although the system was monitored at all times. He did mention a few stories about weapons glitching and shooting their own men. He felt that weapons should not be allowed to have empathy or sympathy, due to the ethical and logistical problems surrounding these weapons. He also cited several sources that I could use in later reading reports if I have not already found them.
Applications of Research:
I can use this interview as a stepping stone to find good sources to look into, as he gave three or four sources in the interview itself. He even mentioned a few people that I could look up and find articles on. He answered questions I had about the likelihood of glitches and how they could affect a machine.
HumanRightsWatch. (2012, November 11). Pull the plug on killer robots [Video file].
Retrieved from https://www.youtube.com/watch?v=AlRIcZRoLq8
Summary:
This video talks about how autonomous robots would completely eliminate the human factor from any conflict. It briefly mentions that countries like the United States are very excited about this development and are looking into the option. It features professionals who discuss the negatives of autonomous weapons; one talked about how a robot may not be able to tell the difference between a soldier pointing a gun at it and a little girl pointing an ice cream cone at it (a bit extreme, I know). The video focuses heavily on the absence of human interference and the robot making decisions on its own. It also raises the questions of who is at fault and what happens if the machine goes haywire.
Applications of Research:
This video gives a list of fair negative points that should be considered, not disregarded, along with the views of professionals in the field of robotics.
LaGrandeur, K. (2015). Emotion, artificial intelligence, and ethics. In J. Romportl, J. Kelemen, & E. Zackova (Eds.), Beyond artificial intelligence (pp. 97-106). Cham, Switzerland: Springer International Publishing.
Summary:
This chapter from the book Beyond Artificial Intelligence discusses the safety issues of robots used in the military and the dangers of the bonds that can form between a person and a robot or artificial intelligence. The two types of robots that attract the most use and attention from the public and corporations are personal robots and military robots. A key safety issue is the autonomy of the robots, especially those designed to use weapons. The concern over distinguishing between civilians, allied soldiers, and enemies can become an extremely large issue, especially after a smart anti-aircraft weapon turned on the soldiers operating it in South Africa. Ronald Arkin created a program that produces a very rough semblance of guilt in a robot, though, as the chapter says, it is more like "diagnostic troubleshooting" than real guilt or empathy. The chapter then points out that empathy is a capacity known mostly in living things, grounded in experiences such as pain and happiness, and asks whether it is moral to make robots and AI feel pain. There is also an inherent danger with companion robots: they are designed to create an empathetic bond, but the bond is only one-way, because the robot cannot become emotionally attached to anything. This can lead to manipulation by corporations, which can program the robot to convince the user to buy more of its kind (not exactly diabolical, in my opinion). The greater danger comes from emotional attachment to life-saving robots: if they break down, people may be reluctant to give them up.
Applications of Research:
This research yielded great results. The information gave me a lot of key terms and other ideas that I can use for follow-up research. It also gives me a few of the negative points of AI and maybe even a few key points to focus on while researching and creating my project.
Lewis, T. (2014, December 4). A brief history of artificial intelligence. Retrieved October
13, 2015, from http://www.livescience.com/49007-history-of-artificial-intelligence.html
Summary:
This article gives a brief history of artificial intelligence. It describes how ancient Greek myths featured robots and how the ancient Egyptians and Chinese built automatons; these were the first appearances of the concept of AI, which classical philosophers later formalized by describing human thinking as a system. However, the field truly began in 1956, when the term "artificial intelligence" was coined at a conference at Dartmouth College. The field was greatly overestimated and went in and out of fashion, especially in terms of government funding, until 1993; since then it has remained popular and actively studied. Three great accomplishments of AI are that one system (controversially) passed the Turing test, one won Jeopardy!, and one beat a chess grandmaster.
Applications of Research:
This article gives me good background information for the field of AI, which is what I
need for the project (especially the hypothesis presentation).
Losing humanity: The case against killer robots. (2012, November 19). Retrieved November 15, 2015, from https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots
Summary:
This article covers the problems autonomous weapons have with the concepts of proportionality, military necessity, and the Martens Clause. Proportionality is the principle that a military attack's benefits must outweigh its cost in lives, civilian or otherwise. The problem autonomous weapons would have with this is that no program could be made strict or capable enough to evaluate something so subjective. The same problem comes up with military necessity, where a machine may be unable to tell the difference between someone seriously injured and someone faking it, as a person could. Deploying autonomous weapons would also change what qualifies as a military necessity and could make the weapons themselves one. Finally, the Martens Clause would be contradicted, because autonomous weapons go against the public conscience: when surveyed, a majority of the public opposed their use.
Applications of Research:
This gives me more laws whose requirements autonomous weapons would struggle to meet. It is a good basis for an argument against the use of autonomous weapons.

Losing humanity: The case against killer robots. (2012, November 19). Retrieved November 15, 2015, from https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots
Summary:
This article is about how there is no way to fairly assign blame when a robot kills an innocent civilian. It goes in depth about why it would be unfair or impossible to hold the manufacturer, the robot, the programmer, or the commander responsible for such a killing. If the manufacturer could be held responsible, there would be no point in making these weapons at all, because they could cause more legal trouble than they are worth; moreover, the victims' families would probably not be in a position to take legal action. Programmers and commanders should not be held responsible for a similar reason: they cannot foresee or stop what the robots might do, so it is unfair to penalize them for decisions or programs that seemed right at the time. That leaves only the question of how you could possibly punish a robot.
Applications of Research:
This article gives me good reasons and conclusions for the ethical problem of who is to blame for the deaths of innocent civilians.
Losing humanity: The case against killer robots. (2012, November 19). Retrieved November 15, 2015, from https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots
Summary:
This article goes over how Article 36 of Additional Protocol I to the Geneva Conventions applies to the use of autonomous weapons. These weapons would have to be reviewed and constantly monitored during development and after any modifications made once fabrication is complete. This monitoring and review is meant to ensure that the weapon in question follows international law. Many countries have agreed to this monitoring, including the United States, which is not even a party to Additional Protocol I. The article goes on to describe what these reviews have to look out for, including components that do not seem inherently harmful but could become weaponized. Article 36 also prohibits weapons that "may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated."

Applications of Research:
The article shows me how international law is applied to the creation of these new autonomous weapons. It gives me a good idea of the kind of green-lighting a weapon needs to be approved internationally.
Losing humanity: The case against killer robots. (2012, November 19). Retrieved November 15, 2015, from https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots
Summary:
This article has many parts to it, so for this report I am drawing from one small section, "Making War Easier and Shifting the Burden to Civilians." This section talks about how the introduction of these weapons would certainly lower casualties for one side of a conflict, but that decrease could be more negative than positive: war becomes a more appealing option for a country that no longer has to sacrifice its own soldiers, which would increase the number of wars in the world. This prospect would also devalue civilian loss of life. Another issue the article brings up is that the robots may have trouble distinguishing between civilians and soldiers.
Applications of Research:
I got a very different perspective on how a lowered casualty count may affect a battlefield and wars in general. I never thought that fewer casualties could lead to more war because political leaders would not be afraid to lose troops.
Marchant, G., Allenby, B., Arkin, R., Barrett, E., Borenstein, J., Gaudet, L., . . . Silberman, J. (2011). International governance of autonomous military robots. Science and Technology Law Review, 12, 275-281.
Summary:
This excerpt from the journal Science and Technology Law Review gives good background information about the current use and projected growth of robots in the United States military. The positives of AI-controlled weapons and robots are that they have faster processing capabilities than any human and that they spare human operatives from dangerous situations, thereby risking no human life (on the side using the robots) in those missions. The journal goes on to argue that an AI could be better at identifying threats because it would not be subject to emotions like fear or panic. Another point it brings up is that a robot could treat self-preservation as a lower priority than most humans would.
Application of Research:
This research gives good information on how AI could act in a better, more moral way than soldiers while preserving human life. However, I cannot help but wonder: how much chaos could a rogue AI cause, and how likely is a software glitch that causes a robot to go haywire?

Markoff, J. (2009, July 25). Scientists worry machines may outsmart man. Retrieved September 9, 2015, from New York Times website: http://www.nytimes.com/2009/07/26/science/26robot.html?_r=0
Summary:
This article, from the website of The New York Times, is about where artificial intelligence is headed. It describes the negatives and positives (mostly negatives) of AI and cites different conferences where people discussed how far AI can go and how far it should go. The article also explores the idea of what would happen if already-smart artificial intelligences reached the point of creating robots smarter than themselves, with the cycle continuing on and on. Another idea explored is the potential benefits, like an artificial intelligence that could come up with responses to cheer up depressed subjects, responses that could be more than a cut-and-paste line like "it will all get better."
Application of Research:
This article is an extremely good start for studying the ethics of true artificial intelligence. It gives good background information and a sense of where the current debate stands. It also raises questions like: should artificial intelligence research be stopped, allowed to continue unchecked, or pursued with careful curiosity?
McCormick, T. (2014, January 24). [Lethal autonomy: A brief history]. Retrieved December 14, 2015, from http://foreignpolicy.com/2014/01/24/lethal-autonomy-a-short-history/
Summary:
This article outlines a brief history of lethal autonomy, from laser-guided systems to Predator drones, and the growing concern over autonomous weapons. It starts with Leonardo da Vinci's sketch of an automaton in the late 1400s, then moves to remote-control systems and laser-guided missiles, then to the funding of a computer science and artificial intelligence program at MIT, then to the semi-automatic system that shot down a commercial airliner during the Iran-Iraq War, and finally to the possibility of truly autonomous weapons becoming a big enough concern to be taken up by the United Nations.
Applications of Research:
This gives a good look at the history and development of autonomous weapons and the precursors that led to their current point of development.
McFarland, M. (2016, January 12). Google opens up about when its self-driving cars have nearly crashed. Washington Post. Retrieved from https://www.washingtonpost.com/news/innovations/wp/2016/01/12/google-opens-up-about-when-its-self-driving-cars-have-nearly-crashed/
Summary:
This article describes the reports coming in from Google about its self-driving cars. The reports tally how many times the cars have nearly crashed (they have not yet actually crashed); a near-crash is counted whenever a person has to step in to prevent an accident. According to the report, most interventions were due to the software misperceiving its orientation, discrepancies in the software (generally having to do with sensors), or unwanted maneuvering. These incidents appear to be happening less often; however, that could be because of data manipulation.
Applications of Research:
I can use this article to talk about the possibility of glitches in software and the permanent need for constant human supervision to keep the machines in check.
Padmini. (n.d.). Types of software errors and bugs | most common software bugs. Retrieved December 20, 2015, from http://www.softwaretestingtimes.com/2010/04/types-of-software-errors-and-bugs-most.html
Summary:
This article is about the most common types of software bugs, how they show up in code, and how to fix them. It also covers the symptoms that each kind of bug causes. The types it discusses range from boundary issues to error-handling problems.
Applications of Research:
I plan on using this research to demonstrate how common bugs are in code and what kind of repercussions they have. A bug can make code do the complete opposite of what was intended, so something meant to keep a person safe could turn on them and hurt them, as the sketch below illustrates.
Patnaude, A. (n.d.). Machine guns. Wall Street Journal.
Summary:
This newspaper article, from the Wall Street Journal, is about the introduction of machine guns into World War I and their effect on it. It describes how the machine gun tore through the enemy effortlessly, and how soldiers were amazed and terrified at how easily they could kill soldier after soldier without trying.
Applications of Research:
I can use this article to begin likening the machine gun and its mass destruction to the way autonomous weapons could devastate the battlefield if introduced into war.
Pogue, D. (2015, June 14). The future of robotics and artificial intelligence [Video file]. Retrieved from http://www.cbsnews.com/videos/the-future-of-robots-and-artificial-intelligence/
Summary:
The video is about the future of robotics and, more importantly for me, artificial intelligence. The speaker talked to several experts in the field, including the creators of Siri. They discussed how the media and pop culture create a very large misunderstanding about the field of artificial intelligence. One man explained the difference between intelligence and sentience: intelligence, according to him, is the capability of something to pursue its agenda effectively and efficiently, while sentience is the ability to think, feel, and perceive subjectively. This distinction is important because it is implausible for an AI to develop emotions even as its intelligence grows. He did mention, though, that the danger behind artificial intelligence is that an AI could make itself more intelligent by creating new software, and its agenda may not be in the interest of humanity.
Applications of Research:
The research gives good reasoning for the more philosophical side of artificial intelligence. It also gives me insight into what experts in the field think about its future and whether we should fear where AI is headed.
Powers, R. (2011, February 5). What is artificial intelligence? Retrieved September 14,
2015, from The New York Times website:
http://www.nytimes.com/2011/02/06/opinion/06powers.html
Summary:
This article, from the website of The New York Times, is about an artificial intelligence that can compete with some of the best players on the game show Jeopardy!. It describes how the AI sorts through possible answers to Jeopardy! clues, which are hard for an AI because each clue is phrased as an answer and the response must be framed as a question. The system runs through enormous databases, using different sorting methods and algorithms to connect possible answers together and cross-check them to find the right one. The AI is a major breakthrough in the community because of how much processing power it can bring to bear and how quickly it can sort through thousands of libraries' worth of information for one little fact.
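To give a rough sense of the generate-and-cross-check approach described above, here is a toy sketch. It is my own illustration, nowhere near the real system's scale, and the mock sources and clue are invented: each source proposes a candidate answer, and agreement across sources acts as corroborating evidence.

```python
# Toy question answering by cross-checking candidates across sources.
# The "databases" and scoring here are invented for illustration only.
from collections import Counter

def best_answer(clue, sources):
    votes = Counter()
    for lookup in sources:
        candidate = lookup(clue)      # each source proposes a candidate
        if candidate is not None:
            votes[candidate] += 1     # agreement across sources = evidence
    if not votes:
        return None
    answer, _ = votes.most_common(1)[0]
    return "What is {}?".format(answer)  # Jeopardy! responses are questions

# Mock sources, each just a dictionary lookup keyed by the clue text.
encyclopedia = {"This planet is known as the Red Planet": "Mars"}.get
almanac      = {"This planet is known as the Red Planet": "Mars"}.get
trivia_db    = {"This planet is known as the Red Planet": "Venus"}.get  # noisy

print(best_answer("This planet is known as the Red Planet",
                  [encyclopedia, almanac, trivia_db]))
# -> "What is Mars?"
```

The real system weighs evidence with far more sophisticated scoring, but ranking candidates by corroboration across many sources is the same basic idea the article describes.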
Application of Research:
This research gives a good idea of one real-world application of AI: the sorting and searching of different types of information and answers. It also raises a bit of a question: what qualifies as an AI? Does it just have to be a computer set to run an algorithm and use the answer for something?
Prelipcea, G., Moisescu, F., & Boscoianu, M. (2010). Artificial intelligence can improve military decision making. World Scientific and Engineering Academy and Society, 34-37. Retrieved from http://ic.galegroup.com/ic/ovic/ViewpointsDetailsPage/ViewpointsDetailsWindow?failOverType=&query=&prodId=OVIC&windowstate=normal&contentModules=&displayquery=&mode=view&displayGroupName=Viewpoints&limiter=&u=hcpub_hebron&currPage=&disableHighlighting=false&displayGroups=&sortBy=&source=&search_within_results=&p=OVIC&action=e&catId=&activityType=&scanId=&documentId=GALE%7CEJ3010771225
Summary:
This article, from the Opposing Viewpoints database, is about how artificial intelligence can help commanders on the battlefield. The way it proposes AI can help is by compiling important information for decision makers to look over, so they can make decisions based on the information given. This helps a lot because the battlefield is a hectic place, and the ability to gather information, sort it into important and unimportant, and generate a set of options for attack and counter-attack is key.
Applications of Research:
This research gives me a detailed positive view of what artificial intelligence can do on a battlefield to minimize casualties.
Russell, S. (2015, May 28). Take a stand on AI weapons. Nature, 521, 415-416.
Summary:
This article, from the Comment section of the journal Nature, is about the political situation surrounding war machines controlled by artificial intelligence, like lethal autonomous weapons systems (LAWS). It describes the debate going on in the United Nations about how the use of these weapons in war should be controlled and whether they should be allowed at all. Currently there are no laws controlling these weapons; however, countries like Germany and Japan have either flatly refused any weapon of this kind or decided not to build them. Countries like the United States, Britain, and Israel, which are leading the development of these weapons, say they should be allowed, although all countries agree that there should always be a person in the equation. One major problem is that truly autonomous weapons could shoot anybody exhibiting aggressive behavior; the only restraints on such weapons might come from physics, not from the code that created the intelligence.
Applications of Research:
The article gave me a good look at the exact issues surrounding AI in war and at how the United Nations is treating them. It can be used very well for building groundwork for my thesis and arguments.
Russell, S. (2015, July 28). Tech experts warn of artificial intelligence arms race in open letter (Interview by A. Cornish) [Transcript]. Retrieved September 30, 2015, from http://www.npr.org/2015/07/28/427178289/tech-experts-warn-of-artifical-intelligence-arms-race-in-open-letter
Summary:
This interview, between NPR's Audie Cornish and Stuart Russell, a leading researcher in the field of artificial intelligence, discusses how artificial intelligence is being used in war right now and how it may cause an artificial intelligence arms race. Currently, artificial intelligence is used mostly for targeting enemies with long-range weapons, with either a human deciding whether to kill the target or the system deciding on its own, depending on the mode it is in. Cornish mentions that some people say this is a way of minimizing casualties. Russell counters that two things complicate that claim: an AI might eventually be better than humans at distinguishing threats, and the longer an arms race goes on, the less such technologies will cost. Russell says that preventing an arms race would require an international treaty restricting the use of AI weapons, which the UN is in the process of creating. He also worries about the connection pop culture draws between AI, robots, and violence.

Applications of Research:
This research is very useful for connecting issues in the political world with AI. It also gives me an idea of where the UN stands on AI being used in the military. This is a great primary resource with reasons why we should use AI and why we should not.
Stop Killer Robots. (n.d.). The problem. Retrieved October 18, 2015, from
http://www.stopkillerrobots.org/the-problem/
Summary:
This article, from the website Stop Killer Robots, outlines some of the major ethical issues and the reasons why some people are so vehemently against autonomous robots being able to kill people. One point the article brings up is that sending fewer of its own people to war could make it easier for a country to decide to go to war when it is actually unnecessary. There is also an accountability gap between a robot's victim and whoever should take the blame for the victim's injury and/or death. Another argument is that, because a robot lacks human judgment, it cannot make proper ethical choices in dynamic situations on a battlefield; lacking these qualities would make robots unable to fulfill the requirements of the laws of war.
Application of Research:
This gives a good outline of the arguments against AI and robots being used during war. It also raises the question: do humans even meet the laws of war when making ethical decisions under the stress of the battlefield?
