SSN Basic Facts

Ethical Issues Raised by the Increasing Deployment of Autonomous, Computer-Driven Military Technologies and Weapons

United States Military Academy at West Point

In the United States and beyond, militaries are starting to develop and field autonomous systems of weapons, software, and other capabilities that can (according to one formal definition from the Multinational Capability Development Campaign) “complete a task without human intervention, using behaviors resulting from the interaction of computer programming with the external environment.” The possibility of widespread use of autonomous weapons by militaries led the United Nations to agree to discuss the issue in 2017. For this discussion to be productive, it must address the ethical benefits and concerns as well as practical issues of cost and effectiveness.

Ethical Benefits of Autonomous Systems

Autonomous military systems have the potential to reduce risks to both military members and civilians, thus furthering the core value of protecting human life.

  • The use of autonomously driven military vehicles may eliminate errors due to human driver fatigue or distraction, thus reducing risks to bystanders, passengers, and the drivers themselves.
  • Autonomous systems may improve certain aspects of humanitarian missions, benefiting the civilians who are being assisted and reducing risks to military members. Using autonomous systems to search dangerous areas or perform a high-risk task, such as bomb disposal or clearing a house, eliminates risk for military personnel.
  • Autonomous weapons systems have the potential to target combatants more accurately, reducing risks to nearby civilians even as risks are also reduced for the military members who use the weapons. Like other autonomous systems, autonomous weapons are not affected by fatigue and do not react emotionally when fellow soldiers are hurt – two factors that occasionally lead to civilian harm, whether unintentional or intentional.

Additional possible benefits have to do with costs. Over time, the use of autonomous military systems may offer significant cost savings, if missions can be conducted with similar or greater efficiency than is possible with non-autonomous systems. This is an important consideration, because defense spending consumes a significant portion of national budgets, and savings could be redirected to other priorities. Cost reductions could come from lower training expenses and fewer vehicle accidents. What is more, if autonomous weapons can target opposing military forces and their assets more accurately, their deployment could reduce costly harm to civilians and destruction of infrastructure.

Ethical Concerns

Despite possible ethical benefits, many also worry about the use of autonomous military systems – especially autonomous weapons. A primary concern is that allowing a machine to “decide” to kill a human being undermines the value of human life. From this perspective, human life is of such significant value that it is never appropriate for a machine to “decide” to end a life. Additionally, many critics doubt that autonomous systems can accurately and reliably discriminate between combatants and noncombatants. Even if such systems can make these distinctions, critics question whether an autonomous weapon can appropriately calculate whether an attack is proportionate – that is, whether it will cause unnecessary suffering. Many argue that autonomous weapons inherently lack the capacity for moral reasoning necessary to make judgments about taking lives, whether directly or as collateral damage from an otherwise appropriate attack.

Autonomous military systems can be hacked – a vulnerability that could make even unarmed vehicles dangerous if opposing forces were to gain control of them. Even more significant damage could result if opposing forces were able to wrest control of autonomous weapons.

Effects on human conduct have also attracted ethical attention. What happens if members of the military become accustomed to engaging targets only from a technologically determined distance? This is not an entirely new concern, because modern weapons such as long-range missiles and drones already allow military forces to engage targets from afar. Nevertheless, autonomous technology creates the possibility of further disengagement, because humans are no longer making the decisions to engage targets. Greater disengagement could increase unethical conduct by military personnel – and make governments more willing to launch military actions.

Autonomous systems raise new questions about responsibility for unintentional harms. With earlier technologies, individual equipment operators could be held liable for mistakes, such as careless truck driving that harms civilians or actions that cause excessive collateral damage. When autonomous military systems are deployed, it becomes less clear how to apportion responsibility.

Policy Recommendations

Ethical considerations must be front and center in the development and use of autonomous military systems, especially weapons, in the following ways:

  • In discussions based on agreed-upon definitions of autonomy, artificial intelligence, agency, intention, unmanned systems, and other key terms, officials should seek consensus about the technical advantages and cost savings (if any) offered by autonomous systems. Progress toward consensus will require transparency of results from testing these systems.
  • Ethical clarity requires specifying which systems actually make decisions – as opposed to others that simply allow military members to operate from greater distances.
  • Responsibility for accidental harm or property damage caused by autonomous military systems must be clearly assigned.
  • Deliberations should include rigorous examination of the value of human life and of whether the taking of lives by autonomous systems is ever ethically acceptable.