Healthcare professionals oppose lethal autonomous weapons systems
As advances in machine learning and artificial intelligence continue at an ever-increasing rate, there are growing concerns over the potential development of lethal autonomous weapons systems (LAWS), commonly known as ‘killer robots’. Such systems are defined as any weapon capable of targeting and initiating the use of potentially lethal force without direct human supervision or direct human involvement in lethal decision-making. Several countries, including the UK, are developing these weapons for military use, setting the stage for an imminent arms race. The emergence of these technologies would represent the complete automation of lethal harm, which AI experts fear would mark a third revolution in warfare, following gunpowder and nuclear weapons. LAWS would radically violate the ethical principles and moral code that are integral to our profession, necessitating urgent and collective action from the entire healthcare community.
Concerns around LAWS
The prospect of a world with LAWS generates ethical, legal and diplomatic apprehensions. These technologies would bring dire humanitarian consequences and geopolitical destabilisation. They would make possible the anonymous selection of human targets in the absence of human oversight, amplifying encoded human biases while excluding human morality. Targets could be chosen by their perceived age, gender, ethnicity, facial features, style of dress, gait pattern, social media use, or even their home address or place of worship.
Once in existence, LAWS could be produced rapidly, cheaply, and at scale, marking the advent of a novel weapon of mass destruction that could be widely stockpiled and deployed in great numbers. Their digital nature would render them vulnerable to cyber hacking and malfunction, while the absence of human input makes legal accountability for their actions highly unclear. They would be vulnerable to acquisition on the black market, use in assassination and ethnic cleansing, and integration into non-militarised state activities such as law enforcement and border control. Without a human ‘in the loop’, LAWS would dehumanise conflict and lower the threshold for entering warfare, while their lethality could increase without limit as recursively self-improving algorithms acquire ever more data through repeated cycles of search-identify-engage.
Healthcare’s history of advocacy against inhumane weapons
The healthcare community has played a key role in establishing the bans on chemical, biological and certain conventional weapons (landmines) that are currently in force. More recently, the collective voice of global healthcare, led by the International Physicians for the Prevention of Nuclear War, played a pivotal role in the campaign to ban nuclear weapons, culminating in the 2017 Treaty on the Prohibition of Nuclear Weapons. The success of our advocacy derives from our moral authority and professional credibility on the devastating humanitarian consequences of warfare and inhumane weapons.
Why healthcare must oppose LAWS
In line with our commitment to ‘do no harm’, the healthcare community must denounce the development of LAWS on grounds of their moral abhorrence. As healthcare professionals, we believe that scientific progress should only be used to benefit society and should not be used to automate harm. Humans should never entirely hand over to machines any decision regarding human life, especially a decision to end it.
We are becoming increasingly familiar with the role of AI in healthcare. These technologies highlight the importance of human involvement in clinical decision-making, especially in situations of context and ambiguity, to reduce the risk of unintentional bias and iatrogenic harm. We therefore use AI to augment, rather than replace, human decision-making that impacts human life. In doing so, we maintain that humans cannot be replaced by algorithms in the prevention of harm, a position in stark contrast to the mandate of LAWS, which are purposefully designed to replace human judgement in the decision to inflict harm. We are, therefore, morally obliged to resist any world in which LAWS exist.
Why this is urgent
To date, no lethal autonomous weapons system has been developed. The technology to do so, however, is advancing at breakneck speed. The world stands on the brink of an arms race, with the UK, US and Russia amongst the wealthy nations poised in the starting blocks. It will be significantly more challenging to ensure safety standards and legal regulations around these weapons if we enter an arms race scenario.
A call to action
The healthcare community has a history of successful advocacy for weapons bans, is well positioned to describe the humanitarian effects of weapon use, understands the risks of automation in decision-making, and is experienced in promoting preventative action. Despite this, our profession has been conspicuously absent from the conversation around LAWS. This is due neither to indifference nor a lack of moral outrage, but rather to a general unawareness of the situation at hand. Meanwhile, the UK government not only opposes a ban on autonomous weapons, but actively contributes to their development by ploughing public money into next-generation digital autonomy, machine learning, and artificial intelligence.
As a collective voice, the healthcare community must strongly support the UN’s efforts to pre-emptively ban killer robots. Our representative bodies and royal colleges must urgently declare their formal standpoints and lobby the UK government to act on this issue of existential importance. With support from across the healthcare community, our collective voice will not go unheard. Please sign this petition to add your own.
Campaign organiser: Dr Rich Armitage (Twitter: @drricharmitage)