Artificial intelligence: Stop Killer Robots
They will be “weapons of terror, used by terrorists and rogue states against civilian populations. Unlike human soldiers, they will follow any orders however evil,” says Toby Walsh, professor of artificial intelligence at the University of New South Wales, Australia, of the robots of the future.
“These will be weapons of mass destruction. One programmer and a 3D printer can do what previously took an army of people. They will industrialize war, changing the speed and duration of how we can fight. They will be able to kill 24-7 and they will kill faster than humans can act to defend themselves.”
Governments are meeting at the UN in Geneva on Monday for the fifth time to discuss whether and how to regulate lethal autonomous weapons systems (Laws). Also known as killer robots, these AI-powered ships, tanks, planes and guns could fight the wars of the future without any human intervention.
The Campaign to Stop Killer Robots, backed by Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman, is calling for a preemptive ban on Laws, since the window of opportunity for credible preventative action is fast closing and an arms race is already in full swing.
“In 2015, we warned that there would be an arms race to develop lethal autonomous weapons,” says Walsh. “We can see that race has started. In every theatre of war – in the air, on the sea, under the sea and on the land – there are prototype autonomous weapons under development.”
Ahead of the meeting, the US has argued that rather than trying to “stigmatise or ban” Laws, innovation should be encouraged, and that use of the technology could actually reduce the risk of civilian casualties in war. Experts, however, fear the systems will not be able to distinguish between combatants and civilians and act proportionally to the threat.
France and Germany have been accused of shying away from tough rules by activists who say Europe should be leading the charge for a ban. A group of non-aligned states, led by Venezuela, is calling for the negotiation of a new international law to regulate or ban killer robots. The group seeks general agreement from states that “all weapons, including those with autonomous functions, must remain under the direct control and supervision of humans at all times”.
The new global arms race
Fully autonomous weapons do not yet exist, but high-ranking military officials have said the use of robots will be widespread in warfare in a matter of years. At least 381 partly autonomous weapon and military robotics systems have been deployed or are under development in 12 states, including China, France, Israel, the UK and the US.
Automatic systems, such as Israel’s Iron Dome and mechanised sentries in the Korean demilitarised zone, have already been deployed but cannot act fully autonomously. Research by the International Data Corporation suggests global spending on robotics will more than double from $91.5bn in 2016 to $188bn in 2020, bringing full autonomy closer to realisation.
The US, the frontrunner in the research and development of Laws, has cited autonomy as a cornerstone of its plan to modernise its army and ensure its strategic superiority across the globe. This has caused other major military powers to increase their investment in AI and robotics, as well as in autonomy, according to the Stockholm International Peace Research Institute.
“We very much acknowledge that we’re in a competition with countries like China and Russia,” says Steven Walker, director of the US Defense Advanced Research Projects Agency, which develops emerging technologies and whose 2018 budget was increased by 27% last year.
The US is currently working on the prototype for a tail-less, unmanned X-47B aircraft, which will be able to land and take off in extreme weather conditions and refuel in mid-air. The country has also completed testing of an autonomous anti-submarine vessel, Sea Hunter, that can stay at sea for months without a single person onboard and is able to sink other submarines and ships. A 6,000kg autonomous tank, Crusher, is capable of navigating incredibly difficult terrain and is advertised as being able to “tackle almost any mission imaginable”.
The UK is developing its own unmanned vehicles, which could be weaponised in the future. Taranis, an unmanned combat aerial vehicle named after the Celtic god of thunder, can avoid radar detection and fly in autonomous mode.
Russia, meanwhile, is amassing an arsenal of unmanned vehicles, both in the air and on the ground; commentators say the country sees this as a way to compensate for its conventional military inferiority compared with the US. “Whoever leads in AI will rule the world,” said Vladimir Putin, the recently re-elected Russian president, last year. “Artificial intelligence is the future, not only for Russia, but for all humankind.”
Russia has developed a robot tank, Nerehta, which can be fitted with a machine gun or a grenade launcher, while its semi-autonomous tank, the T-14, will soon be fully autonomous. Kalashnikov, the Russian arms manufacturer, has developed a fully automated, high-calibre gun that uses artificial neural networks to choose targets.
China has various similar semi-autonomous tanks and is developing aircraft and seaborne swarms, but information on these projects is tightly guarded. “As people are still preparing for a high-tech war, the old and new are becoming intertwined to become a new form of hidden complex ‘hybrid war’,” wrote Wang Weixing, a Chinese military research director, last year.
“Unmanned combat is gradually emerging. While people have their heads buried in the sand trying to close the gap with the world’s military powers in terms of traditional weapons, technology-driven ‘light warfare’ is about to take the stage.”
Pandora’s box
According to the Campaign to Stop Killer Robots, these systems threaten to become the “third revolution in warfare”, after the invention of gunpowder and nuclear bombs; once this Pandora’s box is opened, it will be difficult to close.
“Inanimate machines cannot understand or respect the value of life, yet they would have the power to determine when to take it away,” says Mary Wareham, the campaign coordinator. “Our campaign believes that machines should never be permitted to take human life on the battlefield or in policing, border control, or any other circumstances.”
Supporters of a ban say fully autonomous weapons are unlikely to be able to fully comply with the complex and subjective rules of international humanitarian and human rights law, which require human understanding and judgment as well as compassion.
Pointing to the 1997 ban on landmines, now one of the most widely accepted treaties in international law, and the ban on cluster munitions, which has 120 signatories, Wareham says: “History shows how responsible governments have found it necessary in the past to supplement the limits already provided in the international legal framework due to the significant threat posed to civilians.”
It is believed that the weaponisation of artificial intelligence could bring the world closer to apocalypse than ever before. “Imagine swarms of autonomous tanks and jet fighters meeting on a border and one of them fires in error or because it has been hacked,” says Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, who first wrote about the reality of robot war in 2007.
“This could automatically invoke a battle that no human could understand or untangle. It is not even possible for us to know how the systems would interact in conflict. It could all be over in minutes with mass devastation and loss of life.”
SOURCE: The Guardian