The attempted murder of a former Russian spy is rightly condemned. Yet Britain advocates the execution of its own citizens in the Middle East. It’s sheer hypocrisy
In 2015 a British student from Cardiff, Reyaad Khan, was killed in Syria by an RAF drone strike, presumably “piloted” from Lincolnshire. A House of Commons report later accepted that he was “orchestrating and inciting” terrorist attacks in Britain, but could not establish how imminent the attacks were, or the legal basis for his killing. His British associate, Junaid Hussain, was killed by an American drone. Two years later, Hussain’s widow, Sally Jones, and their 12-year-old son were similarly wiped out. No trial preceded these executions of British citizens on foreign soil. They died by executive action for being deemed a threat to national security. If someone in Moscow took the same view of the Russian spy Sergei Skripal, what is the difference?
The foreign secretary, Boris Johnson, implied this week that Britain was now a victim of Russian “acts of war”, notably cyber-attacks. He implied that if the Skripal case were traced to the Kremlin, he would “look again at sanctions”. He is right that the murder of anyone on a British street is terrible and, if sanctioned by a foreign power, is, in diplomatic jargon, “unacceptable”. But murder is a criminal act against individuals. It is silly to conflate it with an act of war.
DoD’s Project Maven uses tech firm’s TensorFlow artificial intelligence systems, prompting debate both inside and outside company
Google’s artificial intelligence technologies are being used by the US military for one of its drone projects, causing controversy both inside and outside the company.
Google’s TensorFlow AI systems are being used by the US Department of Defense’s (DoD) Project Maven, which was established in July last year to use machine learning and artificial intelligence to analyse the vast amount of footage shot by US drones. The initial intention is to have AI analyse the video, detect objects of interest and flag them for a human analyst to review.
Experts say action must be taken to control artificial intelligence tech
Wanton proliferation of artificial intelligence technologies could enable new forms of cybercrime, political disruption and even physical attacks within five years, a group of 26 experts from around the world has warned.
In a new report, the experts, drawn from academia, industry and the charitable sector, describe AI as a “dual-use technology” with potential military and civilian applications, akin to nuclear power, explosives and hacking tools.
Countering a multi-drone strike, like that seen recently in Syria, should be prioritised in protecting civilian and military air space
Russia said on 5 January that it had repelled an attack by a swarm of drones targeting a Russian airbase in north-western Syria and a naval station on the Mediterranean Sea. The multi-drone attack, suspected to have been launched by militants, is the first of its kind, and represents a new threat from terrorist groups.
The swarm attack demonstrates that militants now possess a capability previously limited to states: controlling and coordinating several commercial drones at once using GPS. This development may send viewers of the science-fiction series Black Mirror into hiding, but it should prompt professional militaries to double down on countermeasures, specifically electronic jamming technology.