Last week was not a good week for gun-related crime. Recent fatal shootings of black men sparked a peaceful protest in Dallas, and during that protest a 25-year-old opened fire on police officers, killing five of them. The sniper, Micah X. Johnson, was killed by a bomb-disposal robot that had been jury-rigged to deliver an explosive, the first time police in the United States have ever used a robot to kill a suspect. The worry is that this could be the beginning of robots being used to kill.
After negotiations between officers and the sniper, an Army veteran, broke down, Dallas Police Chief David Brown sent in a bomb-disposal robot. Robots like these, such as the MARCbot, are not intended to be used as bomb-delivery vehicles, so a defensive tool was turned into an offensive weapon. The robot carried an explosive on its extending arm and detonated it at the sniper's position; Johnson died in the blast.
According to Engadget, the Dallas PD's action is the first time a robot has been used by police in the United States to kill a suspect. It comes at a terrible time, as police are already being scrutinized for military-style tactics. In 2014, a police officer shot and killed Michael Brown, an 18-year-old black man, in Ferguson, Missouri.
Dr. Peter Singer, a futurist and defense technology expert, has stated that troops in Iraq have deployed robots and tactics similar to those used in Dallas. Most people are not critical of the lethal force used to stop the sniper, but the method could open the door to technology being used for killing on domestic soil.
According to Gizmodo, Eugene Volokh, a law professor at UCLA, said that the use of robots doesn't change the decision process police should go through when deciding to use deadly force. Meanwhile, drone technology has advanced rapidly in the past few years, and drones can be armed.
To be clear, the Dallas robot was not autonomous; a human being controlled it the entire time. What is unsettling is how much more autonomous robots are becoming, and how the machines we are developing feel like something out of science fiction.
If Google is building cars that can drive themselves, then artificial intelligence (A.I.) is already being constructed to make decisions behind the wheel. Somehow the car has to know what to do when a person runs out into the road.
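To make that concrete, here is a minimal sketch, in Python, of the kind of decision logic involved. It is purely illustrative: real self-driving systems rely on learned perception and planning models, and every name and number below is an assumption for the example, not Google's actual code.

```python
# A toy, rule-based stand-in for the "decisions behind the wheel"
# described above. All names and thresholds here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Obstacle:
    kind: str          # e.g. "pedestrian" or "vehicle"
    distance_m: float  # distance ahead of the car, in meters
    in_lane: bool      # whether it sits in the car's path

def decide_action(speed_mps: float, obstacle: Optional[Obstacle]) -> str:
    """Choose a driving action when something appears in the road."""
    if obstacle is None or not obstacle.in_lane:
        return "continue"
    # Rough stopping distance: reaction + braking. The 0.5 s reaction
    # time and 6 m/s^2 braking rate are assumed values for illustration.
    stopping_m = speed_mps * 0.5 + (speed_mps ** 2) / (2 * 6.0)
    if obstacle.kind == "pedestrian":
        # Brake hard if the car cannot stop before reaching the person.
        if obstacle.distance_m <= stopping_m:
            return "emergency_brake"
        return "slow_and_yield"
    return "brake"

# A person steps into the road 20 m ahead of a car doing ~30 mph.
print(decide_action(13.4, Obstacle("pedestrian", 20.0, True)))  # emergency_brake
```

Even this toy version shows why the problem is hard: the "right" answer depends on physics, on classification of what the obstacle is, and on thresholds somebody had to choose.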
Now, this is the real question: if a robot can drive a car, can one be taught to fight a battle? Boston Dynamics is working on the SpotMini, a four-legged robot that looks like a dog and has a manipulator arm. There is no reason to believe that arm couldn't hold a gun. The question is whether it will ever have the A.I. to know when to fire and when not to.
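One hedge against that worry is to never let the machine make the call at all. The sketch below, again illustrative Python with entirely hypothetical names, shows a human-in-the-loop gate of the kind Dallas effectively used: the robot can propose an action, but nothing fires without an explicit, affirmative decision from a human operator.

```python
# A hypothetical human-in-the-loop gate: the robot never decides to
# fire on its own; a human must explicitly authorize every action.
from typing import Callable

def request_fire_authorization(
    target_id: str,
    operator_confirms: Callable[[str], bool],
) -> bool:
    """Return True only if a human operator explicitly says yes.

    `operator_confirms` stands in for whatever channel asks the
    operator; any error, timeout, or silence counts as "no".
    """
    try:
        return operator_confirms(target_id) is True
    except Exception:
        # Fail safe: an error is never treated as authorization.
        return False

# Simulated operator who declines; no weapon release happens.
print(request_fire_authorization("suspect-01", lambda target: False))  # False
```

The design choice is the point: autonomy can live in perception and movement while the use-of-force decision stays with a person.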