Armed with a budget of over $700 billion for the coming year, one that will likely continue to grow over the course of Trump's Pentagon-controlled presidency, the Pentagon's dystopian vision for the future of the military is quickly becoming a question not of if but when.
by Whitney Webb
Part 3 - Ethical killer robots
While those "unintended consequences" may not keep DARPA higher-ups like Goldblatt awake at night, concerns about the Pentagon's plan to embrace a mechanized future have been common outside of the military. In attempts to quell those concerns, the Pentagon has repeatedly assured the public that humans will always remain in control when it comes to making life-and-death decisions and that it is taking special care to prevent robots from engaging in unintended attacks or falling prey to hackers.
It has even enlisted experts to help add "ethics" so that its robotic soldiers will not violate the Geneva Conventions and will "perform more ethically than human soldiers." This may seem a rather low bar to those who are aware of the Pentagon's egregious record of human-rights violations and war crimes. If the Pentagon's use of drones is any indication, the military's "ethical" use of automated killing machines is indeed suspect.
Previous reporting has shown that those who doubt the Pentagon's professed concern over preventing "unethical" consequences resulting from its development of a robot army are right to do so. As journalist Nafeez Ahmed reported in 2016, official U.S. military documents reveal that the humans in charge of overseeing the actions of military robots will soon be replaced by "self-aware" interconnected robots, "who" will both design and conduct operations against targets chosen by artificial-intelligence systems. Not only that, but these same documents show that by 2030 the Pentagon plans to delegate mission planning, target selection and the deployment of lethal force across air, land, and sea entirely to autonomous weapon systems based on an advanced artificial-intelligence system.
If that weren't concerning enough, the Pentagon's AI system for threat assessment is set to be populated by massive data sets that include blogs, websites, and public social media posts such as those found on sites like Twitter, Facebook and Instagram. This AI system will employ such data in order to carry out predictive actions, much like the predictive-policing AI system already developed by major Pentagon contractor Palantir. The planned system that will control the Pentagon's autonomous army will also seek to "predict human responses to our actions." As Ahmed notes, the ultimate idea, as revealed by the Department of Defense's own documents, is to identify potential targets (i.e., persons of interest and their social connections) in real time by using social media as "intelligence."