On Tuesday, a privacy and security report published by Gizmodo revealed that Google and the Pentagon are collaborating on analyzing drone footage. Known as Project Maven, the Department of Defense pilot project involves analyzing, combing through, defining, and categorizing visual data amassed by aerial drones. It wouldn’t be too far off to say the project would function as the Pentagon’s all-seeing eye.
According to Greg Allen, an adjunct fellow at the Center for a New American Security, the amount of footage already obtained is so vast that human analysts at the defense agency cannot sift through it and correctly identify the objects it contains. As it stands, the United States’ drone strike program is already criticized by human rights groups like Reprieve for reportedly killing hundreds of civilians in Pakistan, Afghanistan, Yemen, and beyond, despite claims of "surgical” precision from former CIA director John Brennan in 2011. With the help of Google's artificial intelligence resources, the Defense Department will apparently be able to correctly process the footage its drones collect. Think vehicles, buildings, and human beings.
But there's no guarantee machine learning will always correctly identify the objects it is tasked with seeing. Between mathematician Cathy O'Neil's book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy" and ProPublica's investigative reports on algorithms used in the American criminal justice system to supposedly “predict” recidivism, it becomes distressingly clear that relying on machine learning for security purposes can do vulnerable people more harm than good.
Technically, the collaboration reportedly centers on APIs (application programming interfaces). A Google spokesperson told Gizmodo that the company would be providing TensorFlow APIs for the Department of Defense's Project Maven. TensorFlow is Google's open-source machine learning framework; its APIs let developers build and train models that can, among other things, recognize objects in images. The spokesperson noted that Google was working on developing “policies and safeguards” around possible military use.
In the statement, the Google spokesperson noted, “We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data. The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”
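The statement describes a system that does not act on its own: a model scores images, and anything it is reasonably confident about is queued for a human analyst to verify. A minimal sketch of that flag-for-review logic might look like the following. The labels, scores, and threshold here are hypothetical stand-ins, not details from the report, and the actual TensorFlow model that would produce such scores is not shown.

```python
# Hypothetical flag-for-human-review step: assume a trained model (not
# shown) has already assigned each image a label and a confidence score.
# Detections above a cutoff are queued for an analyst to verify; nothing
# is acted on automatically.

REVIEW_THRESHOLD = 0.6  # assumed cutoff, chosen for illustration

def flag_for_review(detections, threshold=REVIEW_THRESHOLD):
    """Return the subset of detections a human analyst should review."""
    return [d for d in detections if d["score"] >= threshold]

# Example model output for three drone-footage frames (invented data).
detections = [
    {"image": "frame_001.jpg", "label": "vehicle", "score": 0.91},
    {"image": "frame_002.jpg", "label": "building", "score": 0.42},
    {"image": "frame_003.jpg", "label": "person", "score": 0.77},
]

for d in flag_for_review(detections):
    print(d["image"], d["label"])
```

The point of the sketch is the division of labor the spokesperson describes: the software narrows millions of frames down to candidates, and a person makes the call on each one.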
While observers outside of Google and the Pentagon seem concerned about the nexus of tech and the American military-industrial complex, Gizmodo reported that some Google employees, speaking anonymously, also had qualms about the collaboration, which was first revealed on an internal mailing list.
That said, this isn't the first time Google has come under scrutiny for offering services to a federal agency. A 2017 report in Quartz shed light on the origins of Google, describing how a significant amount of the company's early funding came from the CIA and NSA for mass surveillance purposes. Time and again, Google's funding raises questions. In 2013, a Guardian report highlighted Google's acquisition of the robotics company Boston Dynamics, and noted that most of that company's projects were funded by the Defense Advanced Research Projects Agency (DARPA).
As the Pentagon sits atop a $700 billion budget for 2018, signed into law by President Trump and larger than that of any other agency, concerns about its collaboration with Google have been abundant on social media. As several Twitter users cynically asked, “What could possibly go wrong?”