Google Employee Opposition Derails Military AI Project
Google will not seek another contract for its controversial work providing artificial intelligence to the U.S. Department of Defense for analyzing drone footage after its current contract expires.
Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract, Greene said. The meeting, dubbed Weather Report, is a weekly update on Google Cloud’s business.
Google would not choose to pursue Maven today because the backlash has been terrible for the company, Greene said, adding that the decision was made at a time when Google was more aggressively pursuing military work. The company plans to unveil new ethical principles about its use of AI next week. A Google spokesperson did not immediately respond to questions about Greene’s comments.
Google’s decision to provide artificial intelligence to the Defense Department for the analysis of drone footage has prompted backlash from Google employees and academics. Thousands of employees have signed a petition asking Google to cancel its contract for the project, nicknamed Project Maven, and dozens of employees have resigned in protest.
Google, meanwhile, has defended its work on Project Maven, with senior executives noting that the contract is relatively small and that the company's contribution amounts to little more than providing the Defense Department with open-source software.
But internal emails reviewed by Gizmodo show that executives viewed Project Maven as a golden opportunity that would open doors for business with the military and intelligence agencies. The emails also show that Google and its partners worked extensively to develop machine learning algorithms for the Pentagon, with the goal of creating a sophisticated system that could surveil entire cities.
The two sets of emails reveal that Google’s senior leadership was enthusiastically supportive of Project Maven—especially because it would set Google Cloud on the path to win larger Pentagon contracts—but deeply concerned about how the company’s involvement would be perceived. The emails also outline Google’s internal timeline and goals for Project Maven.
Working on Project Maven presented Google Cloud with a challenge: the company would need to use footage gathered by military drones to build its machine learning models, but it lacked the official government authorization to hold that kind of sensitive data in its cloud.
That authorization, known as FedRAMP, establishes security standards for cloud services that contract with the government. But Google didn’t have it—so it had to rely on other geospatial imagery for its early work on Project Maven. According to an email written by Aileen Black, an executive director overseeing Google’s business with the U.S. government, Project Maven sponsored Google’s application for higher levels of FedRAMP authorization, Security Requirements Guide 4 and 5. “They are really fast tracking our SRG4 ATO (security cert),” she wrote. “This is priceless.”
In late March of this year, Google announced that it had been granted provisional FedRAMP 4 authorization to operate, or ATO. “With this ATO, Google Cloud Platform has demonstrated its commitment to extend to government customers,” Suzanne Frey, Google’s director of trust, security, privacy, and compliance, told reporters during a press call.
Obtaining this authorization was crucial not just for Project Maven, but for Google’s future pursuit of other government contracts. Google is reportedly competing for a Pentagon cloud computing contract worth $10 billion.
Greene had told concerned employees during meetings that Google's contract with the Department of Defense was worth only $9 million, Gizmodo first reported—a relatively small figure as far as government contracts go.
However, internal emails reviewed by Gizmodo show that the initial contract was worth at least $15 million, and that the budget for the project was expected to grow as high as $250 million. This set of emails, first reported by the New York Times, shows senior executives in Google Cloud worrying about how Google's involvement in Project Maven would be perceived once it became public.
In another set of emails not previously made public, Google employees working on Project Maven described meeting with Lieutenant General Jack Shanahan, who has spearheaded the Maven initiative, and other government representatives at Google's offices. These emails outline technical milestones for Maven and detail Google's in-depth efforts to develop the technology.
Google secured the Project Maven contract in late September, the emails reveal, after competing for months against several other “AI heavyweights” for the work. IBM was in the running, as Gizmodo reported last month, along with Amazon and Microsoft. One of the terms of Google’s contract with the Defense Department was that Google’s involvement not be mentioned without the company’s permission, the emails state.
“It gives me great pleasure to announce that the US Undersecretary of Defense for Intelligence—USD(I)—has awarded Google and our partners a contract for $28M, $15M of which is for Google ASI, GCP, and PSO,” Scott Frohman, a defense and intelligence sales lead at Google, wrote in a September 29, 2017 email. “Maven is a large government program that will result in improved safety for citizens and nations through faster identification of evils such as violent extremist activities and human right abuses. The scale and magic of GCP [Google Cloud Platform], the power of Google ML [machine learning], and the wisdom and strength of our people will bring about multi-order-of-magnitude improvements in safety and security for the world.”
Other emails describe meetings in late 2017 with Pentagon representatives at Google’s Mountain View and Sunnyvale offices. “Customer considers Cloud AI team the core of the MAVEN program, where everything else will be built to test and deploy our ML models,” one message reads. Google planned to deliver the product of its work at the end of March, and continue refining it through June.
The company also assigned more than 10 of its employees to work on Project Maven. When Gizmodo reported Google’s involvement in the project earlier this year, Google downplayed its work, saying it had merely provided its open-source TensorFlow software to the Pentagon.
However, Google intended to build a “Google-earth-like” surveillance system that would allow Pentagon analysts to “click on a building and see everything associated with it” and build graphs of objects like vehicles, people, land features, and large crowds for “the entire city,” states one email recapping a Maven kickoff meeting with Pentagon representatives. Google’s artificial intelligence would bring “an exquisite capability” for “near-real time analysis,” the email said.
By December, Google had already demonstrated a high accuracy rate in classifying images for Project Maven. Working with imagery from the geospatial firm DigitalGlobe and data labeling from the artificial intelligence firm CrowdFlower, Google was able to build a system that could detect vehicles missed by expert image labelers.
“Customer’s leadership team was extremely happy with your work, your active participation, and the early results we demonstrated using validation dataset,” Reza Ghanadan, a senior engineering program manager at Google, wrote. “Among other things, the results showed several cases that with 90+% confidence the model detected vehicles which were missed by expert labelers.”
Despite the excitement over Google’s performance on Project Maven, executives worried about keeping the project under wraps. “It’s so exciting that we’re close to getting MAVEN! That would be a great win,” Fei-Fei Li, chief scientist for AI at Google Cloud, wrote in a September 24, 2017 email. “I think we should do a good PR on the story of DoD collaborating with GCP from a vanilla cloud technology angle (storage, network, security, etc.), but avoid at ALL COSTS any mention or implication of AI.”
“Google is already battling with privacy issues when it comes to AI and data; I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry,” she added.
Greene told employees today that the conversation about the ethical use of artificial intelligence is huge and that Google is at the forefront of that conversation. “It is incumbent on us to show leadership,” Greene said, according to a source.