Shifting From Tasers To AI, Axon Wants To Use Terabytes Of Data To Automate Police Records And Redactions
Above Photo: From MuckRock.com
Axon, one of the most prominent suppliers of tech tools to law enforcement, is shifting from selling its signature Tasers to embedding artificial intelligence in police departments around the country. The company changed its name from Taser in 2017, part of a plan outlined by CEO Rick Smith to de-emphasize the electrical stun guns in favor of body cameras, data storage and management products.
Axon claims as customers a majority of the largest police departments, with access to a self-reported 40 petabytes of law enforcement data. Now it is using AI to develop ways of capitalizing on that trove, offering systems to help law enforcement combine data from camera footage, records, and weapons.
The effort, according to the company, is aimed at streamlining the record management process, allowing agencies to plumb their data more easily and to collect and analyze it in real time.
“Our goal is to disrupt the entire manual data entry process,” Axon CEO Rick Smith told shareholders during a November earnings call. “We see the real value of records is in the data, not in the form filling software. We have the largest dataset in public safety,” Smith said. The goal is to integrate data, ultimately allowing automatic extraction of information to create an incident report or to make redactions before a public release.
Because of its vast data trove – Smith has said it is the largest compilation of law enforcement data in the nation – the company has the potential to inform and influence AI models for policing. But as questions are raised about the potential for bias and infringement on civil liberties in the use of AI, Axon has moved to establish an ethics panel. The Axon AI and Policing Technology Ethics Board is supposed to review company decisions and examine ethical questions about the way the company is using artificial intelligence. The board’s creation was announced in April. It is composed of academics, digital rights advocates, and law enforcement officers, and meets twice a year.
Jeremy Gillula, who works at the Electronic Frontier Foundation and serves as an Ethics Board member in his personal capacity, told MuckRock the ethics board’s role is advisory and that it will not be in a position to directly affect policy.
“The ethics board does not have the power to force, as far as I know, anyone at Axon to do anything. But, again, more our power is the ability to speak,” he said. He said communities should establish local policies for guiding adoption of surveillance technologies. “I agree that it’s not Axon’s place to set what those policies should be, but I would strongly encourage Axon to, when selling this technology, strongly encourage their customers, maybe in training material to say, ‘Have you thought about what your policy is? Have you posted it publicly?’”
Data collection and algorithmic decision making in law enforcement are increasingly facing scrutiny from civil rights advocates, who have questioned whether there are built-in biases in facial recognition and other AI systems that unfairly affect minorities and marginalized groups.
Among tech companies, there have been efforts to show some internal accountability for the ethical challenges posed by their technologies. In September 2016, a collection of companies – including Amazon, Facebook, IBM, and Microsoft as founding members – announced the Partnership on AI, a self-described commitment to advancing AI and assessing its downsides and impact on civil society. Google and Microsoft, which has also called for federal oversight of AI, already had established internal AI ethics boards but so far have not provided much information about the groups’ membership or what they actually do. Even Amazon, which had defended its work on AI and sales of facial recognition technology to governments, released a statement recently in support of legislative regulations of AI.
It is not uncommon for companies in particular sectors to ask the government for guidelines and regulations as a way to preempt possible legal issues.
“A lot of times when we look at regulation, we think this is regulation that companies will oppose. But, in fact, industries often ask for regulation when it becomes clear that there are difficulties with the way that they’re doing business and when it’s difficult for them to coordinate among themselves without running afoul of the antitrust laws or something like that,” said Professor Barry Friedman, director of New York University’s Policing Project and an Axon AI Ethics Board member. “I think it’s responsible of companies like Amazon and Microsoft to realize that this is a burgeoning problem and to realize that they don’t want to be forced into competition with companies that are not as concerned about ethics and the impact on the real world and to therefore call for regulation.”
Axon has been packaging its products to encourage customers to get comfortable with its cloud storage, software, and data management offerings. When it announced its 2017 name change, the company offered body cameras at no charge to any U.S. police agency, along with a yearlong subscription to Evidence.com, which is where Axon stores the footage. For Axon, the benefit is a subscription-based model generating recurring revenue and even more data and video footage to help it refine its facial recognition software and other programs that rely on AI.
“What we’ve realized is that these cameras could automate all the information flow of policing,” Axon CEO Rick Smith told The New Yorker last year.
Axon has been upfront about the increasingly large share of its revenue from cloud computing offerings, demand for which has grown as more police departments use cameras in daily operations and are required to retain, and sometimes quickly search, the footage.
“Axon is focused on developing AI that will simplify the writing of police reports based on body-worn camera footage, and make the footage that is already stored within the Axon Evidence (Evidence.com) platform searchable based on objects and actions that our software detects within the video,” Axon spokesperson Carley Partridge wrote via email. “The primary goal is to build the algorithms in such a way so that the system can learn how to perform the manual, labor-intensive behaviors like categorizing videos or producing timelines that chronologically sequence events within videos.”
In February 2017, Axon acquired Dextro and Misfit, two video analysis companies, with the explicit plan to focus on AI applications in analysis of the data collected by police departments. In 2018, it acquired Vievu, a company focused on cameras and cloud-based evidence management for law enforcement. In addition to its collection of customer agency data, Axon is also building its workflow AI using an AI training center it opened in 2018.
“Regulating AI is one thing; regulating the aggregation and the sharing of databases is another,” Friedman said. “And that’s going to be a hugely critical issue to all of our lives in the future – what are the controls put on aggregation in terms of who has access and under what circumstances or even whether aggregation should happen at all.”
Axon currently offers speech transcription, gunshot detection, vehicle recognition, and “critical event recognition,” in which sudden changes in speed or environment can trigger alerts. In particular, Axon has been working on its ability to redact faces from video, a technology that relies on facial detection – locating faces in a frame – rather than facial recognition, and is not intended to identify individuals.
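Mechanically, this kind of redaction has two stages: a detector finds face bounding boxes in each frame, and those regions are then overwritten so identities are unrecoverable. The sketch below illustrates only the second, overwrite stage on a toy grayscale frame; the boxes are assumed to come from some face detector, and nothing here reflects Axon's actual implementation.

```python
# Illustrative redaction step: given (x, y, w, h) face boxes from a
# hypothetical detector, blank out each region of a 2D grayscale frame.
def redact(frame, boxes, fill=0):
    """Overwrite each box in `frame` (list of pixel rows) with `fill`."""
    for x, y, w, h in boxes:
        for row in range(y, min(y + h, len(frame))):
            for col in range(x, min(x + w, len(frame[0]))):
                frame[row][col] = fill
    return frame

# A 4x4 "frame" of brightness 9, with one detected face in the top-left 2x2.
frame = [[9] * 4 for _ in range(4)]
redacted = redact(frame, [(0, 0, 2, 2)])
```

In a real pipeline the fill would typically be a blur or pixelation rather than a solid block, and the boxes would be tracked across frames so a face stays covered as it moves.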
Partridge said the Ethics Board has met twice so far and that a sub-group is currently considering issues specific to facial recognition.
“This group is working through hypothetical use cases where Face Matching, or identifying faces against a database, could potentially benefit law enforcement, while working through the ethical and privacy implications of implementing the technology, performing a cost/benefit analysis, and discussing safeguards that can prevent abuse of the technology,” she wrote, echoing concerns the company’s CEO voiced in August, when he said that “facial recognition technology is not yet accurate enough to be used in a law enforcement application.”
“We are, however, conducting research and analysis around Face Matching to determine whether and how the technology should be implemented in the future. This working group will report back to the entire Board at our April meeting.”