SAN FRANCISCO – Axon Enterprise's Draft One product, which uses generative artificial intelligence to write police report narratives based on body-worn camera audio, seems designed to stymie any attempts at auditing, transparency, and accountability, an Electronic Frontier Foundation (EFF) investigation has found.
The investigation – based on public records obtained from dozens of police agencies already using Draft One, Axon user manuals, and other materials – found that the product offers meager oversight features. As a result, when a police report includes biased language, inaccuracies, misinterpretations, or lies, there is no record showing whether the culprit was the officer or the AI. This makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes over time.
“Police should not be using AI to write police reports,” said EFF Senior Policy Analyst Matthew Guariglia. “There are just too many questions left unanswered about how AI would translate the audio of situations, whether police will actually edit those drafts, and whether the public will ever be able to tell what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might lead to problems in an already unfair and untransparent criminal justice system.”
Axon’s Tasers, body-worn cameras, and surveillance technology bundles are used by thousands of police agencies, and the company is using those existing relationships to heavily promote Draft One. Many more cities are expected to deploy this AI in the next few years.
EFF’s investigation found that Draft One does not save the draft it generates, nor any subsequently edited versions. Rather, an officer copies the AI-drafted text and pastes it into the police report, and the draft disappears as soon as the window closes.
Nothing remains that would let judges, defense attorneys, or the public know which parts of a report, if any, were written by AI and which were written by the officer, except for the officer's own recollection. And if an officer generated a Draft One report multiple times, there's no way to tell whether the AI interpreted the audio differently each time.
Axon has promoted its “audit log” function as the primary transparency measure. However, after obtaining examples of this data, EFF found that it shed little light on how the technology is used. Nonetheless, EFF has also released a guide to what records may be obtainable under public records laws.
“As AI technology proliferates in policing, it’s crucial that journalists, researchers, and advocates try to get these records to not only identify poor police practices, but also to highlight the structural gaps in accountability,” EFF Director of Investigations Dave Maass said.
For EFF’s Axon Draft One investigation: https://www.eff.org/deeplinks/2025/07/axons-draft-one-designed-defy-transparency
For EFF’s guide to requesting public records about Axon Draft One: https://www.eff.org/deeplinks/2025/07/effs-guide-getting-records-about-axons-ai-generated-police-reports