Predictive policing is dangerous, and yet its use among law enforcement agencies is growing. Advocates of predictive policing, along with the companies that make millions selling the technology to police departments, like to claim that because the technology is based on “data,” it cannot be racially biased. But this technology will disproportionately hurt Black and other overpoliced communities, because the data was created by a criminal punishment system that is racially biased. For example, a data set of arrests, even one nominally devoid of any racial information, can still be dangerous because police make a disparately high number of arrests in Black neighborhoods.
Technology can never predict crime. Rather, it invites police to regard with suspicion people who have been victims of crime, or who live and work in places where crime has been committed in the past.
For all these reasons and more, EFF has argued that law enforcement agencies should be banned from using this technology, and some cities across the United States have already enacted such bans.
Now, a group of federal elected officials is raising concerns about the dangers of predictive policing. Sen. Ron Wyden penned a probing letter to Attorney General Garland asking how the technology is used. He is joined by Rep. Yvette Clarke, Sen. Ed Markey, Sen. Elizabeth Warren, Sen. Jeff Merkley, Sen. Alex Padilla, Sen. Raphael Warnock, and Rep. Sheila Jackson Lee.
They ask, among other things, whether the U.S. Department of Justice (DOJ) has done any legal analysis to determine whether the use of predictive policing complies with the Civil Rights Act of 1964. It’s clear that the senators and representatives are concerned with the harmful legitimizing effect “data” can have on racially biased policing: “These algorithms, which automate police decisions, not only suffer from a lack of meaningful oversight over whether they actually improve public safety, but are also likely to amplify prejudices against historically marginalized groups.”
The elected officials are also concerned about how many jurisdictions the DOJ has helped to fund predictive policing programs and the data collection required to run them, and whether these programs are measured in any meaningful way for efficacy, reliability, and validity. This is especially important because many of the algorithms in use are withheld from public scrutiny on the assertion that they are proprietary and operated by private companies. Recently, an audit by the state of Utah found that the state had contracted with a company for surveillance, data analysis, and predictive AI, yet the company actually had no functioning AI and was able to hide that fact inside the black box of proprietary secrets.
You can read the rest of the questions the elected officials asked of the Attorney General in the full letter below.