Police in Ogden, Utah, and small cities around the US are using these surveillance technologies | MIT Technology Review
Police departments want to know as much as they legally can. But does ever-greater surveillance technology serve the public interest?
At a conference in New Orleans in 2007, Jon Greiner, then the chief of police in Ogden, Utah, heard a presentation by the New York City Police Department about a sophisticated new data hub called a “real time crime center.” Reams of information rendered in red and green splotches, dotted lines, and tiny yellow icons appeared as overlays on an interactive map of New York City: Murders. Shootings. Road closures.
In the early 1990s, the NYPD had pioneered a system called CompStat, since widely adopted by large police departments around the country, that aimed to discern patterns in crime data. With the real time crime center, the idea was to go a step further: What if dispatchers could use the department’s vast trove of data to inform the police response to incidents as they occurred?
In 2021, it might be simpler to ask what can’t be mapped. Law enforcement agencies today have access to powerful new engines of data processing and association. Police agencies in major cities are already using facial recognition to identify suspects—sometimes falsely—and deploying predictive policing to define patrol routes.
Around the country, the expansion of police technology has followed a similar pattern, driven more by conversations between police agencies and their vendors than by conversations between police and the public they serve. The question is: Where do we draw the line? And who gets to decide?