Big Data, Cops, and Criminality

In the short story and movie “Minority Report,” police are authorized to arrest, and prosecutors to convict, people for the crimes that predictive technology says they will commit in the future. Although the thought of arresting, trying, and convicting individuals for what they have not yet done seems far-fetched, the basic structural and technological prerequisites for such a world are already in place. What does this mean?

Over the past few years, crime prevention through big data has undergone a transition from merely interfacing with data to using it predictively. Up until now, law enforcement could check individual samples against DNA and fingerprint databases, arrest records, that kind of thing. But now there’s an emerging norm of using “predictive analytics algorithms to identify broader trends.” And yes, this is a step closer to a “Minority Report” world, where police predict where crime is likely to occur based on what we might call “moving” or trending data rather than snapshots of particular markers and records like fingerprints and DNA.

Areas where predictive software is used have experienced remarkable results: a “33% reduction in burglaries, 21% reduction in violent crimes and a 12% reduction in property crime.” But there’s even more at stake here. Because such policing is preventative rather than reactive, less crime also means fewer confrontations between police and suspects, including innocent suspects erroneously targeted by police in the old world, where cops wait to see suspicious behavior and then act on it in the midst of uncertainty. There are other implications as well: for detective work, for dealing with serial offenders, for assessing the need for future resources. Big data won’t necessarily improve relations between residents and police outright, but smarter policing may translate into less confrontational policing and an improved public perception of police effectiveness.

And so many police forces now see the biggest challenge as teaching police how to use the technology the right way. A recent British report on big data’s use in policing, published by the Royal United Services Institute for Defence and Security Studies (RUSI), said British forces already have access to huge amounts of data but lack the capability to use it. This is unfortunate because, at least as researchers and developers see it, the technique is straightforward. In the words of Alexander Babuta, who conducted the British research: “The software itself is actually quite simple – using crime type, crime location and date and time – and then based on past crime data it generates a hotspot map identifying areas where crime is most likely to happen.”
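To make that description concrete, here is a minimal sketch of a grid-and-count hotspot model along the lines Babuta describes. Everything specific (the grid size, the 30-day recency weighting, the field names) is an illustrative assumption, not a description of any actual policing product.

```python
from collections import Counter
from datetime import datetime

# A grid-and-count hotspot sketch: bucket past incidents into grid cells,
# weight recent incidents more heavily, and report the highest-scoring cells.
GRID_SIZE = 0.005  # degrees of latitude/longitude per cell; an assumed value

def cell_for(lat, lon):
    """Snap a coordinate to its grid cell."""
    return (round(lat / GRID_SIZE), round(lon / GRID_SIZE))

def hotspots(incidents, now, top_n=5):
    """Score cells by recency-weighted counts of past incidents.

    Each incident is a dict with 'lat', 'lon', 'when' (a datetime), and
    'crime_type' -- the same inputs Babuta names. A weight that halves
    every 30 days lets fresher incidents dominate the map.
    """
    scores = Counter()
    for inc in incidents:
        age_days = (now - inc["when"]).days
        scores[cell_for(inc["lat"], inc["lon"])] += 0.5 ** (age_days / 30)
    return scores.most_common(top_n)

# Illustrative data: three clustered recent burglaries outrank an older, lone theft.
incidents = [
    {"lat": 51.5010, "lon": -0.1410, "when": datetime(2019, 6, 1), "crime_type": "burglary"},
    {"lat": 51.5012, "lon": -0.1412, "when": datetime(2019, 6, 3), "crime_type": "burglary"},
    {"lat": 51.5009, "lon": -0.1408, "when": datetime(2019, 6, 5), "crime_type": "burglary"},
    {"lat": 51.5200, "lon": -0.1000, "when": datetime(2019, 1, 15), "crime_type": "theft"},
]
print(hotspots(incidents, now=datetime(2019, 6, 10)))
```

The point of even this toy version is that the model knows nothing about causes; it simply projects recorded history forward, which is exactly why the data it is trained on matters so much.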

So we might be led to think that the only challenge left is training police forces to use big data predictively, and that doing so will decrease crime without resorting to aggressive policing or the regressive, socially destructive “broken windows” policing of Giuliani-era New York, where policymakers attempted to harness individual acts of police intimidation in the service of an overall social perception of a crime-free city. Data accuracy is also critical: in our work with client Accurate Append, we find that demographic, email, and other contact data are missing or incomplete in data files across industries.
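As a rough illustration of the accuracy problem, here is a minimal sketch that audits how complete each contact field is across a file. The field names and sample records are hypothetical.

```python
# A minimal completeness audit for a contact file.
# Field names and sample records are hypothetical, for illustration only.
REQUIRED_FIELDS = ["name", "email", "phone", "zip"]

def completeness_report(records):
    """Return the share of records with a non-empty value for each field."""
    total = len(records)
    return {
        field: sum(1 for r in records if r.get(field)) / total
        for field in REQUIRED_FIELDS
    }

records = [
    {"name": "A. Smith", "email": "asmith@example.com", "phone": "", "zip": "98101"},
    {"name": "B. Jones", "email": "", "phone": "555-0100", "zip": ""},
]
for field, share in completeness_report(records).items():
    print(f"{field}: {share:.0%} complete")
```

A report like this only flags the gaps, of course; filling them correctly is the harder problem.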

The jury is still out on the use of algorithmic big data as a crime prevention tool. To begin with, predictive policing can be perceived as just as oppressive as reactive policing. Predicting that certain areas are prone to future crime almost certainly means putting up video cameras, possibly with controversial facial recognition technology, in these “risky” areas. And the construct of the “high-risk area,” as produced by data interpretation, risks being just as laden with racist or other assumptions as policing itself often is.

After all, we know that big data is not immune to the racism and other stereotyping of its human keepers. And what if, in an effort to politically manipulate the landscape of city policing, politicians and appointees distort or misinterpret the conclusions of long-term trends, or of short-term spikes in crime, to continue the over-policing of oppressed communities? This is an emerging concern among civil liberties advocates in the UK and the U.S.

Another concern, expressed by the editorial staff at the British paper The Guardian, is that in addition to predicting trends in particular areas, police are also using this interpretive technology “on individuals in order to predict their likelihood of reoffending” — which gets us even closer to “Minority Report” status. At the very least, “it is easy to see that the use of such software can perpetuate and entrench patterns of unjust discrimination.” Or worse, many fear. And, to make perhaps an obvious but necessary point, “the idea that algorithms could substitute for probation officers or the traditional human intelligence of police officers is absurd and wrong. Of course such human judgments are fallible and sometimes biased. But training an algorithm on the results of previous mistakes merely means they can be made without human intervention in the future . . . Machines can make human misjudgments very much worse.”
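The Guardian’s entrenchment worry can be made concrete with a toy simulation. In the sketch below (every number is an illustrative assumption), two areas have the same true crime rate, but one starts with more recorded crime because it was historically over-patrolled, and patrols are allocated in proportion to recorded crime. The initial skew never washes out; it compounds, with no human intervention required.

```python
import random

# Toy feedback-loop simulation: crime is only recorded where patrols go,
# and patrols follow recorded crime, so a historical skew entrenches itself
# even though both areas have the SAME underlying crime rate.
random.seed(0)
TRUE_RATE = 0.1                 # identical true crime rate in both areas
recorded = {"A": 30, "B": 10}   # area A starts over-policed, so over-recorded

for year in range(10):
    total = sum(recorded.values())
    for area in recorded:
        patrols = int(100 * recorded[area] / total)  # patrols follow the data
        # each patrol observes a crime with the same true probability
        recorded[area] += sum(1 for _ in range(patrols) if random.random() < TRUE_RATE)

print(recorded)  # A's recorded "lead" persists and grows despite equal true rates
```

Nothing in the loop is malicious; the entrenchment falls out of training on the record of past policing rather than on crime itself.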

Perhaps the most interesting take on the social dangers of big data use in policing comes from Janet Chan, professor of law at the University of New South Wales, in an academic paper for Criminology and Criminal Justice. Chan writes that “data visualization, by removing the visibility of real people or events and aestheticizing the representations in the race for attention, can achieve further distancing and deadening of conscience in situations where graphic photographic images might at least garner initial emotional impact.” In other words, seeing only data instead of the faces of victims and offenders, or the social and neighborhood contexts of property crimes, or the living dynamics of domestic violence cases, risks making law enforcement, and perhaps policymakers, less engaged and empathetic towards the public. Chan cites other scholars’ work to suggest that visual representation and face-to-face encounters, however imperfect, are necessary forms of social engagement.

Constituent Communication Research: A Snapshot from Long Past

Political culture has changed a great deal, and this is not a “get off my lawn” post. In fact, as alienating and uncivil as much current political discourse seems, it has a directness and candor that earlier eras lacked, eras that can feel artificial and stuffily elitist by comparison.

Take, for example, a research article published back in a 1969 issue of the Journal of Politics. Titled “The Missing Links in Legislative Politics: Attentive Constituents,” the article by G. R. Boynton, Samuel C. Patterson, and Ronald D. Hedlund sought to describe a kind of constituent that was a cut above the rest, part of the “thin stratum” between the masses and “the upper layer of the political elite,” and seen as a critical sub-elite maintaining democratic dialogue. Curiously beginning in what they admitted was “a particular intra-elite context,” the scholars observed that both “attentive constituents and legislators differed markedly from the general adult population in terms of occupational status . . .” and that these special constituents were in constant communication with legislators and even recruited people to run for office.

Today, even if we acknowledge that some citizens are more engaged than others, and that people benefiting from education and stable material lives can share their privileges by proactively participating in political and civic life, we are rightly hesitant to paint such citizens as part of a superior substratum. We know that poor and working class people engage too when they can, and that community engagement is often (though admittedly not often enough) facilitated by civic, religious, and political interest groups across a wider economic and demographic range than was supposed fifty years ago.

We also know that high-level involvement doesn’t automatically correlate with helpfulness or the strengthening of democracy. We know that elite groups often engineer a great deal of spin, and that both privileged and disadvantaged populations are vulnerable to misinformation. Involvement and access are more complicated than Boynton et al.’s worldview reflected.

Political science and communication scholars carried different assumptions back then — and even began with different questions. Today, much of the research is geared towards identifying bad hierarchies, undesirable ways in which constituent access is blocked or limitations are set on how communication may occur between people and the leaders they elect. This may include how letters and emails are processed, such as in Matthew J. Geras and Michael H. Crespin’s study, published this year, concluding that high-ranking staffers answer socially powerful constituents, while “[l]etters from women and families . . . are more likely to be answered by lower-ranked staffers. These results are important,” the authors conclude, “because they reveal that even something as simple as constituent correspondence enters a type of power hierarchy within the legislative branch where some individuals are advantaged over others.” Mia Costa’s dissertation, published last year, gives an interesting corollary conclusion: Not only are female constituents devalued, but female legislators are held to an unfairly high standard by their own supporters, including supporters who believe more women in elected office would be desirable. “In fact,” Costa argues, “it is individuals that hold the most positive views of women that then penalize them when they do not provide quality responsiveness to constituents” — a fascinating conclusion that invites further study.

Current research also suggests that elected officials have a sore spot when dealing with constituents who engage in what James N. Druckman and Julia Valdes call “private politics,” or what others would call “direct action”: attempts to influence change outside of the legislative process through boycotts, strikes, and other direct or demonstrative tactics. Druckman and Valdes report finding that “a constituent communication that references private politics vitiates legislative responsiveness . . . reference to private politics decreases the likelihood of constituent engagement among both Republican and Democratic legislators.” The authors think these findings call for collective, foundational “conversations about how democracies work,” since elected officials ought to appreciate, rather than be intimidated or irritated by, extra-electoral constituent action.

And through all of this, as the OpenGov Foundation study suggests, much of Congress still uses very old communication management technology; one researcher likens it to “entering a time machine.” Beyond not looking at the power hierarchies of gender and class, the 1969 study also didn’t anticipate the challenges of staffing in a world of scarce resources: “When advocacy groups target thousands of calls or emails at a single member of Congress, it’s these low-level and in some cases unpaid interns and junior staffers they inundate.” Simply put, handling these communications without CRM software built specifically for government is a nightmare.
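To make the scale problem concrete, here is a minimal sketch of one piece of triage logic such a CRM might implement: collapsing thousands of near-identical campaign form letters into a single queue item, so each unique message needs only one drafted reply. The fields and the exact-match fingerprint heuristic are illustrative assumptions, not a description of any real product.

```python
import hashlib
from collections import defaultdict

def campaign_key(message):
    """Fingerprint the body text so identical form letters group together."""
    normalized = " ".join(message["body"].lower().split())  # case/whitespace-insensitive
    return hashlib.sha256(normalized.encode()).hexdigest()

def triage(inbox):
    """Group messages by fingerprint; each group needs one reply, not thousands."""
    queues = defaultdict(list)
    for msg in inbox:
        queues[campaign_key(msg)].append(msg)
    return queues

inbox = [
    {"from": "a@example.com", "body": "Please support the clean water bill."},
    {"from": "b@example.com", "body": "Please  support the clean water BILL."},
    {"from": "c@example.com", "body": "I oppose the zoning change on Elm St."},
]
for key, msgs in triage(inbox).items():
    print(f"{len(msgs)} message(s) in campaign {key[:8]}")
```

Real systems would need fuzzier matching than this, but even exact-match grouping shows why purpose-built tooling, not a shared inbox, is the baseline requirement.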

The questions today’s constituent communication researchers ask are thus very different from those of 1969, which asked whether some special elite civic group exists to influence political leadership and how educated and well-connected such constituents are. Today’s research strikes at the heart of material and cultural power imbalances. Until those imbalances are corrected, we need scholars and advocates to continue asking tough questions about practical democracy.