If we have learned anything in the last decade about our criminal justice system, it is how astonishingly dysfunctional it is. Extensive investigations have revealed persistent racial disparities at every stage, a different kind of justice for the haves and the have-nots, and a system that neither rehabilitates individuals nor ensures public safety. In short, the system is in crisis.
Rather than scrapping everything and starting anew, many criminal justice stakeholders have turned to technology to repair the breach through “risk assessment tools.” Also labeled artificial intelligence, automated decision-making, or predictive analytics, these tools have been touted as having the potential to save a broken system, and they now play a role at nearly every critical stage of the criminal justice process. If we’re not careful, however, these tools may exacerbate the same problems they are ostensibly meant to solve.
It begins on the front lines of the criminal justice system: policing. Law enforcement agencies have embraced predictive analytics, which purport to pinpoint areas prone to criminal activity by examining historical patterns, and then deploy officers to those areas. In Chicago, for example, predictive tools analyze complex social networks through publicly accessible data in an attempt to forecast likely perpetrators and victims of violent crime.
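To make the mechanics concrete, here is a deliberately simplified sketch of place-based prediction. It describes no real product: the grid-cell IDs and incident records are invented, and actual systems use far richer models than a simple count.

```python
# A minimal sketch of hotspot-style place-based prediction, assuming only
# historical incident counts per map grid cell. All data here is
# hypothetical and for illustration only.
from collections import Counter

# Hypothetical historical incidents, each tagged with the grid cell
# where it was recorded.
historical_incidents = ["A3", "B1", "A3", "C2", "A3", "B1", "D4", "B1", "A3"]

# "Prediction" is extrapolation from the past: the cells with the most
# recorded incidents are flagged for extra patrols.
counts = Counter(historical_incidents)
hotspots = [cell for cell, _ in counts.most_common(2)]
print(hotspots)  # ['A3', 'B1']
```

Even this toy version hints at the feedback problem: the model sees only where incidents were recorded, so heavier patrols in a neighborhood generate more data points there, which in turn raises that neighborhood's future score.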
Once an individual is arrested, they are likely to be subjected to a pre-trial risk assessment tool. Such tools inform the decision of a judge who must choose whether to incarcerate that person pending trial or release them. Pre-trial risk assessments attempt to predict which of the accused will fail to appear in court or will be rearrested. Some states also use similar tools at the sentencing and parole stages, in an attempt to predict the likelihood that someone will commit a new offense if released from prison.
While all of this technology may seem to hold great promise, it can also come with staggering costs. The potential for bias to creep into the deployment of these tools is enormous. Simply put, the devil is in the data. Risk assessment tools generally rely on historical, actuarial data. Often, that data relates to the behavior of a class of people, such as individuals with criminal records. Sometimes it relates to the characteristics of a neighborhood. That information is run through an algorithm, a set of instructions that tells a computer model what to do. In the case of risk assessment tools, the model produces a forecast of the probability that an individual will engage in some particular behavior.
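As an illustration of that pipeline, the sketch below fits a simple statistical model (a logistic regression, a common choice for tools of this kind, though no specific product is implied) to invented historical records and asks it for a probability of rearrest. The features, labels, and scores are all hypothetical.

```python
# A minimal sketch of the kind of model described above: an algorithm fit
# to historical, actuarial data that outputs a probability of some future
# behavior (here, rearrest). Everything below is invented for
# illustration; real tools use many more inputs and different weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [prior_arrests, age_at_arrest].
X = np.array([[0, 35], [1, 29], [4, 22], [0, 41], [3, 25], [5, 19]])
# Outcome labels drawn from the same historical data:
# 1 = rearrested, 0 = not. If past enforcement was biased, that bias
# is baked into these labels before any math happens.
y = np.array([0, 0, 1, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# The "risk score" for a new individual is just the model's estimated
# probability that the historical outcome repeats.
print(model.predict_proba(np.array([[2, 24]]))[0, 1])
```

Nothing in the code distinguishes who actually reoffended from who was simply arrested again; the model faithfully reproduces whatever patterns, including biased ones, the historical data contains.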