These tools are typically presented as questionnaires, ranging from simple questions about a person's age, education level and substance abuse history to more complex evaluations of personality and judgment. The responses are then measured against a database of past offenders to estimate the likelihood that a person will commit future crimes.
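In broad strokes, and using invented features and numbers rather than any vendor's actual formula, the idea can be sketched as a statistical model trained on past cases: each questionnaire answer becomes a numeric input, and the model maps those inputs to a probability of reoffending.

```python
# Hypothetical sketch of a questionnaire-based risk score. The features,
# data and model below are invented for illustration only; they do not
# reflect COMPAS or any real tool, whose details are proprietary.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [age, prior arrests, years of education], plus
# whether the person was later charged with a new offense (1 = yes).
past_offenders = np.array([
    [19, 3, 10],
    [45, 0, 16],
    [23, 5, 11],
    [37, 1, 12],
    [29, 4, 9],
    [52, 0, 14],
])
reoffended = np.array([1, 0, 1, 0, 1, 0])

# Fit a simple model to the historical outcomes.
model = LogisticRegression()
model.fit(past_offenders, reoffended)

# Score a new defendant's questionnaire responses against that history.
new_defendant = np.array([[26, 2, 11]])
risk = model.predict_proba(new_defendant)[0, 1]
print(f"Estimated probability of reoffending: {risk:.2f}")
```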
Advocates of these tools argue that in certain instances they can decide the fate of convicted criminals more fairly than judges and parole boards, whose decisions are often swayed by personal biases. Computer-based analyses, some argue, are colorblind and free of prejudice. They can also help reduce prison populations by identifying low-risk offenders who are unlikely to commit future crimes and should therefore receive shorter sentences or forgo incarceration altogether.
A number of states and local jurisdictions are already using these tools to help determine everything from when parole should be granted to appropriate sentence lengths.
Ohio, for example, developed a set of statewide risk assessment tools used throughout the criminal process, from pretrial to parole. A number of private companies also develop and sell them to various jurisdictions, including California, which has used a system called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) for nearly a decade to inform decisions about rehabilitation programs for prisoners and parolees.
But those skeptical of risk assessment techniques note that they are too often implemented without sufficient independent review or validation. In some cases, the companies making the tools are also the ones evaluating how well they predict criminal behavior. Major flaws in the design, critics argue, can produce tools that are extremely influential but ultimately inaccurate, with detrimental consequences for the people whose fates they determine. A number of recent investigations have also questioned whether these tools lessen racial bias in criminal justice decision making, or in some cases actually perpetuate it.
ProPublica, for one, analyzed a COMPAS tool used in Broward County, Fla., and found significant bias against black defendants. The investigation examined the risk assessment scores of 7,000 people arrested in the county in 2013 and 2014, analyzing how accurately the scores predicted who would be charged with a new crime within the following two years. Black defendants, it found, were incorrectly flagged as future criminals at almost twice the rate of white defendants.
Northpointe, the Michigan company that created the tool, was quick to rebut ProPublica's analysis, defending its system as fair and evidence-based. And a follow-up Washington Post analysis suggested that the issue was less clear-cut than ProPublica made it seem, noting that "at the heart of their disagreement is a subtle ethical question: What does it mean for an algorithm to be fair?"
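One way to see the disagreement is with a back-of-the-envelope example. The numbers below are invented, not Broward County's: when two groups reoffend at different underlying rates, a score can be equally accurate about the people it flags in each group while still wrongly flagging a much larger share of one group.

```python
# Invented counts to illustrate the competing fairness definitions;
# these are not ProPublica's or Northpointe's actual figures.
# For each hypothetical group: tp = flagged high risk and reoffended,
# fp = flagged but did not reoffend, fn = not flagged but reoffended,
# tn = not flagged and did not reoffend.
groups = {
    "group A": {"tp": 300, "fp": 200, "fn": 100, "tn": 400},
    "group B": {"tp": 120, "fp": 80,  "fn": 40,  "tn": 760},
}

for name, g in groups.items():
    # Precision (one notion of fairness): of those flagged high risk,
    # how many actually reoffended?
    precision = g["tp"] / (g["tp"] + g["fp"])
    # False positive rate (another notion): of those who did NOT
    # reoffend, how many were flagged high risk anyway?
    fpr = g["fp"] / (g["fp"] + g["tn"])
    print(f"{name}: precision = {precision:.2f}, false positive rate = {fpr:.2f}")

# Both groups have the same precision (0.60), so the score looks fair by
# one definition -- yet group A's false positive rate (0.33) is far higher
# than group B's (0.10), which looks unfair by the other definition.
```

In this made-up example the two conclusions are not contradictory; they simply answer different questions, which is the tension the Post's analysis pointed to.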
In other words, the jury's still out.