Can Algorithms Help Make the U.S. Criminal Justice System Less Biased?

The U.S. has a seriously bloated prison population. We lock up people at a higher rate than any other country in the world, with roughly 1 out of every 140 Americans currently behind bars.

And that rate is even higher for certain minority populations: African Americans, for example, are incarcerated at five times the rate of whites, according to The Sentencing Project.

Jailing that many people isn't cheap. It costs roughly $80 billion a year, far more than what we spend on many other essential public services. California, for instance, shells out more than $70,000 per inmate but less than $12,000 per K-12 student.

The ethical and financial dilemmas inherent in the current state of mass incarceration have spurred recent pushes for reform aimed at addressing racial disparities and reducing the sheer number of people we lock up.

One strategy that's gained traction in recent years is the use of risk assessment tools. Similar to popular video and social platforms like Netflix and Facebook, these tools rely on computer algorithms to make predictions about future behavior. But whereas the Netflix algorithm analyzes user data to predict what videos a viewer might want to watch next, risk assessment tools analyze data to predict the likelihood of someone committing a future crime.

These tools are typically presented as questionnaires, ranging from simple questions about a person's age, education level and substance abuse history, to more complex evaluations of personality and judgment. The responses are then measured against a database of past offenders to determine the likelihood of a person committing future crimes.
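COMPAS and similar commercial tools are proprietary, so their exact scoring is not public. But the basic mechanics can be sketched with a toy example: questionnaire answers are converted into points, and the point total is bucketed into the low/medium/high risk bands that judges and parole boards see. The questions, weights and cutoffs below are invented purely for illustration.

```python
# A minimal, hypothetical sketch of a questionnaire-based risk score.
# The weights and cutoffs are made up; real tools like COMPAS use
# proprietary models trained on large databases of past offenders.

WEIGHTS = {
    "age_under_25": 2,            # younger defendants scored as higher risk
    "prior_arrests": 1,           # points per prior arrest
    "substance_abuse": 2,
    "no_high_school_diploma": 1,
}

def risk_score(answers: dict) -> int:
    """Sum weighted questionnaire answers into a raw risk score."""
    score = 0
    score += WEIGHTS["age_under_25"] if answers["age"] < 25 else 0
    score += WEIGHTS["prior_arrests"] * answers["prior_arrests"]
    score += WEIGHTS["substance_abuse"] if answers["substance_abuse"] else 0
    score += WEIGHTS["no_high_school_diploma"] if not answers["hs_diploma"] else 0
    return score

def risk_band(score: int) -> str:
    """Bucket the raw score into the bands decision-makers actually see."""
    if score <= 2:
        return "low"
    elif score <= 5:
        return "medium"
    return "high"

example = {"age": 22, "prior_arrests": 3, "substance_abuse": False, "hs_diploma": True}
print(risk_band(risk_score(example)))  # -> "medium" under these invented weights
```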

Advocates of these tools argue that in certain instances they can decide the fate of convicted criminals more fairly than can judges and parole boards, whose decisions are often swayed by personal biases. Computer-based analyses, some argue, are colorblind and free of prejudice. They can also help reduce prison populations by identifying low-risk offenders who are unlikely to commit future crimes and should therefore receive shorter sentences or forgo incarceration altogether.

A number of states and local jurisdictions are already using these tools to help determine everything from when parole should be granted to appropriate sentence lengths.

Ohio, for example, developed a set of statewide risk assessment tools used throughout the criminal process, from pretrial to parole. A number of private companies also develop and sell them to various jurisdictions, including California, which has used a system called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) for nearly a decade to inform decisions about rehabilitation programs for prisoners and parolees.

But those skeptical of risk assessment techniques note that they are too often implemented without sufficient independent review or validation. In some cases, the companies making the tools are also the ones evaluating how good they are at predicting criminal behavior. Major flaws in the design, critics argue, can produce tools that are extremely influential but inaccurate, with detrimental consequences for the people whose fates they ultimately help determine. A number of recent investigations have also questioned whether these tools lessen racial bias in criminal justice decision-making, or in some cases actually perpetuate it.

ProPublica, for one, analyzed a COMPAS tool used in Broward County, Fla., and found that it produced results significantly biased against black defendants. The investigation looked at the risk assessment scores of 7,000 people arrested in the county in 2013 and 2014, and checked how accurately those scores predicted who would be charged with a new crime over the following two years. Black defendants, it found, were incorrectly flagged as future criminals at almost twice the rate of white defendants.
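At the core of that finding is an error-rate comparison: among people who did not go on to reoffend, what fraction of each group had nonetheless been labeled high risk? A simplified sketch of that calculation, using a tiny made-up dataset rather than ProPublica's actual Broward County records, might look like this:

```python
# A simplified sketch of the kind of error-rate check ProPublica ran.
# The records below are invented; ProPublica used real COMPAS scores
# and two years of follow-up arrest records from Broward County.

# Each record: (race, flagged_high_risk, reoffended_within_two_years)
records = [
    ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("black", True,  False),
    ("white", True,  True),  ("white", False, False),
    ("white", False, False), ("white", True,  False),
]

def false_positive_rate(group: str) -> float:
    """Among people in `group` who did NOT reoffend, what share
    were nonetheless flagged as high risk?"""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
# In this toy data, the rate for black defendants (0.67) is twice the
# rate for white defendants (0.33) -- the pattern ProPublica reported.
```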

Northpointe, the Michigan company that created the tool, was quick to rebut ProPublica's analysis, defending its system as fair and evidence-based. And a follow-up Washington Post analysis suggested that the issue was less clear-cut than ProPublica made it seem, noting that "at the heart of their disagreement is a subtle ethical question: What does it mean for an algorithm to be fair?"
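One way to see the tension the Post describes: a score can be equally well "calibrated" for two groups (the same share of people it flags actually go on to reoffend) and still produce different false positive rates, simply because the groups reoffend at different underlying rates. The numbers below are made up solely to illustrate that arithmetic.

```python
# Made-up numbers showing why two reasonable fairness definitions can
# conflict. Assume the tool is equally calibrated in both groups: 60%
# of the people it flags as high risk go on to reoffend. With different
# base rates of reoffending, the false positive rates still differ.

def false_positive_rate(n, base_rate, recall=0.7, precision=0.6):
    """FPR for a group of size n, given its reoffense base rate, the share
    of reoffenders the tool flags (recall) and its calibration (precision)."""
    reoffenders = n * base_rate
    flagged_reoffenders = reoffenders * recall
    total_flagged = flagged_reoffenders / precision  # precision = reoffenders among flagged
    flagged_non_reoffenders = total_flagged - flagged_reoffenders
    non_reoffenders = n - reoffenders
    return flagged_non_reoffenders / non_reoffenders

# Same tool behavior (recall 0.7, precision 0.6), different base rates:
print(round(false_positive_rate(1000, base_rate=0.5), 2))  # ~0.47
print(round(false_positive_rate(1000, base_rate=0.3), 2))  # ~0.20
```

In other words, under these assumptions a tool that treats flagged individuals identically in both groups still wrongly flags more non-reoffenders in the group with the higher base rate, which is why both sides of the dispute can point to numbers that support them.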

In other words, the jury's still out.
