Data scientists and civil rights groups are raising the alarm about bias in algorithms that determine everything from who goes to jail to how much your insurance will cost.
By Christopher Mims
Lawyers for Eric Loomis stood before the Supreme Court of Wisconsin in April 2016 and argued that their client had experienced a uniquely 21st-century abridgment of his rights: Mr. Loomis had been discriminated against by a computer algorithm.
Three years earlier, Mr. Loomis was found guilty of attempting to flee police and operating a vehicle without the owner’s consent. During sentencing, the judge consulted COMPAS (aka Correctional Offender Management Profiling for Alternative Sanctions), a popular software system from a company called Equivant. It weighs factors including indications a person abuses drugs, whether they have family support and age at first arrest to estimate how likely someone is to commit another crime.
The sentencing guidelines didn’t require the judge to impose a prison sentence. But COMPAS said Mr. Loomis was likely to be a repeat offender, and the judge gave him six years.
An algorithm is just a set of instructions for how to accomplish a task. They range from simple computer programs, defined and implemented by humans, to far more complex artificial-intelligence systems, trained on terabytes of data. Either way, human bias is part of their programming. Facial recognition systems, for instance, are trained on millions of faces, but if those training databases aren’t sufficiently diverse, they are less accurate at identifying faces with skin colors they’ve seen less frequently. Experts fear that could lead to police forces disproportionately targeting innocent people who are already under suspicion solely by virtue of their appearance.
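The mechanism behind that fear can be sketched in a few lines: when a system’s match threshold is tuned on data dominated by one group, it ends up fitted to that group’s statistics. The toy simulation below makes the point with a one-number “face score” and invented distributions — the groups, sample sizes and noise levels are illustrative assumptions, not measurements of any real recognition system.

```python
import random

random.seed(42)

# Toy face verification: decide "same person?" by thresholding a distance
# score. Genuine pairs cluster near 0, impostor pairs near 2. The score is
# noisier for group B, standing in for a model trained mostly on group-A
# faces. All numbers here are invented for illustration.
SIGMA = {"A": 0.3, "B": 0.6}

def pairs(group, n):
    """Return n genuine pairs (label 1) and n impostor pairs (label 0)."""
    s = SIGMA[group]
    genuine = [(abs(random.gauss(0.0, s)), 1) for _ in range(n)]
    impostor = [(abs(random.gauss(2.0, s)), 0) for _ in range(n)]
    return genuine + impostor

# Imbalanced tuning set: 95% of the pairs come from group A.
tuning = pairs("A", 950) + pairs("B", 50)

def errors(threshold, data):
    # Predict "same person" whenever the distance falls below the threshold.
    return sum((d < threshold) != bool(label) for d, label in data)

# "Training": pick the threshold that minimizes errors on the tuning set,
# which is dominated by group A.
best = min((errors(t / 100, tuning), t / 100) for t in range(0, 300))[1]

def error_rate(group):
    test = pairs(group, 1000)
    return errors(best, test) / len(test)

rate_a, rate_b = error_rate("A"), error_rate("B")
print(f"threshold={best:.2f}  error A={rate_a:.3f}  error B={rate_b:.3f}")
```

Running this, the error rate for the underrepresented group B comes out higher than for group A, even though the threshold was chosen to minimize overall error — the minority group simply carries too little weight in the tuning data to move it.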
COMPAS has become the subject of fierce debate and rigorous analysis by journalists at ProPublica and researchers at Stanford, Harvard and Carnegie Mellon, among others—even Equivant itself. The results are often frustratingly inconclusive. No matter how much we know about the algorithms that control our lives, making them “fair” may be difficult or even impossible. Yet as biased as algorithms can be, at least they can be consistent. With humans, biases can vary widely from one person to the next.
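One reason researchers reached such frustrating conclusions is arithmetic, not secrecy: a score can be equally well calibrated for two groups and still flag one group’s non-reoffenders more often, whenever the groups’ score distributions differ. The sketch below demonstrates this with a made-up two-bucket score — the buckets, probabilities and group shares are invented for illustration, not drawn from COMPAS.

```python
from fractions import Fraction as F

# Hypothetical two-bucket risk score, calibrated by construction: a "high"
# score means a 60% chance of reoffending and a "low" score a 20% chance,
# in BOTH groups. The groups differ only in how many people land in each
# bucket. All numbers are invented for illustration.
P_REOFFEND = {"low": F(1, 5), "high": F(3, 5)}
BUCKET_SHARE = {
    "X": {"low": F(1, 2), "high": F(1, 2)},
    "Y": {"low": F(4, 5), "high": F(1, 5)},
}

def false_positive_rate(group):
    # P(scored high | did not reoffend): high-scoring non-reoffenders
    # divided by all non-reoffenders in the group.
    flagged = BUCKET_SHARE[group]["high"] * (1 - P_REOFFEND["high"])
    total = sum(share * (1 - P_REOFFEND[bucket])
                for bucket, share in BUCKET_SHARE[group].items())
    return flagged / total

print(false_positive_rate("X"))  # 1/3
print(false_positive_rate("Y"))  # 1/9
```

The score means exactly the same thing for both groups, yet group X’s non-reoffenders are wrongly flagged three times as often as group Y’s. Satisfying both fairness criteria at once would require changing the score itself — which is the sense in which researchers say the definitions of “fair” can be mathematically incompatible.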
As governments and businesses look to algorithms to increase consistency, save money or just manage complicated processes, our reliance on them is starting to worry politicians, activists and technology researchers. The aspects of society that computers are often used to facilitate have a history of abuse and bias: who gets the job, who benefits from government services, who is offered the best interest rates and, of course, who goes to jail.
“Some people talk about getting rid of bias from algorithms, but that’s not what we’d be doing even in an ideal state,” says Cathy O’Neil, a former Wall Street quant turned self-described algorithm auditor, who wrote the book “Weapons of Math Destruction.”
“There’s no such thing as a non-biased discriminating tool, determining who deserves this job, who deserves this treatment. The algorithm is inherently discriminating, so the question is what bias do you want it to have?” she adds.
In early 2018, New York City became the first government in the U.S. to pass a law intended to address bias in the algorithms used by the city. The law does nothing yet beyond creating a task force to study the matter and make recommendations. The first report is expected in December.
New York State’s top insurance regulator clarified in early January that existing laws preventing insurers from discriminating based on race, religion, national origin and more also apply to algorithms that determine life-insurance qualifications and rates by training on homeownership records, internet use and other unconventional data sources.
In Washington state, a bipartisan group introduced a bill intended to “ensure the fair, transparent and accountable use of automated decision systems” in state government by establishing guidelines for procurement and use of automated decision systems. A similar bill has been proposed in Massachusetts.
A bill in Illinois goes the furthest, proposing that in determining creditworthiness or making hiring decisions, predictive algorithms “may not include information that correlates with the race or ZIP code of the applicant.” As written, such a law would affect private businesses.
Determining what biases an algorithm has is very difficult; measuring the potential harm done by a biased algorithm is even harder.
An increasingly common type of algorithm predicts whether parents will harm their children, basing its predictions on whatever data is at hand. If a parent is low-income and has used government mental-health services, that parent’s risk score goes up. But for another parent who can afford private health insurance, the data is simply unavailable. This creates an inherent (if unintended) bias against low-income parents, says Rashida Richardson, director of policy research at the nonprofit AI Now Institute, which provides feedback and relevant research to governments working on algorithmic transparency.
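The data-availability gap Ms. Richardson describes is easy to see in a toy point-based score. The flags and weights below are hypothetical — they are not taken from any deployed child-welfare model — but they show how two parents with identical histories can receive different scores purely because only one history is visible in public records.

```python
# Hypothetical point-based risk score: each flag found in a parent's public
# records adds a fixed weight. Weights and flag names are invented for
# illustration; they do not reflect any real system.
WEIGHTS = {"public_mental_health_services": 2, "public_benefits": 1}

def risk_score(public_records):
    """Sum the weights of whatever flags appear in the visible records."""
    return sum(WEIGHTS.get(flag, 0) for flag in public_records)

# Two parents with the same underlying history: both received mental-health
# treatment. Only the low-income parent's treatment shows up in public data;
# the other paid through private insurance, so the system sees nothing.
low_income = {"public_mental_health_services", "public_benefits"}
privately_insured = set()  # same treatment, invisible to the system

print(risk_score(low_income), risk_score(privately_insured))  # prints "3 0"
```

The model never mentions income, yet income determines which records exist — so the bias rides in on the data rather than the code.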
The irony is that, in adopting these modernized systems, communities are resurfacing debates from the past, when the biases and motivations of human decision makers were called into question. Ms. Richardson says panels that determine the bias of computers should include not only data scientists and technologists, but also legal experts familiar with the rich history of laws and cases dealing with identifying and remedying bias, as in employment and housing law.
Companies are waking up to the impending regulatory and compliance burden. Rentlogic, a firm that rates New York City apartments by health and safety standards, has employed Ms. O’Neil as an algorithm auditor, in order to build trust with customers and prepare for future regulation. Eventually there may be something like a chief compliance officer at big companies like Google, says Rentlogic chief executive Yale Fox.
Lawmakers are also thinking about this burden. New York City’s original algorithmic transparency law was watered down, says James Vacca, the former councilman who authored the law, because some city officials were concerned about scaring away software vendors. Similar concerns at state agencies have made it unlikely that Washington’s bill will pass in its current form, says Shankar Narayan, a former Microsoft and Amazon lawyer who’s now director of the Technology & Liberty Project at ACLU Washington.
For Mr. Loomis, the sentencing algorithm did get an audit—in the form of his appeal to the state’s supreme court. His lawyers, the prosecutors and their expert witnesses debated the merits of his sentence. Ultimately, the court decided his sentence was in line with what a human judge would have handed down without help from an algorithm.
He’s due to be released this year.
Appeared in the March 23, 2019, print edition as ‘Our Software Is Biased, Too.’