Tentatively excellent news! The FTC has declared that it is serious about racist algorithms and will hold businesses legally accountable for using them. In a friendly-reminder-type announcement today, the agency said that businesses selling and/or using racist algorithms could feel the full force of its legal might.
“Fortunately, while the sophisticated technology may be new, the FTC’s attention to automated decision making is not,” FTC staff attorney Elisa Jillson wrote in a statement on Tuesday, adding that the agency “has decades of experience” enforcing laws that racist algorithms violate. Jillson writes that selling and/or using racially biased algorithms could qualify as unfair or deceptive practices under the FTC Act, and reminds businesses that racial discrimination (by algorithm or human) could also violate the Fair Credit Reporting Act and the Equal Credit Opportunity Act.
The effects of algorithmic racial bias and automated white favoritism spill out far beyond the types of products Facebook serves us. Racist algorithms have been shown to disproportionately deny Black people recommendations for specialized healthcare programs. They have charged Black and Latinx mortgage borrowers higher interest rates than white borrowers with the same credit scores. They have drastically exaggerated Black defendants’ risk of recidivism, which can sway sentencing and bail decisions. They have steered police toward neighborhoods based on past arrest records, perpetuating further disproportionate arrests in Black communities. The list goes on.
Government use of racist algorithms makes the “selling” part especially important. The FTC can’t try the cops, but it might be able to go after a company that misrepresented its tool as race-neutral.
Given the endless churn of stories about the racist results of facial recognition, it could seem that the FTC is equipping itself to practically annihilate the technology. In an email to Gizmodo, an FTC spokesperson explained where a seller crosses the line: if a company “misleads consumers (whether they are businesses or individuals) about (for example) what an algorithm can do, the data it is built from, or the results it can deliver, the FTC may challenge that as a deceptive practice.”
That’s a big deal! Most algorithms that sort through personal data do deliver discriminatory results, and companies tend not to admit it. But proving those results is hard, because companies also tend to avoid letting anyone look under the hood, forcing investigative journalists and researchers to piece together clues after the damage is done. (See most of the links above.)
That caginess would likely stall an FTC complaint against an “unfair” practice. The commission would have to undertake the time-consuming work of proving that the algorithm itself directly harms consumers (in the spokesperson’s example, that it “compromises consumers’ ability to get credit, housing, jobs”).
In other words, no one knows the full extent of racist algorithms’ damage, and the FTC urges businesses to hold themselves accountable, or the FTC “will do it for you.” Read: the FTC will come for you, even if you’re a small-potatoes Honda dealership.
The FTC knows businesses will still lie, so the announcement also reminds us that the agency filed a complaint against Facebook alleging, among other things, that the company knowingly deceived users about facial recognition. That resulted in a $5 billion settlement, which the FTC celebrated as “history-making” but Democrats complained was wildly insufficient to make Facebook feel any pain.
On a more hopeful note, the FTC could spread some of the regulatory responsibility around. The spokesperson noted that the Consumer Financial Protection Bureau also enforces the Fair Credit Reporting Act and the Equal Credit Opportunity Act. The Department of Health and Human Services and the Department of Justice, too, could pursue discrimination cases.
Here’s hoping they follow through and drive a hard bargain. People are getting sick and locked up.