Advances in artificial intelligence (AI) technology are expected to revolutionize our approach to medicine, finance, business operations, media, and more. But research highlights that ostensibly “neutral” technology can have troubling consequences, including discrimination based on race or other legally protected classes. For example, COVID-19 predictive models can help health systems combat the virus by efficiently allocating ICU beds, ventilators, and other resources. But as a recent study in the Journal of the American Medical Informatics Association suggests, if the data used by these models reflects existing racial bias in health care delivery, AI meant to benefit all patients could instead worsen health care disparities for people of color.
The question, then, is how do we take advantage of artificial intelligence without inadvertently introducing bias or other unfair outcomes? Fortunately, while the sophisticated technology may be new, the FTC’s focus on automated decision-making is not. The FTC has decades of experience enforcing three laws that are important to AI developers and users:
- Section 5 of the Federal Trade Commission Act. The Federal Trade Commission Act prohibits unfair or deceptive practices. This would include, for example, selling or using racially biased algorithms.
- Fair Credit Reporting Act. The FCRA comes into play in certain situations where algorithms are used to deny people employment, housing, credit, insurance, or other benefits.
- Equal Credit Opportunity Act. ECOA makes it illegal for companies to use biased algorithms that result in credit discrimination based on race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.
The FTC also uses its expertise in these laws to report on big data analytics and machine learning; to hold hearings on algorithms, artificial intelligence, and predictive analytics; and to issue business guidance on artificial intelligence and algorithms. This work, combined with the FTC’s enforcement actions, provides important lessons for the truthful, fair, and equitable use of artificial intelligence.
Start with the right foundation. With its arcane terminology (think: “machine learning,” “neural networks,” and “deep learning”) and vast data-processing capabilities, artificial intelligence can seem almost magical. But there’s no mystery about the right place to start with artificial intelligence: a solid foundation. If a data set lacks information about a specific group of people, using that data to build an AI model may produce results that are unfair or inequitable to legally protected groups. From the beginning, consider how to improve the data set, design the model to account for data gaps, and, in light of any shortcomings, limit where or how the model is used.
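For illustration only, here is a minimal sketch in Python (using pandas) of the kind of pre-training check this paragraph describes: flagging groups that are thin or missing in a data set before a model is built. The column name, the threshold, and the toy data are hypothetical assumptions, not anything the FTC prescribes.

```python
# Illustrative sketch only: flag underrepresented groups in a training set
# before building a model. The "race" column, the 5% threshold, and the toy
# data are hypothetical assumptions, not FTC requirements.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          min_share: float = 0.05) -> pd.DataFrame:
    """Count each group and flag any whose share of the data falls below min_share."""
    counts = df[group_col].value_counts(dropna=False)
    report = pd.DataFrame({"count": counts, "share": counts / len(df)})
    report["underrepresented"] = report["share"] < min_share
    return report

if __name__ == "__main__":
    # Toy data: group "C" makes up only 2% of the records.
    data = pd.DataFrame({"race": ["A"] * 900 + ["B"] * 80 + ["C"] * 20})
    print(representation_report(data, "race"))
```

If a group comes back flagged, the remedies are the ones described above: improve the data set, design the model to account for the gap, or limit where and how the model is used.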
Watch out for discriminatory results. Every year the FTC holds PrivacyCon to showcase cutting-edge developments in privacy, data security, and artificial intelligence. During PrivacyCon 2020, researchers presented findings showing that algorithms developed for benign purposes, such as medical resource allocation and advertising, can actually lead to racial bias. How can you reduce the risk that your company becomes the example of a well-intentioned algorithm that perpetuates racial injustice? It’s important to test your algorithm, both before you use it and periodically afterward, to make sure it doesn’t discriminate on the basis of race, gender, or another protected class.
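As a rough illustration of that kind of outcome testing, the sketch below (Python/pandas, with hypothetical data and group labels) compares a model’s favorable-decision rates across groups and flags any group whose rate falls below 80% of the most favored group’s rate. The 80% cutoff is the common “four-fifths” screening heuristic, not a legal standard.

```python
# Illustrative sketch only: compare a model's positive-decision rates across
# groups. The 0.8 cutoff is the common "four-fifths" screening heuristic, not
# a legal standard; the decisions and group labels here are hypothetical.
import pandas as pd

def outcome_test(decisions: pd.Series, groups: pd.Series) -> pd.DataFrame:
    """Selection rate per group, ratio to the highest-rate group, and a flag."""
    rates = decisions.groupby(groups).mean().rename("selection_rate").to_frame()
    rates["impact_ratio"] = rates["selection_rate"] / rates["selection_rate"].max()
    rates["flagged"] = rates["impact_ratio"] < 0.8
    return rates

if __name__ == "__main__":
    # Toy decisions (1 = favorable outcome) for two groups, A and B.
    preds = pd.Series([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])
    grp = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
    print(outcome_test(preds, grp))
```

Running a check like this before deployment, repeating it on live decisions afterward, and investigating any flags is one way to put the periodic testing described above into practice.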
Embrace transparency and independence. Who discovered the racial bias in the medical algorithm described at PrivacyCon 2020 and later published in Science? Independent researchers uncovered it by examining data provided by a large academic hospital. In other words, it was the hospital’s transparency and the researchers’ independence that brought the bias to light. As your company develops and uses artificial intelligence, consider how to achieve transparency and independence – for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection.
Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the Federal Trade Commission Act, your statements to business customers and consumers must be truthful, non-deceptive, and backed by evidence. In the rush to embrace new technology, be careful not to overpromise what your algorithm can deliver. For example, suppose an AI developer tells customers that its product will provide “100% bias-free hiring decisions,” but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination – and an enforcement action by the Federal Trade Commission.
Be honest about how you use data. In last year’s guidance on artificial intelligence, we advised companies to be careful about how they obtain the data that powers their models. We pointed to the FTC’s complaint against Facebook, which alleges that the social media giant misled consumers by telling them they could opt in to the company’s facial recognition algorithm, when in fact Facebook was using their photos by default. The FTC’s recent action against app developer Everalbum reinforces that point. According to the complaint, Everalbum used photos uploaded by app users to train its facial recognition algorithms. The FTC alleged that the company deceived users about their ability to control the app’s facial recognition feature and misrepresented users’ ability to delete their photos and videos after deactivating their accounts. To deter future violations, the proposed order would require the company to delete not only the unlawfully obtained data but also the facial recognition models or algorithms developed with users’ photos or videos.
Do more good than harm. Simply put, under the FTC Act, a practice is unfair if it causes more harm than good. Suppose your algorithm allows companies to target the consumers most interested in buying their products. Sounds like a straightforward benefit, right? But if the model pinpoints those consumers by considering race, color, religion, and sex, the result is digital redlining (similar to the Department of Housing and Urban Development’s 2019 case against Facebook). If your model causes more harm than good – that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits to consumers or to competition – the FTC can challenge the use of that model as unfair.
Hold yourself accountable – or be prepared for the FTC to do it for you. As we have pointed out, it is important to hold yourself accountable for your algorithm’s performance. Our advice on transparency and independence can help you do just that. But remember, if you don’t hold yourself accountable, the FTC may do it for you. For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA. Whether caused by a biased algorithm or by more ordinary human misconduct, the FTC takes allegations of credit discrimination very seriously, as its recent action against Bronx Honda demonstrates.
As your company enters the new world of artificial intelligence, make sure your practices are grounded in the Federal Trade Commission’s established consumer protection principles.