University of Wisconsin–Madison

Preventing Racial Bias in Federal AI

Type: Policy Brief or Report
Year: 2020
Level: National
State(s): All States
Policy Areas: Data & Technology, Public Safety
While Artificial Intelligence (AI) is often viewed as a neutral technological tool that brings efficiency, objectivity, and accuracy, choices about its design, implementation, and use can embed existing racial inequalities and produce racially biased systems with harmful consequences. This problem is especially consequential in the criminal justice system, where the federal government increasingly uses AI to replace or support judicial decision-making. This report examines the primary causes of racially biased AI systems across their development, deployment, and use, and suggests responses to ensure that federal agencies realize the benefits of AI while protecting against racially disparate impacts. It recommends three actions agencies can take to prevent racial bias: 1) increase racial diversity among AI designers, 2) implement AI impact assessments, and 3) establish procedures for staff to contest automated decisions.
