AI is Infringing on Your Civil Rights. Here’s How We Can Stop That


Searching for an apartment online, applying for a loan, going through airport security, or looking up a question on a search engine – these may seem like mundane, everyday exchanges, but in many of them you are actually interacting with artificial intelligence (AI).

Avoiding AI in our quotidian activities feels impossible nowadays, especially now that public and private organizations use it to make decisions about us in hiring, housing, welfare, budgeting, and other high-stakes areas. While proponents of AI boast about how efficient the technology is, the decisions it makes about us are often discriminatory and nearly impossible to contest, and they infringe on our civil rights.

However, inequity and injustice from artificial intelligence need not be our status quo. Senator Ed Markey and Congresswoman Yvette Clarke have just reintroduced the AI Civil Rights Act of 2025, which would help ensure AI developers and deployers do not violate our civil rights. The ACLU strongly urges Congress to pass this bill so we can prevent AI systems from undermining the equal opportunities our civil rights laws secured decades ago.


Why Do We Need the AI Civil Rights Act of 2025?

The AI Civil Rights Act shores up existing civil rights laws so that their protections clearly apply to artificial intelligence.

Whether you look at the Civil Rights Act of 1964, the Fair Housing Act, the Voting Rights Act, the Americans with Disabilities Act, or a multitude of other civil rights statutes, current civil rights laws may not be easily enforced against discriminatory AI. In many cases, individuals may not even know AI was used, deployers may not be aware of its discriminatory impact, and developers may not have tested the AI model for discriminatory harms. By covering AI harms in several consequential areas – employment, education, housing, utilities, health care, financial services, insurance, criminal justice, identity verification, and government welfare benefits – the AI Civil Rights Act provides interlocking protections against discrimination, along with testing protocols and notice requirements, for people whose civil rights are eroded by AI systems.


Ensuring AI Doesn't Become a Tool for Discrimination

One of the most important aspects of the AI Civil Rights Act is that it will allow us to better defend against discriminatory AI outputs. A decision from an AI model can often appear objective, but when you open up the algorithm, it can have a disparate impact on protected groups. Disparate impact, in the context of artificial intelligence, is a form of discrimination in which an AI model disproportionately harms one group over another in its decision-making; it has been documented in health care, financial services, education, criminal justice, and other significant areas.

Unfortunately, disparate impact claims can be onerous to bring. For one, to prevail on a disparate impact claim, plaintiffs must statistically demonstrate that an algorithm disproportionately harms a protected group and that a less discriminatory practice exists. Meeting this burden becomes even harder when AI companies refuse to disclose their algorithms for evaluation by claiming they are "trade secrets." For another, not all civil rights laws give people a private right of action to file a disparate impact claim, and President Donald Trump is steadily rolling back the use of disparate impact in civil rights enforcement. This continual weakening of disparate impact protections makes it even more difficult to file AI-related discrimination claims.

To help with this, the AI Civil Rights Act addresses algorithmic discrimination by making it explicitly unlawful for AI developers or deployers to offer, license, promote, sell, or use an algorithm that causes or contributes to a disparate impact in critical life areas like housing and employment. Centering disparate impact in the AI Civil Rights Act ensures that concrete protections exist for individuals affected by discriminatory AI models.


Transparency and Accountability in AI Systems

Beyond safeguarding against AI-powered discrimination with disparate impact protections, the AI Civil Rights Act gives us the transparency we desperately need from AI developers and deployers. The act requires developers, deployers, and independent auditors to conduct pre-deployment evaluations, impact assessments, and annual reviews of their algorithms. These evaluations will be critical in determining whether a model harms people's civil rights and whether, and in which sectors, it can be deployed at all.

The AI Civil Rights Act also brings clarity to the long-debated question of who should be held accountable for the civil rights harms caused by algorithmic systems. If passed, the act will make developers and deployers responsible for taking reasonable steps to ensure their AI models do not violate our civil rights. These steps include documenting any harms that can arise from a model, being fully transparent with independent auditors, consulting with stakeholders impacted by AI models, ensuring that the benefits of using an algorithm outweigh its harms, and more. Developers and deployers found violating the act risk civil penalties, fees, and other consequences through federal, state, and individual enforcement. The accountability mechanisms in the act are pivotal to empowering individuals against algorithmic harm while making clear to AI developers and deployers that it is their duty to build low-risk models.


What's Next?

If we want our AI systems to be safe, trustworthy, and non-discriminatory, the AI Civil Rights Act is how we start.

“AI is shaping access to opportunity across the country,” says Cody Venze, ACLU senior policy counsel. “‘Black box’ systems make decisions about who gets a loan, receives a job offer, or is eligible for parole, often with little understanding of how those decisions are made. The AI Civil Rights Act makes sure that AI systems are transparent and give everyone a fair chance to compete.”
