AI is Infringing on Your Civil Rights. Here’s How We Can Stop That


Searching for an apartment online, applying for a loan, going through airport security, looking up a question on a search engine – these may seem like mundane, everyday exchanges, but in many of them you are actually interacting with artificial intelligence (AI).

Avoiding AI in our quotidian activities feels nearly impossible nowadays, especially now that public and private organizations use it to make decisions about us in hiring, housing, welfare, budgeting, and other high-stakes areas. While proponents of AI boast about the technology's efficiency, the decisions it makes about us are often incontestable and discriminatory, and they infringe on our civil rights.

However, inequity and injustice from artificial intelligence need not be our status quo. Senator Ed Markey and Congresswoman Yvette Clarke have just re-introduced the AI Civil Rights Act of 2025, which will help ensure AI developers and deployers do not violate our civil rights. The ACLU strongly urges Congress to pass this bill, so we can prevent AI systems from undermining the equal opportunities our civil rights laws secured decades ago.


Why Do We Need the AI Civil Rights Act of 2025?

The AI Civil Rights Act shores up existing civil rights laws so that their protections clearly apply to artificial intelligence.

Whether you look at the Civil Rights Act of 1964, the Fair Housing Act, the Voting Rights Act, the Americans with Disabilities Act, or a multitude of other civil rights statutes, current civil rights laws may not be easily enforced against discriminatory AI. In many cases, individuals may not even know AI was used, deployers may not be aware of its discriminatory impact, and developers may not have tested the model for discriminatory harms. By covering AI harms in several consequential areas – employment, education, housing, utilities, health care, financial services, insurance, criminal justice, identity verification, and government welfare benefits – the AI Civil Rights Act provides interlocking protections for people whose civil rights are eroded by AI systems: anti-discrimination rules, testing protocols, and notice requirements across numerous sectors.


Ensuring AI Doesn't Become a Tool for Discrimination

One of the most important aspects of the AI Civil Rights Act is that it will allow us to better defend against discriminatory AI outputs. A decision from an AI model can often appear objective, but when you open up the algorithm, it can have a disparate impact on protected groups. Disparate impact, in the context of artificial intelligence, is a form of discrimination in which an AI model's decision making disproportionately harms one group over another; it has been documented in health care, financial services, education, criminal justice, and other significant areas.

Unfortunately, disparate impact claims can be onerous to bring. For one, to prevail on a disparate impact claim, plaintiffs must statistically demonstrate both that an algorithm disproportionately harms a protected group and that a less discriminatory alternative exists. Meeting this burden becomes even harder when AI companies refuse to disclose their algorithms for evaluation by claiming they are "trade secrets." For another, not all civil rights laws give people a private right of action to file a disparate impact claim, and President Donald Trump is steadily rolling back the use of disparate impact in civil rights enforcement. This continual weakening of disparate impact protections makes it even more difficult to bring AI-related discrimination claims.

To address this, the AI Civil Rights Act makes it explicitly unlawful for AI developers or deployers to offer, license, promote, sell, or use an algorithm that causes or contributes to a disparate impact in critical life areas like housing and employment. Centering disparate impact in the AI Civil Rights Act ensures that concrete protections exist for individuals affected by discriminatory AI models.


Transparency and Accountability in AI Systems

Beyond safeguarding against AI-powered discrimination with disparate impact protections, the AI Civil Rights Act gives us the transparency we desperately need from AI developers and deployers. The act requires developers, deployers, and independent auditors to conduct pre-deployment evaluations, impact assessments, and annual reviews of their algorithms. These evaluations will be critical to determining whether a model harms people's civil rights and where, if at all, it can be deployed in a specific sector.

The AI Civil Rights Act also brings clarity to the long-debated question of who should be held accountable for the civil rights harms caused by algorithmic systems. If passed, the act will make developers and deployers responsible for taking reasonable steps to ensure their AI models do not violate our civil rights. These steps can include documenting any harms that can arise from the model, being fully transparent with independent auditors, consulting with stakeholders impacted by AI models, guaranteeing that the benefits of using an algorithm outweigh the harms, and more. Developers and deployers found violating the act risk civil penalties, fees, and other consequences at the federal, state, and individual levels. The act's accountability mechanisms are pivotal to empowering individuals against algorithmic harm while ensuring that AI developers and deployers understand that it is their duty to build low-risk models.


What's Next?

If we want our AI systems to be safe, trustworthy, and non-discriminatory, the AI Civil Rights Act is how we start.

“AI is shaping access to opportunity across the country,” says Cody Venze, ACLU senior policy counsel. “‘Black box’ systems make decisions about who gets a loan, receives a job offer, or is eligible for parole, often with little understanding of how those decisions are made. The AI Civil Rights Act makes sure that AI systems are transparent and give everyone a fair chance to compete.”
