
AI is Infringing on Your Civil Rights. Here’s How We Can Stop That


Searching for an apartment online, applying for a loan, going through airport security, or looking up a question on a search engine: you might think nothing of these mundane exchanges, but in many of these instances you are actually interacting with artificial intelligence (AI).

Avoiding AI in our quotidian activities feels impossible nowadays, especially now that public and private organizations use it to make decisions about us in hiring, housing, welfare, budgeting, and other high-stakes areas. While proponents of AI boast about the technology's efficiency, the decisions it makes about us are often impossible to contest, discriminatory, and an infringement of our civil rights.

However, inequity and injustice from artificial intelligence need not be our status quo. Senator Ed Markey and Congresswoman Yvette Clarke have just reintroduced the AI Civil Rights Act of 2025, which will help ensure AI developers and deployers do not violate our civil rights. The ACLU strongly urges Congress to pass this bill, so we can prevent AI systems from undermining the equal opportunities that civil rights laws secured for us decades ago.


Why Do We Need the AI Civil Rights Act of 2025?

The AI Civil Rights Act shores up existing civil rights laws so that their protections clearly apply to artificial intelligence.

Whether you are looking at the Civil Rights Act of 1964, the Fair Housing Act, the Voting Rights Act, the Americans with Disabilities Act, or a multitude of other civil rights statutes, current civil rights laws may not be easily enforced against discriminatory AI. In many cases, individuals may not even know AI was used, deployers may not be aware of its discriminatory impact, and developers may not have tested the AI model for discriminatory harms. By covering AI harms in several consequential areas (employment, education, housing, utilities, health care, financial services, insurance, criminal justice, identity verification, and government welfare benefits), the AI Civil Rights Act provides interlocking protections: anti-discrimination rules, testing protocols, and notice requirements across numerous sectors for people whose civil rights are eroded by AI systems.


Ensuring AI Doesn't Become a Tool for Discrimination

One of the most important aspects of the AI Civil Rights Act is that it will allow us to better defend against discriminatory AI outputs. A decision from an AI model can often appear objective, but when you open up the algorithm, it can have a disparate impact on protected groups. Disparate impact, in the context of artificial intelligence, is a form of discrimination in which an AI model disproportionately harms one group over another in its decision-making; it has been documented in health care, financial services, education, criminal justice, and other significant areas.

Unfortunately, disparate impact claims can be onerous to bring. For one, to prevail on a disparate impact claim, plaintiffs must statistically demonstrate that an algorithm disproportionately harms a protected group and that a less discriminatory alternative exists. Meeting this burden becomes even harder when AI companies refuse to disclose their algorithms for evaluation by claiming they are "trade secrets." For another, not all civil rights laws give people a private right of action to bring a disparate impact claim, and President Donald Trump is steadily rolling back the use of disparate impact in civil rights enforcement. This continual weakening of disparate impact protections makes it even more difficult to bring AI-related discrimination claims.

To address this, the AI Civil Rights Act makes it explicitly unlawful for AI developers or deployers to offer, license, promote, sell, or use, in critical life areas like housing and employment, an algorithm that causes or contributes to a disparate impact. Centering disparate impact in the AI Civil Rights Act ensures that concrete protections exist for individuals affected by discriminatory AI models.


Transparency and Accountability in AI Systems

Beyond safeguarding against AI-powered discrimination through disparate impact protections, the AI Civil Rights Act gives us the transparency we desperately need from AI developers and deployers. The act requires developers, deployers, and independent auditors to conduct pre-deployment evaluations, impact assessments, and annual reviews of their algorithms. These evaluations will be critical in determining whether a model harms people's civil rights and whether, and where, it can be deployed in a specific sector.

The AI Civil Rights Act also brings clarity to the long-debated question of who should be held accountable for the civil rights harms caused by algorithmic systems. If passed, the act will make developers and deployers the parties responsible for taking reasonable steps to ensure their AI models do not violate our civil rights. These steps can include documenting any harms that can arise from a model, being fully transparent with independent auditors, consulting with stakeholders affected by AI models, ensuring that the benefits of using an algorithm outweigh its harms, and more. Developers and deployers found violating the act risk civil penalties, fees, and other consequences at the federal, state, and individual levels. The act's accountability mechanisms are pivotal to empowering individuals against algorithmic harm while making clear to AI developers and deployers that it is their duty to build low-risk models.


What's Next?

If we want our AI systems to be safe, trustworthy, and non-discriminatory, the AI Civil Rights Act is how we start.

“AI is shaping access to opportunity across the country,” says Cody Venze, ACLU senior policy counsel. “‘Black box’ systems make decisions about who gets a loan, receives a job offer, or is eligible for parole, often with little understanding of how those decisions are made. The AI Civil Rights Act makes sure that AI systems are transparent and give everyone a fair chance to compete.”
