
Your Questions Answered: Where We Are on AI Regulation, and Where We Go From Here


Whether you encounter it in your daily life or never think about it at all, artificial intelligence (AI) affects us all. From applying for a loan to sitting at the doctor’s office, AI systems are often used behind the scenes to make real-world decisions — and impact us in ways that aren’t disclosed upfront.


Yet despite the growing reach of AI and the diversity of tools and systems it encompasses, regulations governing how it is developed and deployed and how impacted people are informed remain worryingly sparse. Left unregulated, these systems can infringe on your ability to control your data or reinforce discrimination in hiring and employment practices. As the civil rights implications become more serious, strengthening protections is no longer optional.

While policymakers and advocates must do more, existing local, state, and federal laws already offer some protection against discrimination, including digital discrimination. As part of our “Your Questions Answered” series, we asked four ACLU experts to break down what you need to know about your digital rights today, the current state of AI policy, and where regulation may be headed next.

Why is there a need for more regulation in how AI is used?

AI is often used to make decisions about our lives without transparent disclosure. For example, when you apply for a loan or submit a job application, banks or employers might use AI to analyze your materials before a real person ever does. At the doctor’s office, your provider may use an AI scribe to take notes on your conversation. And government agencies are using AI and other automated systems to make crucial decisions about who gets benefits and what those benefits are. AI should be held to strict standards when dealing with people’s lives.

— Olga Akselrod, senior counsel, ACLU Racial Justice Program

What specific harms to our civil liberties might the unregulated use of AI worsen?

Without careful oversight, AI systems used for decision-making have been shown to perpetuate existing systemic inequalities. We’ve seen that when AI tools are used to screen job applications or assess prospective employees, they can unfairly discriminate against people of color, people with disabilities, neurodiverse people, and people from low-income backgrounds. The use of AI in areas like hiring, housing, and policing means that you can be denied a job or an apartment, or even wrongfully arrested when facial recognition systems, which suffer from serious racial bias and are often deployed without appropriate safeguards, misidentify suspects in criminal investigations.

None of this is an accident, and it’s not unavoidable. An incredibly diverse set of tools and systems is often categorized as “AI,” and the civil rights implications of these systems depend on the context in which they are used. While some of these systems may be used in relatively benign ways, in other instances, biased AI systems create serious risks of discriminating against real people in life-altering situations. The people, companies, and institutions developing and deploying AI systems are responsible for enabling these biases, but stricter policies and regulation can hold them accountable for their impact and ensure that these practices do not continue.

— Marissa Gerchick, data science manager & algorithmic justice specialist

How can policymakers and advocates address the real-world challenges emerging from the use of AI?

In our new report with researchers from Brown University's Center for Technological Responsibility, we highlight the wide range of AI regulations proposed by policymakers across the country. These include bills that regulate the use of AI in specific areas like education or elections, as well as broader proposals that extend existing civil rights protections to AI used in high-stakes areas.

Our report also shows how advocates and policymakers can carefully apply computational tools to spot trends and track similarities across the growing AI policy landscape.
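As a concrete illustration of this kind of computational analysis, the sketch below compares the vocabulary of two short bill excerpts using Jaccard similarity (share of words the texts have in common). The excerpts and the similarity measure are illustrative assumptions for this post, not the actual methodology or data of the ACLU/Brown report.

```python
# Hypothetical sketch: flagging textually similar AI bills across jurisdictions.
import re

def tokenize(text):
    """Lowercase a legislative text and extract its unique word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard_similarity(a, b):
    """Fraction of unique words shared by both texts, from 0.0 to 1.0."""
    tokens_a, tokens_b = tokenize(a), tokenize(b)
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

# Invented excerpts standing in for bill text from two different states.
bill_a = "An act regulating automated decision systems used in employment."
bill_b = "An act regulating automated decision systems used in elections."

print(round(jaccard_similarity(bill_a, bill_b), 2))
```

A real analysis would use richer representations (for example, TF-IDF vectors or sentence embeddings over full bill texts), but even a simple lexical measure like this can surface clusters of near-identical model legislation across states.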

Our research also identified two key challenges that emerge when conducting computational AI policy analysis, and we propose recommendations to address them:

  1. We urge researchers and policy staff to work together to create standardized formats and structures for legislative texts across jurisdictions to facilitate computational analysis of data.
  2. We encourage researchers and advocates to incorporate a multilingual perspective when analyzing AI legislation introduced in regions under U.S. jurisdiction. Leveraging language technologies tailored to specific languages and legal contexts, while engaging with native speakers and regional AI policy experts, would provide insights into the diverse approaches to AI policy.

While our report focuses on AI legislation, our findings and recommendations can be applied to other policy areas seeing a growth of bills across jurisdictions, and can thus help to understand and strengthen emerging legislation.

— Evani Radiya-Dixit, algorithmic justice fellow

What digital rights do I have when automated tools are used to make decisions about me?

Whether decisions are made by a human or AI, longstanding federal anti-discrimination laws continue to prohibit discrimination in hiring and employment based on race or ethnicity, sex, sexual orientation or gender identity, disability, and other protected characteristics. In addition to federal protections, a growing number of states have passed laws regulating how employers and third-party vendors collect, use, and share your personal data during hiring. These laws give you greater control over your information and more transparency about whether automated systems are evaluating you — and how those systems may influence employment decisions.

— Cody Venzke, senior policy counsel, National Political Advocacy

You can learn more about digital discrimination and your digital rights when searching or applying for jobs at our Know Your Rights page.
