Your Questions Answered: Where We Are on AI Regulation, and Where We Go From Here


Whether you encounter it in your daily life or never think about it at all, artificial intelligence (AI) affects us all. From applying for a loan to sitting at the doctor’s office, AI systems are often used behind the scenes to make real-world decisions — and impact us in ways that aren’t disclosed upfront.

Yet despite the growing reach of AI and the diversity of tools and systems it encompasses, regulations governing how it is developed and deployed and how impacted people are informed remain worryingly sparse. Left unregulated, these systems can infringe on your ability to control your data or reinforce discrimination in hiring and employment practices. As the civil rights implications become more serious, strengthening protections is no longer optional.

While policymakers and advocates must do more, existing local, state, and federal laws already offer some protection against discrimination, including digital discrimination. As part of our “Your Questions Answered” series, we asked four ACLU experts to break down what you need to know about your digital rights today, the current state of AI policy, and where regulation may be headed next.

Why is there a need for more regulation in how AI is used?

AI is often used to make decisions about our lives without transparent disclosure. For example, when you apply for a loan or submit a job application, banks or employers might use AI to analyze your materials before a real person ever does. At the doctor’s office, your provider may use an AI scribe to take notes on your conversation. And government agencies are using AI and other automated systems to make crucial decisions about who gets benefits and what those benefits are. AI should be held to strict standards when dealing with people’s lives.

— Olga Akselrod, senior counsel, ACLU Racial Justice Program

What specific harms to our civil liberties might the unregulated use of AI worsen?

Without careful oversight, AI systems used for decision-making have been shown to perpetuate existing systemic inequalities. We’ve seen that when AI tools are used to screen job applications or assess prospective employees, they can unfairly discriminate against people of color, people with disabilities, neurodiverse people, and people from low-income backgrounds. The use of AI in areas like hiring, housing, and policing means that you can be denied a job or an apartment, or even wrongfully arrested when facial recognition systems, which suffer from serious racial bias and are often used without appropriate safeguards, misidentify suspects in criminal investigations.

None of this is an accident, and none of it is unavoidable. An incredibly diverse set of tools and systems is often categorized as “AI,” and the civil rights implications of these systems depend on the context in which they are used. While some of these systems may be used in relatively benign ways, in other instances, biased AI systems create serious risks of discriminating against real people in life-altering situations. The people, companies, and institutions developing and deploying AI systems are responsible for enabling these biases, but stricter policies and regulation can hold them accountable for their impact and ensure that these practices do not continue.

— Marissa Gerchick, data science manager & algorithmic justice specialist

How can policymakers and advocates address the real-world challenges emerging from the use of AI?

In our new report with researchers from Brown University's Center for Technological Responsibility, we highlight the wide range of AI regulations proposed by policymakers across the country. These include bills that regulate the use of AI in specific areas like education or elections, as well as broader proposals that expand civil rights protections already applicable to AI uses in high-stakes areas.

Our report also shows how advocates and policymakers can carefully apply computational tools to spot trends and track similarities across the growing AI policy landscape.

Our research also surfaced two key challenges that emerge when conducting computational AI policy analysis, and we propose recommendations to address them:

  1. We urge researchers and policy staff to work together to create standardized formats and structures for legislative texts across jurisdictions to facilitate computational analysis of data.
  2. We encourage researchers and advocates to incorporate a multilingual perspective when analyzing AI legislation introduced in regions under U.S. jurisdiction. Leveraging language technologies tailored to specific languages and legal contexts, while engaging with native speakers and regional AI policy experts, would provide insights into the diverse approaches to AI policy.

While our report focuses on AI legislation, our findings and recommendations can be applied to other policy areas seeing a growth of bills across jurisdictions, helping advocates understand and strengthen emerging legislation.

— Evani Radiya-Dixit, algorithmic justice fellow

What digital rights do I have when automated tools are used to make decisions about me?

Whether decisions are made by a human or AI, longstanding federal anti-discrimination laws continue to prohibit discrimination in hiring and employment based on race or ethnicity, sex, sexual orientation or gender identity, disability, and other protected characteristics. In addition to federal protections, a growing number of states have passed laws regulating how employers and third-party vendors collect, use, and share your personal data during hiring. These laws give you greater control over your information and more transparency about whether automated systems are evaluating you — and how those systems may influence employment decisions.

— Cody Venzke, senior policy counsel, National Political Advocacy

You can learn more about digital discrimination and your digital rights when searching or applying for jobs at our Know Your Rights page.
