Artificial intelligence is being deployed across the economy at a pace and scale we have never seen before, with little regulation, almost no public input, and most of the benefits flowing to the same handful of corporations and ultra-wealthy investors who have captured so much of the rest of our economy. The harms, meanwhile, are spreading rapidly: workers losing jobs, creators having their work stolen, communities seeing their water and electricity diverted to data centers, kids and vulnerable adults harmed by chatbots designed to maximize engagement, and a federal government rushing to integrate AI into everything from warfare to immigration enforcement before anyone has figured out how to keep it accountable.
This is the wild west of AI, with the people most likely to be harmed having the least say in what gets built and how it gets used. We need strong, immediate regulation to prevent harm, to mitigate the damage already being done, and to ensure the benefits of AI flow to working people rather than only to the Epstein class building these systems on the rest of our backs.
On day one of his second term, Donald Trump revoked the modest AI safeguards established under the previous administration and signaled to the entire industry that the United States would impose no meaningful guardrails on AI development. Since then, his administration has done everything in its power to accelerate AI deployment while gutting the agencies that might have regulated it. Federal contracts have been steered to politically connected AI companies. Massive new data center construction has been incentivized regardless of local environmental and infrastructure costs. The administration has pushed efforts to preempt state-level AI regulation, leaving the country with no enforceable rules at any level.
What we have is a captured market dressed up as a free market, with ordinary people paying the price for it.
AI data centers consume enormous quantities of electricity and water. The boom in AI development is driving construction of massive new facilities across the country. Some of these data centers are causing brownouts and elevated electricity costs for residential customers in their service areas. Others are draining aquifers and stressing municipal water supplies in regions already facing climate-driven water scarcity. Many are being built on sweetheart deals with utilities and local governments, with the public footing the bill for transmission upgrades, water infrastructure, and the increased emissions that come with surging power demand.
I am opposed to the construction of any new AI data centers until sweeping regulations are in place that put residential water, electricity, and public safety first. That includes:
A requirement that data center operators prove their facilities will not harm surrounding communities before construction begins, including binding limits on water and energy use during drought, peak demand, and emergency conditions.
Full polluter-pays accountability for environmental damage, infrastructure costs, and elevated utility rates caused by data center construction. The companies profiting from these facilities should bear the costs, not residents and small businesses in the surrounding area.
Mandatory disclosure of energy and water consumption, emissions, and supply chain impacts, so that communities can make informed decisions about whether to host these facilities at all.
Strict limits on the use of public subsidies, tax breaks, and infrastructure investments to attract data centers, which currently extract enormous public concessions for facilities that produce few local jobs and substantial local harm.
The promise of generative AI has been packaged as productivity and innovation. The reality, for many workers, is wage compression, mass layoffs, and the elimination of jobs that supported families. AI is replacing customer service workers, writers, artists, designers, paralegals, translators, coders, and increasingly white-collar professionals across the economy. The workers bearing the cost of this transition were not consulted, are not being meaningfully retrained, and have no share in the productivity gains AI is generating for the corporations deploying it.
I am opposed to the use of generative AI to replace American workers. AI should be used to enhance existing workflows and to take on the tasks that genuinely benefit from automation, with displaced workers retrained, supported, and given a real share in any productivity gains. As long as the alternative remains poverty for displaced workers, AI-driven labor automation cannot be allowed to proceed unchecked.
I support a federal universal basic income, part of my broader economic platform, to provide every person a stable floor in an economy where the labor market is being transformed faster than any safety net was designed to handle. UBI works alongside good jobs and labor protections, ensuring no one is left destitute by changes they did not choose.
I also support holding AI companies accountable for the creative and intellectual work they have used without permission to train their models. Writers, artists, musicians, journalists, photographers, and countless other creators have had their work scraped, ingested, and repurposed by AI systems whose business models depend on that uncompensated labor. AI companies must compensate every creator whose work was used without permission, and they must obtain meaningful consent for any future use. Existing copyright and intellectual property law has to apply, with real enforcement, to companies that have spent years operating as if it did not.
AI features are being pushed into every piece of consumer software, every website, and every mobile application, often without consent and frequently without any way to disable them. People who do not want their email summarized by AI, their search results filtered through AI, their photos processed by AI, or their personal data fed into AI systems are increasingly unable to find products and services that simply work without it.
I support requiring every company to allow consumers to opt out of any and all AI features in software, websites, and mobile applications. The opt-out should be easy to find and fully functional. Users should be able to keep using the products and services they have paid for, exactly as they are, without being forced to participate in AI experiments they did not consent to.
This is part of a broader principle. Consumers and workers should have meaningful agency over how AI is used in the contexts that affect their lives. Consent has to mean something more than a buried checkbox in a terms-of-service update.
AI companies must be held fully responsible for the outputs of their models and for the safety of the users who interact with them. This is especially urgent for children, teenagers, and other vulnerable populations who are increasingly using AI chatbots, companions, and tools that have already been linked to serious harm.
Cases have already emerged of AI systems contributing to suicides, fueling eating disorders, encouraging self-harm, and reinforcing dangerous delusions in users experiencing mental health crises. These are predictable outcomes of products designed to maximize engagement, deployed without meaningful safety testing, and aimed at populations that include children. The companies responsible have escaped accountability so far because the legal framework for AI liability is decades behind the technology.
In Congress, I will fight to:
Establish clear federal liability for harm caused by AI systems, with consequences that scale to the size and reach of the companies deploying them.
Require meaningful safety testing and external auditing of AI systems before deployment, particularly for systems aimed at or accessible to minors.
Mandate clear verification and additional protections for AI products marketed to or used by children and teens, with strict limits on data collection and engagement-maximizing design.
Require AI systems to clearly disclose their nature to users, so that no one mistakes a chatbot for a human therapist, friend, or medical professional.
Empower the FTC, state attorneys general, and private parties to bring meaningful enforcement actions against companies whose products cause harm.
I am opposed to any effort to remove human beings from targeting decisions in warfare or law enforcement, and I am against the use of AI to wage wars without strong human oversight and full accountability for the results. A person must always be responsible for the use of lethal force. That principle is not negotiable, regardless of how cheap, fast, or efficient the alternative looks to military planners and defense contractors.
Beyond warfare, federal agencies are increasingly using AI to make consequential decisions about housing, healthcare, immigration enforcement, criminal justice, and benefits eligibility. Too often these systems have been deployed with no transparency, no appeal process, and documented patterns of bias against the same communities our institutions have always failed. I will fight to:
Ban autonomous weapons systems and require meaningful human control over any use of force, in line with the principles long advocated by the international disarmament community.
Require human review and clear appeal rights for any federal agency decision affecting individual rights, benefits, or liberty in which AI plays a meaningful role.
Mandate algorithmic accountability and transparency for AI systems used by the federal government, with binding standards for bias testing, performance auditing, and public disclosure.
Take concrete and definitive action to prevent the loss of control over AI systems whose behavior is not yet well understood, including the largest and most general-purpose models being developed today.
The technology is advancing faster than our ability to govern it, and the gap is widening. Closing that gap is one of the central regulatory challenges we face.
A democratic society cannot make good decisions about a technology its citizens do not understand. AI is currently being marketed to the public through a veil of hype, mystification, and corporate PR, with the result that many Americans either dramatically overestimate AI's capabilities or accept claims about its inevitability that the evidence does not actually support.
I support major federal investment in AI literacy and public education, so that workers, students, parents, voters, and communities can understand what AI actually is, what it can and cannot do, where its real risks lie, and how to evaluate the claims being made by the companies selling it. Informed people make better choices, both individually and through the political system. The companies that benefit from public confusion will not provide that education, which means the public sector has to.
Technology is shaped by the political choices we make about it. The version of AI we currently have, dominated by a small number of corporations, deployed without consent or accountability, and producing concentrated wealth alongside diffuse harm, is not the only version possible. A different path, one that puts ordinary people first, requires sustained federal action, real corporate accountability, and the willingness to say no to the people who profit from the status quo.
My Checklist for AI
Rescind Trump administration rules blocking AI regulation
Require AI companies to pay for polluting the environment
Block construction of new data centers until environmental impacts are fully assessed
Make AI companies pay creators whose work was taken without permission
Require opt-out options for users of all platforms using AI
Establish clear federal liability for harm caused by AI systems
Require transparency and disclosure of all AI agents, including customer service chatbots