Anthropic, the artificial intelligence company, announced on Thursday a $20 million investment in the 2026 midterm election races through a newly formed group called Public First Action. The group's mission is to defend states' authority to write their own AI regulations, directly opposing efforts by the Trump White House and OpenAI's political operation to establish federal control over AI policy nationwide.
The funding battle is starkly uneven. Public First Action faces Leading the Future, a group backed by OpenAI president Greg Brockman and tech investor Marc Andreessen that has raised $125 million since its August 2025 launch. Andreessen's venture firm, a16z, holds a stake in OpenAI, deepening the rivalry between the two AI developers.
President Trump's December executive order escalated the conflict. The order directs federal agencies to build a national AI framework with minimal rules and use it to override stricter state regulations. It also creates a Justice Department task force to challenge state AI laws in court, threatening states with the loss of federal funding for rules deemed too strict. Trump's AI advisor, David Sacks, has specifically criticized Colorado's AI Act as "probably the most excessive."
Several state laws are at stake in 2026. Colorado's AI Act, delayed until June 30, 2026, will require companies building "high-risk" AI systems to prevent algorithmic discrimination. California's Transparency in Frontier AI Act took effect on January 1, 2026, as part of a package of seven AI laws passed in 2025. Texas has banned AI use for certain purposes through its Responsible AI Governance Act.
The political spending war reflects a deep ideological split in Silicon Valley over AI oversight. Anthropic, founded by Dario and Daniela Amodei, former OpenAI employees who left over safety concerns, advocates for stronger rules to prevent harm. In its announcement, Anthropic stated, "The companies building AI have a responsibility to help ensure the technology serves the public good, not just their own interests." OpenAI and its supporters favor lighter regulation to accelerate innovation. Earlier this year, OpenAI reportedly asked the Trump administration to block state AI rules in exchange for government access to its models, arguing that fragmented laws would damage U.S. AI leadership.
The outcome of the midterm elections could determine the regulatory landscape for AI development in the United States. If candidates backed by Public First Action win enough seats, they could block federal preemption bills and preserve the state-by-state approach. If they fall short, Trump's executive order gives federal agencies immediate tools to challenge state laws.