Anthropic Invests $20M in Midterm Races to Defend State AI Laws, Clashing with OpenAI and Trump Administration

Feb 13, 2026, 8:21 a.m.

Key takeaways:

  • Political AI funding battles signal regulatory uncertainty that may impact tech sector valuations.
  • Anthropic's stance suggests potential for stricter AI rules, contrasting with OpenAI's innovation-focused approach.
  • Investors should monitor 2026 election outcomes for clues on future AI policy direction.

Anthropic, the artificial intelligence company, announced on Thursday a $20 million investment in the 2026 midterm election races through a newly formed group called Public First Action. The group's mission is to defend states' authority to write their own AI regulations, directly opposing efforts by the Trump White House and OpenAI's political operation to establish federal control over AI policy nationwide.

The funding battle is starkly uneven. Public First Action faces Leading the Future, a group backed by OpenAI president Greg Brockman and tech investor Marc Andreessen, which has raised $125 million since its August 2025 launch. Andreessen's venture firm, a16z, holds a stake in OpenAI, intensifying the rivalry between the two AI developers.

President Trump's December executive order escalated the conflict. The order directs federal agencies to build a national AI framework with minimal rules and use it to override stricter state regulations. It also creates a Justice Department task force to challenge state AI laws in court, threatening states with the loss of federal funding for rules deemed too strict. Trump's AI advisor, David Sacks, has specifically criticized Colorado's AI Act as "probably the most excessive."

Several state laws are at stake in 2026. Colorado's AI Act, delayed until June 30, 2026, will require companies building "high-risk" AI systems to prevent algorithmic discrimination. California's Transparency in Frontier AI Act took effect on January 1, 2026, as part of a package of seven AI laws passed in 2025. Texas has banned AI use for certain purposes through its Responsible AI Governance Act.

The political spending war reflects a deep ideological split in Silicon Valley over AI oversight. Anthropic, founded by Dario and Daniela Amodei, former OpenAI employees who left over safety concerns, advocates for stronger rules to prevent harm. In its announcement, Anthropic stated, "The companies building AI have a responsibility to help ensure the technology serves the public good, not just their own interests." OpenAI and its supporters favor lighter regulation to accelerate innovation. Earlier this year, OpenAI reportedly asked the Trump administration to block state AI rules in exchange for government access to its models, arguing that fragmented laws would damage U.S. AI leadership.

The outcome of the midterm elections could determine the regulatory landscape for AI development in the United States. If candidates backed by Public First Action win enough seats, they could block federal preemption bills, preserving the state-by-state approach. Otherwise, Trump's executive order provides federal agencies with immediate tools to challenge state laws.

Disclaimer

The content on this website is provided for information purposes only and does not constitute investment advice, an offer, or professional consultation. Crypto assets are high-risk and volatile — you may lose all funds. Some materials may include summaries and links to third-party sources; we are not responsible for their content or accuracy. Any decisions you make are at your own risk. Coinalertnews recommends independently verifying information and consulting with a professional before making any financial decisions based on this content.