3.5.3 Interest Groups – Significance, resources, tactics and debates about impact on democracy
In March 2026, Anthropic, the San Francisco-based artificial intelligence company behind the Claude AI model, filed two separate lawsuits against the Trump administration after the Department of Defense designated it a “supply chain risk” and ordered federal agencies to cease all use of its technology. The designation followed months of behind-the-scenes negotiations over Anthropic’s $200 million Pentagon contract, signed in July 2025, which had made Claude the first AI model deployed across the US government’s classified networks. When contract renegotiations collapsed, President Trump directed federal agencies to stop using Anthropic’s products, and Defence Secretary Pete Hegseth declared that any contractor or supplier doing business with the US military was barred from working with the company.
The dispute centred on two conditions Anthropic refused to abandon. The company insisted that Claude would not be used for fully autonomous weapons systems or for the mass domestic surveillance of American citizens. The Pentagon, by contrast, demanded unrestricted access to the model for all lawful military purposes, arguing that it could not allow a private company to insert itself into the chain of command by restricting what the military could or could not do with technology it had procured. OpenAI moved swiftly to fill the void, accepting the terms Anthropic had rejected and prompting a public war of words between the two companies’ chief executives over AI safety and commercial compromise.
The legal and commercial consequences have been significant. Anthropic’s lawsuits argue that the “supply chain risk” designation is unprecedented and unlawful, noting that the label has historically been applied only to foreign adversaries. Defence technology firms across the sector began directing employees to switch away from Claude. Analysts warned of short-term disruption for major partners such as Palantir, which had integrated Claude into its tools for intelligence and defence agencies. More than thirty employees from rival firms, including Google DeepMind and OpenAI, filed a legal brief in support of Anthropic, arguing that punishing one leading AI company would damage American competitiveness more broadly.
This episode illustrates the growing power of corporate interest groups in shaping technology policy. Anthropic has effectively acted as a well-resourced insider group, lobbying through litigation, public statements, and industry coalitions to defend its preferred policy position on AI governance. The case demonstrates how interest groups can constrain executive authority even when facing a politically assertive administration: by litigating aggressively, mobilising allied voices, and framing their cause in terms of democratic principles rather than commercial self-interest. Yet it also exposes the limits of corporate influence when a determined executive branch is willing to deploy its procurement power as a political weapon.