
Pentagon vs Anthropic ‘fight’ may mean these new rules for all AI companies

Left: Pentagon head Pete Hegseth Right: Anthropic CEO Dario Amodei

The US is reportedly preparing new rules for artificial intelligence (AI) companies. The proposed guidelines come amid a dispute between the Pentagon and Claude maker Anthropic over how its technology could be used by the military. Under the new rules, AI companies must allow the US government broad access to their models for “any lawful” use in order to secure federal contracts.

According to a draft of the guidance seen by the Financial Times, the US General Services Administration (GSA) plans to require AI companies working with civilian agencies to grant the government an irrevocable licence to use their systems for all legal purposes. The guidance would apply to civilian contracts and is part of a wider government effort to tighten procurement standards for AI services. The report cited a person familiar with the matter who said similar principles are also being considered by the Pentagon for military contracts.

The policy discussion gained attention after the Department of War said it would cancel a $200 million contract with Anthropic when the company declined to grant unrestricted access to its technology, citing concerns about domestic surveillance and lethal autonomous weapons. The Pentagon also designated Anthropic a supply-chain risk, a classification usually applied to companies linked to countries such as China or Russia. Anthropic thus became the first American company to receive a ‘national security risk’ label from the US government.

Anthropic, a $380 billion AI startup, had argued that its technology could be used for domestic surveillance if handed over for “all lawful use” and sought additional safeguards. Defence secretary Pete Hegseth said the company’s “true objective” was “to seize veto power over the operational decisions of the United States military”.

What else would the GSA guidelines make mandatory for AI companies?

The GSA guidance also requires AI companies that are or will be US military contractors to provide “a neutral, non-partisan tool that does not manipulate responses in favour of ideological dogmas such as diversity, equity and inclusion”. The provision follows an executive order from US President Donald Trump targeting what he described as “woke” AI models.

“The contractor must not intentionally encode partisan or ideological judgments into the AI systems’ data outputs,” the draft guidance noted.

Another clause includes language intended to question compliance with the European Union’s Digital Services Act, according to a person familiar with the matter. Under the provision, AI companies would need to disclose whether their models have been “modified or configured to comply with any non-US federal government or commercial compliance or regulatory framework”.

The General Services Administration (GSA), led by Ed Forst, oversees the procurement of software and technology services for the US federal government. Its subsidiary, the Federal Acquisition Service, headed by former KKR director Josh Gruenbaum, has signed agreements over the past year with AI companies including OpenAI, Meta, xAI and Google to supply their models to US agencies at lower cost.

The GSA terminated its agreement with Anthropic following the dispute involving the Pentagon. The agency will be “soliciting further comments” from industry participants before finalising the new guidelines, the report added, citing the source.


