
Microsoft, Google and xAI to Give Government Early Access to AI Models

May 6, 2026

Microsoft, Google and Elon Musk’s xAI agreed to give the U.S. government early access to new artificial intelligence models for national security testing, as U.S. officials grow alarmed by the hacking capabilities of Anthropic’s newly unveiled Mythos.

The Center for AI Standards and Innovation at the Department of Commerce said on Tuesday that the agreement would allow it to evaluate the models before deployment and conduct research to assess their capabilities and security risks. The agreement fulfills a pledge the Trump administration made in July 2025 to work with technology companies to vet their AI models for “national security risks.”

Microsoft will work with U.S. government scientists to test AI systems “in ways that probe unexpected behaviors,” the company said in a statement. Together they will develop shared datasets and workflows for testing the company’s models, the company said. Microsoft signed a similar agreement with the UK’s AI Security Institute, according to the statement.

Concern is growing in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, U.S. officials are aiming to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed.

Advanced AI systems, including Anthropic’s Mythos, have in recent weeks created a stir globally, including among U.S. officials and corporate America, over their ability to supercharge hackers.

“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” CAISI Director Chris Fall said in a statement.

The move builds on previous agreements with OpenAI and Anthropic, established in 2024 under the Biden administration when CAISI was known as the U.S. Artificial Intelligence Safety Institute. Under former President Joe Biden, the institute focused on developing AI tests, definitions and voluntary safety standards. It was led by Biden tech adviser Elizabeth Kelly, who has since joined Anthropic, according to her LinkedIn profile.

CAISI, which serves as the government’s main hub for AI model testing, said it had already completed more than 40 evaluations, including on cutting-edge models not yet available to the public.

Developers frequently hand over versions of their models with safety guardrails stripped back so the center can probe for national security risks, the agency said.

xAI did not immediately respond to a request for comment. Google declined to comment.

Last week, the Pentagon said it had reached agreements with seven AI companies to deploy their advanced capabilities on the Defense Department’s classified networks as it seeks to broaden the range of AI providers working across the military.

The Pentagon announcement did not include Anthropic, which has been embroiled in a dispute with the Pentagon over guardrails on the military’s use of its AI tools.

