Vetting Frontier AI for Security Risks
US Agency Signs AI Vetting Deals with Tech Giants
CAISI partners with Google, Microsoft, and xAI to evaluate frontier models for national security risks before release.
A digital blue security shield icon overlaid on a modern government building, representing the US AI vetting agreements with major tech companies.
Photo: Avantgarde News
The U.S. Department of Commerce’s Center for AI Standards and Innovation (CAISI) has signed agreements with Google DeepMind, Microsoft, and xAI [1]. The partnerships allow the agency to evaluate frontier AI models for security risks before public release [2]. They follow earlier agreements with OpenAI and Anthropic focused on national security testing [1][3].
The initiative focuses on identifying national security threats, specifically in the areas of cybersecurity and biosecurity [1]. Experts will test how these advanced models might be misused to enable cyberattacks or biological threats [2]. Officials aim to set industry benchmarks for safe AI development [3].
Editorial notes
Transparency note
AI-assisted drafting. Human edited and reviewed.
- AI assisted: Yes
- Human review: Yes
- Last updated:
Risk assessment
Reviewed for sourcing quality and editorial consistency.
Sources
1. nist.gov: https://www.nist.gov/news-events/news/2026/05/caisi-signs-agreements-regarding-frontier-ai-national-security-testing
2. cio.com: https://www.cio.com/article/4168122/us-government-agency-to-safety-test-frontier-ai-models-before-release.html
3. seekingalpha.com: https://seekingalpha.com/news/4585585-google-microsoft-xai-agree-to-give-us-early-access-to-evaluate-ai-models
About the author
The Avantgarde News Desk covers frontier AI security vetting and provides editorial analysis for Avantgarde News.