US government secures early access to AI models for review
The US government has established agreements with major tech firms for early review of AI models, an initiative aimed at addressing potential security risks associated with advanced AI technologies. (Sources: The Guardian, BBC, Al Jazeera, Forbes, The Hill)

The US government has reached agreements with Microsoft, Google DeepMind, and xAI to review early versions of their AI models. This effort focuses on identifying cybersecurity, biosecurity, and chemical weapons risks.
- Agreements involve Microsoft, Google DeepMind, and xAI providing early access to their AI models.
- The reviews will focus on assessing risks related to cybersecurity, biosecurity, and chemical weapons.
- These agreements build on previous pacts established during the Biden administration.
Why it matters
This initiative reflects the government's effort to ensure the safety and security of emerging AI technologies.
Why this is on ModernAction
3 bills on this issue are moving right now, and the most active one is the Artificial Intelligence Risk Evaluation Act of 2025.
S2938 · 119th Congress
Artificial Intelligence Risk Evaluation Act of 2025
Where do you stand on this bill?
Takes about 60 seconds
About this bill
What S2938 actually does
This story covers US announcements that tech firms will have their AI models reviewed for national security risks before release. This bill would establish a DOE Advanced Artificial Intelligence Evaluation Program to test advanced AI systems and require compliance before deployment.
If passed, it would:
- Create a DOE program for adversarial and safety testing of AI
- Require compliance before deployment and allow civil penalties
2 other bills moving on this issue
Take action on any of them individually.
This bill would create a NIST pilot program of AI testbeds to develop measurement standards and formalize stakeholder review of test results.
If passed, it would:
- Set up NIST AI testbeds to develop evaluation standards
- Create a stakeholder review loop around test outcomes
This bill would direct NIST to develop voluntary technical guidelines for AI testing, evaluation, validation, and verification, creating a common baseline for assurance.
If passed, it would:
- Direct NIST to create voluntary technical guidelines for AI assurance
- Create a common baseline for independent evaluation and verification
