Advanced AI Security Readiness Act
H.R. 3919 – Advanced AI Security Readiness Act to create an AI security playbook at NSA
119th Congress
This bill tells the National Security Agency (NSA) to create an “AI Security Playbook” to protect certain powerful AI technologies from theft by well‑resourced threats. It focuses on advanced AI systems that could cause serious national security risks if stolen. The bill sets timelines, reporting rules, and requires both classified and public guidance.
- Bill Number: H.R. 3919
- Chamber: House
What This Bill Does
The bill orders the Director of the National Security Agency, acting through the NSA's Artificial Intelligence Security Center or its successor, to develop a set of strategies called the "AI Security Playbook." The playbook must focus on defending "covered AI technologies" from theft by nation-states or other highly resourced actors. Covered AI technologies are advanced AI systems that the Director determines would pose a grave national security threat if stolen, such as AI that can match or exceed human experts in areas like chemical, biological, radiological, and nuclear (CBRN) issues, cyber offense, persuasion, research and development, or self-improvement.

The playbook must first identify potential vulnerabilities in advanced AI data centers and among advanced AI developers, with a focus on cybersecurity and other risks that differ from those of conventional IT systems. It must identify which AI components or information, if accessed, would significantly advance a threat actor's own AI progress, including models, key model components, training methods, system engineering, and other core insights. It must then lay out strategies to detect, prevent, and respond to cyber threats targeting these covered AI technologies.

The playbook also has to identify what levels of security would require major U.S. government involvement in building or overseeing very advanced AI systems, and analyze how the government would be involved in reaching those security levels, including a description of a hypothetical program to build AI systems in a highly secure government environment. This analysis covers cybersecurity protocols, protection of model weights, insider-threat mitigation, vetting and clearances, access control, counterintelligence and anti-espionage measures, and emergency response plans. The bill states that describing these security levels does not itself give the government any new regulatory or enforcement powers.
The playbook must include detailed methods and intelligence assessments, which may be classified, plus an unclassified portion with general guidelines and best practices that can be shared with relevant people, including in the private sector. To develop it, the NSA Director must engage with leading AI developers and researchers by reviewing industry documents, interviewing experts, hosting roundtables, and visiting AI facilities. The Director must also collaborate with at least one federally funded research and development center that has already studied how to secure AI models from nation-states and other highly resourced actors.

The bill sets deadlines for reports to the House and Senate Intelligence Committees. Within 90 days of enactment, the Director must report on progress, remaining sections, and early insights. Within 270 days, the Director must submit a final report on the playbook that includes an unclassified version suitable for sharing with relevant individuals and a publicly available version, and may include a classified annex.
Why It Matters
The bill focuses on protecting very capable AI systems and their key components from theft by foreign governments or other powerful groups. These systems could help with tasks such as cyber offense, advanced research, or handling dangerous materials, so their theft could affect national security and global stability. By directing the NSA to map out vulnerabilities and defenses, the bill aims to make it harder for such actors to copy or misuse U.S.-linked advanced AI.

For AI companies, researchers, and operators of advanced data centers, the playbook could become an important set of security expectations and best practices. Because the bill requires both an unclassified and a public version, some guidance may be shared widely and influence how private organizations secure their models, data, and infrastructure.

The real-world impact will depend on how detailed the playbook is, how the government and industry use it, and whether future laws or regulations build on its findings. The bill does not itself create new regulatory or enforcement powers, but it may shape future debates about how involved the government should be in securing the most advanced AI systems. It could also affect how intelligence agencies, private companies, and research institutions work together to protect sensitive AI technology from theft.
Arguments
Arguments in support
- Helps the U.S. government better understand and address unique security risks of advanced AI systems before they are widely deployed.
- Uses existing NSA structures and expertise, including the AI Security Center and a qualified federally funded research and development center, rather than creating a new agency.
- Encourages cooperation between government and leading AI developers and researchers, which may improve the practical value of the security guidance.
- Provides both classified and unclassified outputs, allowing sensitive details to remain protected while still sharing useful best practices with industry and the public.
- Clarifies that the bill itself does not expand regulatory or enforcement powers, which may ease concerns about sudden new mandates on AI developers.
Arguments against
- Could be seen as an early step toward greater government involvement in advanced AI development, which some may worry could later lead to heavier regulation or oversight.
- May increase compliance expectations or informal pressure on private AI developers and data centers without clear funding or support for meeting higher security standards.
- Focus on national security threats may prioritize defense and intelligence needs over other issues related to AI, such as civil liberties, transparency, or broader societal impacts.
- The use of classified annexes and intelligence processes may limit public visibility into how AI security risks are defined and managed.
- The broad definition of “covered AI technologies” leaves significant discretion to the NSA Director, which some may view as too open‑ended.
Key Facts
- Directs the NSA Director, through the Artificial Intelligence Security Center or its successor, to create an “AI Security Playbook” focused on protecting certain advanced AI technologies from theft.
- Defines “covered AI technologies” as advanced AI with critical capabilities that would pose a grave national security threat if stolen, including systems matching or exceeding human experts in sensitive domains (e.g., CBRN, cyber offense, persuasion, self‑improvement).
- Requires the playbook to identify vulnerabilities in advanced AI data centers and among advanced AI developers, with special attention to risks that differ from conventional IT systems.
- Requires identification of specific AI components and information (such as models, model weights, architectures, and core algorithmic insights) whose compromise would meaningfully advance a threat actor’s own AI capabilities.
- Mandates development of strategies to detect, prevent, and respond to cyber threats and other technology‑theft attempts targeting covered AI technologies.
- Requires analysis of what levels of security would need substantial U.S. government involvement and describes a hypothetical initiative to build highly secure government‑run AI systems, including insider‑threat and counterintelligence measures.
- Specifies that describing higher security levels does not grant or require any new regulatory or enforcement actions by the U.S. government.
- Requires both classified content (in a possible annex) and unclassified guidance suitable for sharing with relevant private‑sector and other non‑government stakeholders.
- Directs the NSA to engage with prominent AI developers and researchers via document reviews, expert interviews, roundtables, and facility visits, and to collaborate with a federally funded research and development center experienced in AI security.
- Sets deadlines: an initial progress report to the House and Senate Intelligence Committees within 90 days of enactment, and a final report on the playbook within 270 days, including an unclassified and a publicly available version (plus an optional classified annex).
Gotchas
- The bill explicitly states that identifying security levels needing substantial government involvement does not by itself authorize or require any regulatory or enforcement actions, limiting immediate legal effects.
- It requires a hypothetical description of building AI systems in a highly secure government environment, which could later serve as a reference model even though no such program is created by the bill.
- Activities such as expert roundtables and panels are specifically exempted from being treated as a formal advisory committee under the Federal Advisory Committee Act, reducing procedural requirements for those engagements.
- Public and unclassified versions of the final report mean that at least some AI security guidance developed by the NSA will be accessible beyond government and intelligence communities.
