NIST Unveils Initial Draft of Cyber AI Profile

The Launch of NIST’s Cyber AI Profile: A Game-Changer for AI Risk Management

On December 16, 2025, the National Institute of Standards and Technology (NIST) released the preliminary draft of its Cyber AI Profile (NIST IR 8596), which gives organizations guidance on managing the risks associated with artificial intelligence (AI) tools. As AI moves from experimentation to an integral part of daily operations at many U.S. businesses, the profile addresses both the cybersecurity risks and the opportunities the technology introduces.

The Role of NIST in Cybersecurity and AI

NIST has a long track record in cybersecurity guidance, and its work is pivotal as organizations move from legacy systems to AI-driven ones. The Cyber AI Profile is closely aligned with NIST’s Cybersecurity Framework (CSF) 2.0; by extending that existing framework, it offers a consolidated approach to AI-related risks while acknowledging the unique challenges AI entails. The document opens a 45-day comment window for stakeholders, ending on January 30, 2026, and NIST will integrate the feedback it receives before issuing a public draft.

The Growing Importance of AI in Businesses

As AI technology becomes a staple in products and workflows, organizations are increasingly embedding AI into their risk management and budgeting processes. This transformation has multifaceted implications, affecting legal, technical, procurement, and governance functions, and it highlights the need for cross-functional collaboration to address the complexities AI introduces, from data usage policies to security requirements. Given that both attackers and defenders use AI to their advantage (attackers scale phishing campaigns and create deepfakes, while defenders enhance threat detection), the stakes are notably high.

The Structure of the Cyber AI Profile

The draft Cyber AI Profile builds upon two foundational NIST frameworks: CSF 2.0 and the AI Risk Management Framework (AI RMF). It synthesizes these frameworks to create a tailored approach to AI-specific risks, thereby allowing organizations to both secure their AI systems and prepare for AI-enabled threats. While it does not define "AI," it provides various examples to clarify what constitutes AI systems, emphasizing that AI encompasses a wide range of applications and models.

Three Practical Focus Areas

The draft recommends organizing work around three practical focus areas:

  1. Securing AI System Components (Secure): This focus area revolves around managing the cybersecurity challenges that arise from integrating AI into existing organizational systems and infrastructures.

  2. Conducting AI-Enabled Cyber Defense (Defend): Here, organizations are encouraged to leverage AI to enhance their cybersecurity efforts, while acknowledging the inherent challenges, such as the necessity for human oversight to comply with regulations and maintain ethical standards.

  3. Thwarting AI-Enabled Cyber Attacks (Thwart): This aspect calls for building resilience against emerging threats and vulnerabilities posed by AI.

Detailed Guidance within a Familiar Framework

To assist organizations in navigating these focus areas, NIST has laid out a series of tables aligned with the six CSF functions: Govern, Identify, Protect, Detect, Respond, and Recover. Each table expands on AI-specific considerations relevant to the three focus areas and assigns a proposed priority level for effective planning.
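For teams that track this planning work programmatically, the tables translate naturally into simple records. The sketch below is illustrative only, assuming an in-house worksheet format: the priority value, the example consideration, and the owner field are hypothetical placeholders rather than entries copied from the draft’s tables.

```python
from dataclasses import dataclass
from enum import Enum


class CsfFunction(Enum):
    GOVERN = "Govern"
    IDENTIFY = "Identify"
    PROTECT = "Protect"
    DETECT = "Detect"
    RESPOND = "Respond"
    RECOVER = "Recover"


class FocusArea(Enum):
    SECURE = "Securing AI System Components"
    DEFEND = "Conducting AI-Enabled Cyber Defense"
    THWART = "Thwarting AI-Enabled Cyber Attacks"


@dataclass
class ProfileEntry:
    """One row of an internal planning worksheet modeled on the draft's tables."""
    csf_function: CsfFunction
    focus_area: FocusArea
    consideration: str  # AI-specific consideration, summarized in-house
    priority: str       # proposed priority level, e.g. "High", "Medium", "Low"
    owner: str          # accountable internal team (an addition, not part of the draft)


# Hypothetical example entry, for illustration only.
example = ProfileEntry(
    csf_function=CsfFunction.IDENTIFY,
    focus_area=FocusArea.SECURE,
    consideration="Inventory deployed models and the datasets used to train them",
    priority="High",
    owner="Security Architecture",
)
```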

Unique Considerations for AI Integration

The draft highlights unique factors that organizations must consider when utilizing AI:

  1. Model and Data Inventories: NIST recommends maintaining comprehensive inventories covering all AI components, from models to datasets, to bolster boundary enforcement and anomaly detection (a minimal sketch of such an inventory record follows this list).

  2. Data Integrity and Provenance: The integrity and provenance of training and input data should be validated as rigorously as organizations already validate traditional software and hardware.

  3. Supply Chain Risk Management: Organizations must extend their risk management protocols to include both model and data supply chains, encompassing AI-specific contract language and continuous monitoring of key suppliers.

  4. Evolving Legal Landscape: The rapidly changing regulatory environment necessitates that organizations remain vigilant about their legal responsibilities related to AI.

  5. Human Oversight: Assigning a human owner for AI system actions and defining who authorizes AI-assisted defense actions are essential for maintaining accountability.

  6. Characterizing AI-enabled Attacks: The draft encourages organizations to include AI-specific attack vectors, such as adversarial inputs and model evasion, in their risk assessments.

  7. Dynamic Risk Assessment: Given the fast-paced evolution of AI capabilities and threats, regular updates to AI-related policies and risk assessments are advised.

  8. Communication Protocols: Establishing dedicated lines of communication for AI-related risks is vital for aligning stakeholders during incidents.

  9. Training and Resources: NIST emphasizes the importance of equipping team members with knowledge about AI capabilities and risks, and of reinforcing this through human resources practices.
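The sketch below illustrates the kind of inventory record and integrity check described in items 1 and 2, with a supplier field that ties into item 3. It is a minimal example under assumed conventions: names such as AiAssetRecord and the use of SHA-256 digests are illustrative choices, not requirements from the draft.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class AiAssetRecord:
    """Inventory entry for one deployed model and its associated datasets."""
    model_id: str
    model_version: str
    supplier: str                      # feeds supply chain monitoring
    training_datasets: dict[str, str]  # dataset name -> expected SHA-256 digest
    human_owner: str                   # person accountable for the system's actions


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a dataset file for provenance checks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def find_tampered_datasets(record: AiAssetRecord, paths: dict[str, str]) -> list[str]:
    """Return names of datasets whose on-disk digest no longer matches the inventory."""
    return [
        name
        for name, expected in record.training_datasets.items()
        if name in paths and sha256_of_file(paths[name]) != expected
    ]
```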

Building a Comprehensive Cyber AI Strategy

NIST’s Cyber AI Profile serves as a valuable resource within its broader Cybersecurity, Privacy, and AI program. It encourages organizations to adapt their risk management strategies to align with the realities posed by AI. At a strategic level, it underscores the importance of leadership accountability and the necessity for cross-functional teamwork.

Operationally, the draft translates these themes into pragmatic actions. Immediate steps might include updating asset inventories, reviewing risk assessments with AI-enabled threats in mind, and defining triggers for more frequent policy updates. The draft also suggests applying guardrails to AI-assisted tools and keeping humans in the loop over the actions those tools take.
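As one illustration of the guardrail and human-oversight point, the sketch below gates an AI-suggested containment action behind an explicit approval step and records who authorized it. The function and field names are hypothetical; this is a pattern sketch, not guidance taken from the draft.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ProposedAction:
    """A defensive action suggested by an AI-assisted tool."""
    description: str   # e.g. "Isolate host 10.0.4.17 from the network"
    confidence: float  # tool-reported confidence, 0.0 to 1.0
    proposed_by: str   # identifier of the AI tool that suggested the action


def requires_human_approval(action: ProposedAction, threshold: float = 1.0) -> bool:
    """Default to always requiring approval; lower the threshold only by explicit policy."""
    return action.confidence < threshold


def execute_with_oversight(action: ProposedAction, approver: str | None) -> dict:
    """Hold AI-assisted defense actions until a named human approves them."""
    if requires_human_approval(action) and approver is None:
        return {"status": "held", "reason": "awaiting human approval"}
    # In a real system, the containment step itself would be invoked here.
    return {
        "status": "executed",
        "approved_by": approver,
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "action": action.description,
    }
```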

Future Guidance from NIST: COSAiS

In conjunction with the Cyber AI Profile, NIST is developing a set of SP 800-53 control overlays known as the Control Overlays for Securing AI Systems (COSAiS). This effort aims to offer implementation-level guidance that complements the Cyber AI Profile’s outcomes-focused approach. While both documents are still open for comment and will continue to evolve, organizations are encouraged to take proactive steps to align their current practices with the guidance available now.

By taking these preparatory measures, businesses can better position themselves to handle the challenges of an increasingly AI-driven cybersecurity landscape. This blending of foresight and preventive action can go a long way in safeguarding organizations against the evolving nature of cyber threats that leverage AI capabilities.
