From:                                         Gideon T. Rasmussen, CISSP, CRISC, CISA, CISM, CIPP

Sent:                                           Sunday, January 4, 2026 7:18 PM

To:                                               cyberaiprofile@nist.gov

Cc:                                               Katerina Megas; Barbara Cuthill; Martin Stanley; Marissa Dotter; Ishika Khemani; Bronwyn Patrick; Noah Schiro; Julie Snyder; Mohammad Zarei; Mary Rasmussen; Kaeli Rasmussen; Hunter Rasmussen

Subject:                                     Feedback: NIST Cyber AI Profile (IR 8596 Initial Preliminary Draft)

 

Cyber AI Profile Team,

 

Thanks for this opportunity to provide feedback. The preliminary draft is well-written. Here is my response to your request for feedback:

 

I. High-level feedback:

 

• Label guidance entries as ‘AI Consumer’ or ‘AI Developer’

- Some organizations are only consumers of AI. They purchase AI software or AI as a service

▪ Therefore, some governance and technical control objectives will not apply

- This change would make it easier for organizations to adopt the profile

• Create one ‘Guidance’ field

- The profile has four fields: General Considerations, Secure, Defend and Thwart

- 4 columns × all NIST CSF subcategories (107) = 428 guidance entries

▪ 151 occurrences of “Standard cybersecurity practices apply.”

- Consolidating guidance into one field makes it easier to adopt the profile

- Enables combining:

▪ NIST CSF v2.0 'Implementation Examples' field and

▪ Cyber AI Profile content into a merged Guidance field

- Control framework owners and GRC companies will thank you

 

Example (merged Guidance field):

Control #: ACME-001

Control Description: Configuration management practices are established and applied

Guidance:

▪ Ex1: Establish, test, deploy, and maintain hardened baselines that enforce the organization’s cybersecurity policies and provide only essential capabilities (i.e., principle of least functionality)

▪ Ex2: Review all default configuration settings that may potentially impact cybersecurity when installing or upgrading software

▪ Ex3: Monitor implemented software for deviations from approved baselines

▪ AI-Ex1: Configure artificial intelligence software to prevent company data from being used to train commercial AI models

NIST CSF v2.0: PR.PS-01

NIST PFW v1.0 / etc.: (additional framework mappings as applicable)

• Reduce the number of entries in the Cyber AI Profile

- My take is that there are 27 AI governance and technical control objectives

▪ Subjective and based upon one person’s opinion

▪ A conservative approach, with relatively few entries

- Document necessary AI governance and technical control objectives

▪ The number of entries in an internal control framework must be kept to a minimum

▪ Companies must “align the organization’s cybersecurity strategy with legal, regulatory, and contractual requirements” (GV.OC-03)

- Fewer entries make it easier to consume the profile, increasing adoption

 

• Establish a revision cycle to keep pace with rapid changes in AI technology

- This document is time-perishable

- A two-year revision cycle is recommended

 

II. Detailed feedback:

 

• GV.OC-03: Legal, regulatory, and contractual requirements regarding cybersecurity — including privacy and civil liberties obligations — are understood and managed

AI-Ex1 Update the organization’s control framework to account for artificial intelligence, sourcing requirements from laws, regulations, and contractual obligations

• GV.RM-01: Risk management objectives are established and agreed to by organizational stakeholders

AI-Ex1 Require human validation before certain tasks can be completed by an AI agent (e.g. when there is risk of AI hallucination or bias)

• GV.PO-01: Policy for managing cybersecurity risks is established based on organizational context, cybersecurity strategy, and priorities and is communicated and enforced

AI-Ex1 Include acceptable use directives for artificial intelligence technology within cybersecurity policy

• GV.SC-05: Requirements to address cybersecurity risks in supply chains are established, prioritized, and integrated into contracts and other types of agreements with suppliers and other relevant third parties

AI-Ex1 Include artificial intelligence cybersecurity requirements within default contract language
AI-Ex2 Require third parties to provide an explainability statement* outlining the critical dimensions of the AI solution
* "The AI explainability statement is a public document released by an AI organization that outlines how its AI algorithms work, its intended use, technology infrastructure, model accuracy, bias detection and mitigation, system maintenance, risk management, ethical principles, and data sources."
Adopting AI Responsibly: Guidelines for Procurement - World Economic Forum

• GV.SC-07: The risks posed by a supplier, their products and services, and other third parties are understood, recorded, prioritized, assessed, responded to, and monitored over the course of the relationship

AI-Ex1 Integrate artificial intelligence cybersecurity requirements within third-party assessment report evaluations and questionnaires

• ID.AM-02: Inventories of software, services, and systems managed by the organization are maintained

AI-Ex1 Include artificial intelligence technology within software inventories 

• ID.AM-08: Systems, hardware, software, services, and data are managed throughout their life cycles

AI-Ex1 Maintain data quality processes to mitigate the risk of AI hallucinations and bias

• ID.RA-01: Vulnerabilities in assets are identified, validated, and recorded

AI-Ex1 Incorporate artificial intelligence into security architecture reviews
AI-Ex2 Conduct human validation of AI generated content prior to use in critical decisions

• ID.RA-05: Threats, vulnerabilities, likelihoods, and impacts are used to understand inherent risk and inform risk response prioritization

AI-Ex1 Conduct risk assessments of AI use cases

• ID.RA-07: Changes and exceptions are managed, assessed for risk impact, recorded, and tracked

AI-Ex1 Incorporate artificial intelligence security requirements into change and project management processes

• ID.IM-02: Improvements are identified from security tests and exercises, including those done in coordination with suppliers and relevant third parties

AI-Ex1 Conduct penetration testing of artificial intelligence systems when they process sensitive data or support mission critical processes

• ID.IM-03: Improvements are identified from execution of operational processes, procedures, and activities

AI-Ex1 Evaluate human validation of AI-enabled processes to confirm controls remain in place and effective (e.g. through annual review)

• ID.IM-04: Incident response plans and other cybersecurity plans that affect operations are established, communicated, maintained, and improved

AI-Ex1 Establish incident response playbook(s) based on artificial intelligence usage and related business processes

• PR.AA-05: Access permissions, entitlements, and authorizations are defined in a policy, managed, enforced, and reviewed, and incorporate the principles of least privilege and separation of duties

AI-Ex1 Maintain role-based access for AI agents based on process and task execution. Restrict access to data, systems, and privileged roles
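To illustrate the intent, a deny-by-default entitlement check for AI agents might look like the following minimal sketch (role, data scope, and tool names are hypothetical):

```python
# Illustrative role-based access control for AI agents (all names hypothetical).
# Each agent role is granted only the data scopes and tools its tasks require.

AGENT_ROLES = {
    "invoice-processor": {"data": {"invoices"}, "tools": {"ocr", "erp_post"}},
    "support-triage":    {"data": {"tickets"},  "tools": {"kb_search"}},
}

def is_authorized(role: str, data_scope: str, tool: str) -> bool:
    """Deny by default: unknown roles, data scopes, or tools are rejected."""
    grants = AGENT_ROLES.get(role)
    if grants is None:
        return False
    return data_scope in grants["data"] and tool in grants["tools"]

# Least privilege: the triage agent can search the knowledge base
# but cannot touch invoice data or post to the ERP system.
print(is_authorized("support-triage", "tickets", "kb_search"))   # True
print(is_authorized("support-triage", "invoices", "erp_post"))   # False
```

In practice these grants would live in the identity provider or agent platform rather than in code, but the deny-by-default shape is the point.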

• PR.AT-01: Personnel are provided with awareness and training so that they possess the knowledge and skills to perform general tasks with cybersecurity risks in mind

AI-Ex1 Training includes proper use of artificial intelligence and related threats and countermeasures (e.g. AI hallucination and bias, human validation of outputs, avoiding publicly hosted GenAI for work activities)

• PR.DS-02: The confidentiality, integrity, and availability of data-in-transit are protected

AI-Ex1 Maintain an internally administered GenAI implementation and block access to commercial GenAI services from the internal network and company-owned systems
AI-Ex2 Prevent sensitive data loss to commercial GenAI services (e.g. within prompts or file uploads)
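As an illustration of AI-Ex2, a pre-submission check might scan prompts for sensitive data patterns before they reach a GenAI service. This is a sketch only; production deployments would use a dedicated DLP engine, and the patterns below are illustrative:

```python
import re

# Hypothetical pattern set; a real DLP engine would use far richer detection.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN format
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # likely payment card number
]

def contains_sensitive_data(prompt: str) -> bool:
    """Return True if the prompt appears to contain sensitive data."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def submit_prompt(prompt: str) -> str:
    """Block prompts that trip a pattern; otherwise forward them onward."""
    if contains_sensitive_data(prompt):
        return "BLOCKED: prompt appears to contain sensitive data"
    return "OK: prompt forwarded to the internally administered GenAI service"
```

The same check applies naturally to file uploads by scanning file contents before transmission.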

• PR.PS-01: Configuration management practices are established and applied

AI-Ex1 Configure artificial intelligence software to prevent company data from being used to train commercial AI models

• PR.PS-04: Log records are generated and made available for continuous monitoring

AI-Ex1 Log all AI agent activities, including inputs, outputs, actions taken and human overrides
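A minimal sketch of the structured record this implies, capturing inputs, outputs, actions, and human overrides (field names are hypothetical; in practice the record would be forwarded to a SIEM or central log pipeline):

```python
import json
from datetime import datetime, timezone

def log_agent_event(agent_id, action, inputs, outputs, human_override=False):
    """Build one structured log record per AI agent action (field names hypothetical)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "outputs": outputs,
        "human_override": human_override,
    }
    # In practice, ship this to the SIEM / log pipeline instead of returning it.
    return json.dumps(record)
```

Logging inputs and outputs alongside the action is what makes later review of agent behavior (and of any human overrides) possible.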

• PR.PS-05: Installation and execution of unauthorized software are prevented

AI-Ex1 Restrict AI agent access to software tools required for process and task execution
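A minimal sketch of runtime enforcement for this objective, restricting an agent to an approved tool list (tool names are hypothetical):

```python
# Hypothetical allowlist of tools this agent may execute; anything else is denied.
APPROVED_TOOLS = {"search_kb", "create_ticket"}

def invoke_tool(tool_name, handler, *args):
    """Execute a tool only if it is on the agent's approved list."""
    if tool_name not in APPROVED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not approved for this agent")
    return handler(*args)

# The agent can search the knowledge base, but an unlisted tool raises an error.
result = invoke_tool("search_kb", lambda q: f"results for {q}", "password reset")
```

Enforcing the allowlist at the invocation layer, rather than trusting the model's own instructions, is what prevents execution of unauthorized software.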

• PR.PS-06: Secure software development practices are integrated, and their performance is monitored throughout the software development life cycle

AI-Ex1 Conduct use and abuse case testing to validate AI agent process and task execution within expected parameters. Test for bias in algorithms and training data

• PR.IR-03: Mechanisms are implemented to achieve resilience requirements in normal and adverse situations

AI-Ex1 Monitor AI agents to detect failed process and task execution. Implement automated response procedures to maintain operations

AI-Ex2 Service delivery does not rely solely on the AI system. If it fails or produces inaccurate results, alternate procedures maintain service levels

• DE.AE-02: Potentially adverse events are analyzed to better understand associated activities

AI-Ex1 Monitor AI agents and alert on unexpected, adverse or harmful behavior

• DE.AE-06: Information on adverse events is provided to authorized staff and tools

AI-Ex1 An operational function continuously monitors AI agents and promptly responds to functionality issues, outages and security alerts

 

Cyber AI Profile Team: Thanks for your service to our InfoSec community! Your hard work helps identify and mitigate risk in many organizations. Feel free to reach out to me with questions or comments.

 

Gideon

 

Gideon T. Rasmussen | CISSP, CRISC, CISA, CISM, CIPP | Management Consultant

Virtual CSO, LLC | www.virtualcso.com | www.gideonras.com

 

The opinions expressed here are my own and not necessarily those of my current or past clients/employers.