A Guide to the NIST AI Risk Management Framework
AI systems are embedded in everything from hiring software to medical diagnostics, so how AI risk is managed is a practical question that affects lives right now. To address it, the National Institute of Standards and Technology (NIST) developed the AI Risk Management Framework (AI RMF).
The AI RMF is a voluntary guide designed to help organizations manage the potential risks associated with artificial intelligence systems throughout their entire lifecycle. The framework aims to promote the responsible design, development, deployment, and use of AI. It provides a common language for technical teams, legal counsel, and executives to discuss and address risk collaboratively. Using the framework to understand and manage AI risk can help organizations prepare for compliance with emerging AI regulations, reduce legal and reputational risk, and cultivate public trust.
The Framework is divided into two parts:
Part 1 sets out the foundations of the framework. It frames risk in the context of AI, identifies the various AI actors across the AI lifecycle, and sets out the characteristics of trustworthy AI systems.
Part 2 comprises the “Core” of the Framework. It describes four specific functions (Govern, Map, Measure, and Manage) to help organizations address the risks of AI systems in practice.
Part 1: Foundational Information
Framing Risk
Risk is defined as a function of the negative impact, or magnitude of harm, that would arise if a circumstance or event occurs, and the likelihood of that occurrence. Measuring AI risk is difficult: there is little consensus on measurement methods, and the complexity of AI systems makes them hard to assess fully.
The AI RMF does not set a standard for acceptable risk. Risk tolerance is specific to each organization and context. Organizations should follow existing sector-specific regulations where they apply, or define their own reasonable tolerance levels where none exist; the AI RMF can then help them manage risk within those tolerances.
As it is impractical to try to eliminate all risk, organizations should prioritize risks based on their potential impact. Systems with the highest potential for severe or imminent harm should receive the most urgent attention and may need to be halted until risks are managed. Even systems deemed "low-risk" require ongoing assessment, as they can have unforeseen downstream consequences.
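The framework does not prescribe how to quantify risk, but the impact-and-likelihood framing lends itself to a simple scoring matrix. The sketch below shows one illustrative approach in Python; the `Risk` class, the 1-to-5 scales, and the multiplicative score are assumptions for demonstration, not part of the AI RMF.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    impact: int      # magnitude of harm: 1 (minor) to 5 (severe)
    likelihood: int  # chance of occurrence: 1 (rare) to 5 (frequent)

    @property
    def score(self) -> int:
        # Illustrative multiplicative score; the AI RMF prescribes no formula.
        return self.impact * self.likelihood

risks = [
    Risk("biased screening of job applicants", impact=5, likelihood=3),
    Risk("chatbot surfaces outdated policy text", impact=2, likelihood=4),
    Risk("gradual model drift degrades accuracy", impact=3, likelihood=3),
]

# Triage: the highest-scoring risks get the most urgent attention.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```

Sorting by score gives a rough triage order; in practice, the severity and imminence of potential harm, not just a numeric score, should drive prioritization.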
AI risk management should be integrated into the organization's broader enterprise risk management strategies. Effective risk management requires a strong organizational culture, clear accountability, and commitment from senior leadership. The AI RMF is a tool, but its success depends on the organization's willingness to establish the right roles, responsibilities, and incentives.
AI Actors Across the AI Lifecycle
The AI RMF’s intended audience includes all AI actors across the AI lifecycle: those who design, develop, deploy, evaluate, and use AI systems. The framework has been designed to be flexible and simple enough for these different actors to apply it throughout the lifecycle.
The Seven Pillars of Trustworthy AI
The framework sets out the characteristics of a trustworthy AI system. Managing these characteristics is a holistic process. Improving one (e.g., privacy) can impact others (e.g., accuracy), and no single characteristic alone guarantees trustworthiness. The goal is to balance these characteristics throughout the AI lifecycle:
- Valid and Reliable: AI systems must be accurate and robust (perform well under a variety of real-world conditions).
- Safe: AI systems must be designed to avoid endangering human life, health, property, or the environment. This is achieved through responsible design, rigorous testing, real-time monitoring, and clear guidelines for safe use.
- Secure and Resilient: AI systems must be protected against attacks (like data poisoning) and be resilient enough to withstand adverse events, maintaining functionality or degrading safely when necessary.
- Accountable and Transparent: Trust requires accountability, which is built on transparency. This means providing appropriate information about how the system was built, how it works, and its intended uses. Transparency is crucial for identifying problems, enabling redress, and fostering confidence.
- Explainable and Interpretable: These traits help users understand the system. Explainability clarifies how a system reached an output, while interpretability clarifies why the output matters in a specific context. Together, they make systems easier to debug, monitor, and trust.
- Privacy-Enhanced: AI systems should be designed to safeguard personal data and individual autonomy using privacy-enhancing technologies. However, strengthening privacy can sometimes involve trade-offs with other goals, like accuracy.
- Fair with Harmful Bias Managed: Fairness is complex and context-dependent. It requires actively managing three categories of harmful bias (see the measurement sketch after this list):
- Systemic bias (in societal structures and data).
- Computational/statistical bias (in datasets and algorithms).
- Human-cognitive bias (in how people design and use AI systems).
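Several of these characteristics only become manageable once they are measured. The following sketch shows two toy measurements in Python: plain accuracy as a basic validity check, and a demographic parity gap as one coarse signal of computational/statistical bias. The metric choices, function names, and data are illustrative assumptions; real evaluations would use richer, context-appropriate metrics.

```python
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of correct predictions: a basic validity check."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.
    A large gap is one coarse signal of computational/statistical bias."""
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

# Toy predictions for applicants from two demographic groups (0 and 1).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(f"accuracy:               {accuracy(y_true, y_pred):.2f}")               # 0.75
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")  # 0.50
```

No single number certifies trustworthiness; metrics like these are inputs to the balancing act the framework describes.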
Part 2: The Core Functions
Part 2 provides actionable guidance through the four core functions that form the basis of the framework. These functions are meant to be applied iteratively and tailored to an organization's specific needs. While Govern applies across all of an organization's AI risk management processes and procedures, Map, Measure, and Manage can be applied in AI system-specific contexts and at specific stages of the AI lifecycle (a minimal sketch of the loop follows the list below).
- Govern: This function focuses on creating a culture of AI risk management. It involves establishing policies, defining roles and responsibilities, and ensuring that AI risk management aligns with an organization's values and strategic goals.
- Map: The goal of this function is to identify and frame the context of AI risks. It involves understanding the potential impacts, stakeholders, and the environment in which the AI system will operate.
- Measure: This function involves analyzing, assessing, and tracking the identified risks. Organizations use a combination of quantitative and qualitative methods to evaluate the performance and potential impacts of their AI systems.
- Manage: This final function is about prioritizing and acting on the identified risks. It involves developing and implementing strategies to mitigate risks and continuously monitoring the system for new or emergent risks.
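To make the interplay concrete, here is a minimal, hypothetical sketch of one pass through the loop for a single system. The policy values, risk descriptions, and scores are invented for illustration; the AI RMF does not define any of these specifics.

```python
def govern() -> dict:
    # Govern: organization-wide policy that frames the other functions.
    return {"risk_tolerance": 0.10, "owner": "AI risk committee"}

def map_risks(system: str) -> list[str]:
    # Map: identify risks in the system's deployment context.
    return [
        f"{system}: disparate error rates across user groups",
        f"{system}: performance drift after deployment",
    ]

def measure(risk: str) -> float:
    # Measure: attach a quantitative estimate (stubbed here; a real
    # assessment would run metrics like those sketched earlier).
    return 0.25 if "disparate" in risk else 0.05

def manage(risk: str, score: float, policy: dict) -> str:
    # Manage: act on risks that exceed the governed tolerance.
    action = "mitigate" if score > policy["risk_tolerance"] else "monitor"
    return f"{action} ({score:.2f}): {risk}"

policy = govern()
for risk in map_risks("resume-screening model"):
    print(manage(risk, measure(risk), policy))
```

In practice the loop repeats: monitoring feeds new findings back into Map, and Govern evolves as organizational policies mature.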
NIST has also published an AI RMF Playbook, part of its Trustworthy and Responsible AI Resource Center, to help organizations navigate the AI RMF through tactical actions they can apply within their own contexts according to their needs and interests.
In conclusion, the NIST AI RMF is not a one-size-fits-all checklist but a flexible guide for building a culture of responsible AI. However, its successful adoption requires more than just following the steps; it demands a genuine organizational commitment. The framework provides the structure, but organizations must supply the ongoing leadership, clear accountability, and supportive culture necessary for truly effective risk management.