Achieving Safe and Secure AI: NIST AI Risk Management Framework (AI RMF 1.0) Explained



The rise of artificial intelligence (AI) has brought a host of new opportunities and challenges. On the one hand, AI has the potential to revolutionize industries and improve people's lives in countless ways. On the other hand, the development and use of AI systems raise significant ethical, legal, and societal concerns.


To help organizations navigate these challenges, the National Institute of Standards and Technology (NIST) has recently published the AI Risk Management Framework (AI RMF 1.0) along with a companion NIST AI RMF Playbook. This framework provides a comprehensive set of guidelines for managing the many risks of AI and promoting trustworthy and responsible development and use of AI systems.


In this article, we will explore the key takeaways from the AI RMF 1.0, including the framework's focus on human-centricity, social responsibility, and sustainability; its voluntary, rights-preserving, and use-case agnostic approach; and its four core functions for addressing AI risks in practice.


The AI RMF 1.0 has two parts. Part 1 explains how organizations can identify AI risks and outlines the target audience. It also looks into the trustworthiness of AI systems, including characteristics such as validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy, and fairness with harmful bias managed.


Part 2 of the Framework, the "Core," includes four functions to help organizations manage AI risks: GOVERN, MAP, MEASURE, and MANAGE. These functions are broken down into specific categories and subcategories, with GOVERN applying to all stages of the AI risk management process and the other functions applied in context-specific and stage-specific situations.


Part 1:

  1. Framing Risk: AI risk management offers a path to minimize potential negative impacts of AI systems, such as threats to civil liberties and rights, while also providing opportunities to maximize positive impacts. Addressing, documenting, and managing AI risks and potential negative impacts effectively can lead to more trustworthy AI systems.

  2. Audience: Identifying and managing AI risks and potential positive and negative impacts requires a broad set of perspectives and actors across the AI lifecycle. Ideally, AI actors will represent a diversity of experience, expertise, and backgrounds and comprise demographically and disciplinarily diverse teams. The AI RMF is intended to be used by AI actors across all stages and dimensions of the AI lifecycle.

  3. AI Risks and Trustworthiness: For AI systems to be trustworthy, they often need to be responsive to an assortment of criteria that are of value to interested parties. Approaches that enhance AI trustworthiness can reduce harmful AI risks.

  4. Effectiveness of the AI RMF: Evaluations of AI RMF effectiveness – including ways to measure bottom-line improvements in the trustworthiness of AI systems – will be part of future NIST activities in conjunction with the AI community.


Part 2:

5. AI RMF Core: The AI RMF Core provides outcomes and actions that enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems. The Core comprises four functions: GOVERN, MAP, MEASURE, and MANAGE. Governance is designed to be a cross-cutting function that informs and is infused throughout the other three functions.
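To make the Core's structure concrete, the functions-to-outcomes hierarchy can be sketched as a simple checklist. This is a minimal illustration, not an official NIST artifact: the outcome descriptions below are hypothetical placeholders, not the framework's actual category text.

```python
# Illustrative sketch of the AI RMF Core as a checklist.
# Outcome wording is placeholder text, not official NIST category language.
CORE_FUNCTIONS = {
    "GOVERN": ["Risk management policies are in place"],  # cross-cutting
    "MAP": ["System context and intended uses are documented"],
    "MEASURE": ["Identified risks are tracked and assessed"],
    "MANAGE": ["Risks are prioritized and responded to"],
}

def open_items(status: dict) -> list:
    """Return the Core outcomes not yet marked complete."""
    return [outcome for outcome, done in status.items() if not done]

# Start with nothing complete, then mark the GOVERN outcome done.
status = {o: False for outcomes in CORE_FUNCTIONS.values() for o in outcomes}
status["Risk management policies are in place"] = True
print(open_items(status))  # the three remaining MAP/MEASURE/MANAGE outcomes
```

A flat structure like this also reflects the framework's point that GOVERN is cross-cutting: governance outcomes sit alongside, and inform, the outcomes of the other three functions.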

6. AI RMF Profiles: AI RMF use-case profiles are implementations of the AI RMF functions, categories, and subcategories for a specific setting or application, based on the requirements, risk tolerance, and resources of the Framework user; examples include an AI RMF hiring profile or an AI RMF fair housing profile. Profiles help organizations decide how best to manage AI risk in a way that aligns with their goals, accounts for legal and regulatory requirements and best practices, and reflects their risk management priorities.
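One way to think about a use-case profile is as a validated selection from the framework's subcategories. The hedged sketch below models that idea; the subcategory identifiers follow the framework's "FUNCTION n.n" numbering style but are chosen here for illustration only.

```python
# Hedged sketch: a use-case profile as a selection of subcategory IDs.
# The IDs mimic the AI RMF's numbering style but are illustrative only.
ALL_SUBCATEGORIES = {
    "GOVERN 1.1", "GOVERN 1.2", "MAP 1.1", "MEASURE 1.1", "MANAGE 1.1",
}

def build_profile(selected: set) -> set:
    """Check that a profile selects only subcategories that exist."""
    unknown = selected - ALL_SUBCATEGORIES
    if unknown:
        raise ValueError(f"Unknown subcategories: {sorted(unknown)}")
    return set(selected)

# A hypothetical hiring profile: the subset relevant to that use case.
hiring_profile = build_profile({"GOVERN 1.1", "MAP 1.1", "MANAGE 1.1"})
```

The design choice here mirrors the framework's intent: the full set of subcategories stays fixed, while each profile narrows it according to the use case's requirements and risk tolerance.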

The AI RMF is intended to be practical, to adapt to the AI landscape as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities, so that society can benefit from AI while also being protected from its potential harms.


The AI RMF 1.0 is a valuable resource for organizations of all sizes and in all sectors designing, developing, deploying, or using AI systems. By providing a framework for managing AI risks and promoting trustworthy and responsible development and use of AI systems, the AI RMF can help organizations harness AI's full potential while mitigating its potential harms.

It is important to note that the framework is voluntary, rights-preserving, and use-case agnostic, giving organizations flexibility in how they implement its approaches.


Don't navigate the complexities of AI risk management alone. Contact Aspire Cyber for expert guidance on adopting the NIST AI Risk Management Framework (AI RMF 1.0) and ensure the responsible development and use of AI systems.
